If you try to use more than one migration method (for example, automatic migration then bulk user import), you may encounter a `DUPLICATED_USER` error. This error indicates that the user exists in Auth0's internal user store but not in your tenant. To correct this error, delete the user with the Auth0 Management API Delete a Connection User endpoint and then re-attempt the import.
Prerequisites
Before you launch the import users job:
- Configure a database connection to import the users into and enable it for at least one application.
- If you are importing passwords, make sure the passwords are hashed using one of the supported algorithms. Users with passwords hashed by unsupported algorithms will need to reset their password when they log in for the first time after the bulk import.
- If you are importing enrollments, make sure they are a supported type: `email`, `phone`, or `totp`. (A sample user entry showing a hashed password and an enrollment follows this list.)
- Get a Management API token for job endpoint requests.
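As an illustration, a single user entry with a hashed password and a `totp` enrollment might look like the sketch below. The field names (`custom_password_hash`, `mfa_factors`) follow the bulk import schema, but the hash and secret values here are placeholders, not real credentials:

```json
{
  "email": "jane.doe@example.com",
  "email_verified": true,
  "custom_password_hash": {
    "algorithm": "bcrypt",
    "hash": {
      "value": "$2b$10$placeholderplaceholderplaceholderplaceho"
    }
  },
  "mfa_factors": [
    { "totp": { "secret": "JBSWY3DPEHPK3PXP" } }
  ]
}
```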
If you are using an export file from an Auth0 tenant, you must convert the exported file from `ndjson` to JSON. To keep the same user IDs, you must remove the `auth0|` prefix from all imported user IDs. The import process automatically adds the `auth0|` prefix to the imported user IDs. If you do not remove the `auth0|` prefix before importing, the user IDs return as `auth0|auth0|...`.
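A minimal Node.js sketch of this conversion is shown below. It assumes the export file is small enough to read into memory, and the file names (`export.ndjson`, `users.json`) are illustrative:

```javascript
const fs = require('fs');

// Read the ndjson export: one JSON object per line.
const lines = fs.readFileSync('export.ndjson', 'utf8')
  .split('\n')
  .filter((line) => line.trim().length > 0);

const users = lines.map((line) => {
  const user = JSON.parse(line);
  // Strip the auth0| prefix so the import does not double it.
  if (typeof user.user_id === 'string') {
    user.user_id = user.user_id.replace(/^auth0\|/, '');
  }
  return user;
});

// Write a single JSON array, which is what the import endpoint expects.
fs.writeFileSync('users.json', JSON.stringify(users, null, 2));
```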
Create users JSON file
Create a JSON file with the user data you want to import into Auth0. How you export user data to a JSON file will vary depending on your existing user database. The endpoint expects the file to be piped as a read stream rather than passed as a whole: in Node.js, for example, use `fs.createReadStream` instead of `fs.readFileSync` when attaching the file to the request (see the request sketch in the next section).
To learn more about the JSON file schema and see examples, read Bulk Import Database Schema and Examples.
The file size limit for a bulk import is 500KB. You will need to start multiple imports if your data exceeds this size.
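For orientation, a minimal users file might look like the following illustrative sketch; see the schema documentation linked above for the full list of supported fields:

```json
[
  {
    "email": "john.doe@contoso.com",
    "email_verified": false,
    "app_metadata": { "roles": ["admin"] },
    "user_metadata": { "theme": "light" }
  }
]
```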
Request bulk user import
To start a bulk user import job, make a `POST` request to the Create Import Users Job endpoint. Be sure to replace the `MGMT_API_ACCESS_TOKEN`, `USERS_IMPORT_FILE.json`, `CONNECTION_ID`, and `EXTERNAL_ID` placeholder values with your Management API Access Token, users JSON file, database connection ID, and external ID, respectively. (A request sketch follows the parameter table below.)
| Parameter | Description |
|---|---|
| `users` | File in JSON format that contains the users to import. |
| `connection_id` | ID of the connection to which users will be inserted. You can retrieve the ID using the `GET /api/v2/connections` endpoint. |
| `upsert` | Boolean value; `false` by default. When set to `false`, pre-existing users that match on email address, user ID, phone, or username will fail. When set to `true`, pre-existing users that match on email address will be updated, but only with upsertable attributes. For a list of user profile fields that can be upserted during import, see User Profile Structure: User profile attributes. Note: Providing a duplicated user entry in the import file will cause an error; in this case, Auth0 will not do an insert followed by an update. |
| `external_id` | Optional user-defined string that can be used to correlate multiple jobs. Returned as part of the job status response. |
| `send_completion_email` | Boolean value; `true` by default. When set to `true`, sends a completion email to all tenant owners when the import job is finished. If you do not want emails sent, you must explicitly set this parameter to `false`. |
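The following Node.js sketch shows one way to make this request, using the `axios` and `form-data` npm packages (an assumption; any HTTP client that supports multipart streams will do). Replace `YOUR_DOMAIN` and the placeholder values as described above:

```javascript
const fs = require('fs');
const axios = require('axios');
const FormData = require('form-data');

const form = new FormData();
// Pipe the users file as a read stream rather than loading it into memory.
form.append('users', fs.createReadStream('USERS_IMPORT_FILE.json'));
form.append('connection_id', 'CONNECTION_ID');
form.append('external_id', 'EXTERNAL_ID');

axios
  .post('https://YOUR_DOMAIN/api/v2/jobs/users-imports', form, {
    headers: {
      ...form.getHeaders(),
      Authorization: 'Bearer MGMT_API_ACCESS_TOKEN',
    },
  })
  .then((res) => console.log(res.data)) // includes the job id and its status
  .catch((err) => console.error(err.response?.data || err.message));
```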
If `send_completion_email` was set to `true`, the tenant administrators will get an email notifying them that the job either failed or succeeded. An email for a failed job might, for example, notify the administrators that the users JSON file failed to parse.
Concurrent import jobs
The Create Import Users Job endpoint has a limit of two concurrent import jobs. Requesting additional jobs while there are two pending returns a `429 Too Many Requests` response:
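The body of that response looks roughly like the following. This is an illustrative sketch; the exact `message` text may differ on your tenant:

```json
{
  "statusCode": 429,
  "error": "Too Many Requests",
  "message": "There are 2 active import users jobs, please wait until some of them are finished and try again"
}
```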
Check job status
To check a job's status, make a `GET` request to the Get a Job endpoint. Be sure to replace the `MGMT_API_ACCESS_TOKEN` and `JOB_ID` placeholder values with your Management API Access Token and user import job ID.
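Continuing the `axios` sketch from above, with the same assumptions about placeholders:

```javascript
const axios = require('axios');

axios
  .get('https://YOUR_DOMAIN/api/v2/jobs/JOB_ID', {
    headers: { Authorization: 'Bearer MGMT_API_ACCESS_TOKEN' },
  })
  .then((res) => console.log(res.data.status)) // e.g. "pending", "completed", or "failed"
  .catch((err) => console.error(err.response?.data || err.message));
```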
Job timeouts
All user import jobs time out after two (2) hours. If your job does not complete within this time frame, it is marked as failed. Furthermore, all of your job-related data is automatically deleted after 24 hours and cannot be accessed afterward; as such, we strongly recommend storing job results using the storage mechanism of your choice.
Retrieve failed entries
To retrieve a list of failed entries, make a `GET` request to the Get Job Error Details endpoint. Be sure to replace the `MGMT_API_ACCESS_TOKEN` and `JOB_ID` placeholder values with your Management API Access Token and user import job ID.
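A sketch with the same assumptions as the earlier examples; the Get Job Error Details endpoint lives at `/api/v2/jobs/{id}/errors`:

```javascript
const axios = require('axios');

axios
  .get('https://YOUR_DOMAIN/api/v2/jobs/JOB_ID/errors', {
    headers: { Authorization: 'Bearer MGMT_API_ACCESS_TOKEN' },
  })
  .then((res) => console.log(res.data)) // one entry per failed user, with error details
  .catch((err) => console.error(err.response?.data || err.message));
```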
Note that the `hash.value` attribute will be redacted in the response.
Each failed entry includes one of the following error codes:
- ANY_OF_MISSING
- ARRAY_LENGTH_LONG
- ARRAY_LENGTH_SHORT
- CONFLICT
- CONFLICT_EMAIL
- CONFLICT_USERNAME
- CONNECTION_NOT_FOUND
- DUPLICATED_USER
- ENUM_MISMATCH
- FORMAT
- INVALID_TYPE
- MAX_LENGTH
- MAXIMUM
- MFA_FACTORS_FAILED
- MIN_LENGTH
- MINIMUM
- NOT_PASSED
- OBJECT_REQUIRED
- PATTERN