Mirror of https://github.com/nitnelave/lldap.git, synced 2023-04-12 14:25:13 +00:00
server: Create schema command
This commit is contained in: parent 80dfeb1293, commit 05dbe6818d
@@ -6,52 +6,86 @@ NOTE: [pgloader](https://github.com/dimitri/pgloader) is a tool that can easily

The process is as follows:

1. Create an empty schema on the target database.
2. Stop/pause LLDAP and dump the existing values.
3. Sanitize the dump for the target DB (not always required).
4. Insert the data into the target.
5. Change the LLDAP config to the new target and restart.

The steps below assume you already have PostgreSQL or MySQL set up with an empty database for LLDAP to use.
## Create schema on target

LLDAP has a command that will connect to a target database and initialize the schema. If running with Docker, run the following command to use your active instance (this has the benefit of ensuring your container has access):

```
docker exec -it <LLDAP container name> /app/lldap create_schema -d <Target database url>
```

If it succeeds, you can proceed to the next step.
## Create a dump of existing data

We want to dump (almost) all existing values to a file - the exception being the `metadata` table. Be sure to stop/pause LLDAP during this step, as some databases (SQLite in this example) will give an error if LLDAP is in the middle of a write. The dump should consist of just `INSERT` statements. There are various ways to do this, but a simple enough way is to filter a whole database dump. For example:

```
sqlite3 /path/to/lldap/config/users.db .dump | grep "^INSERT" | grep -v "^INSERT INTO metadata" > /path/to/dump.sql
```
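The filtering logic in the command above can be tried out on sample dump text first (no `sqlite3` needed; the file contents and `/tmp` paths below are made-up stand-ins, not a real LLDAP dump):

```shell
# Simulate a small slice of a SQLite dump (made-up sample data).
cat > /tmp/sample_dump.txt <<'EOF'
PRAGMA foreign_keys=OFF;
CREATE TABLE IF NOT EXISTS "users" (user_id TEXT);
INSERT INTO users VALUES('admin');
INSERT INTO metadata VALUES(1);
EOF

# Keep only INSERT statements, minus the metadata table - the same
# filter the real command applies to the sqlite3 .dump output.
grep "^INSERT" /tmp/sample_dump.txt | grep -v "^INSERT INTO metadata" > /tmp/filtered.sql
cat /tmp/filtered.sql
```

Only the `INSERT INTO users` line should survive; the `PRAGMA`, `CREATE TABLE`, and `metadata` lines are all dropped.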
## Sanitize data

Some databases might use different formats for some data - for example, PostgreSQL uses a different syntax for hex strings than SQLite. We also want to make sure the inserts are done in a transaction, in case one of the statements fails.

### To PostgreSQL

PostgreSQL uses a different hex string format. The command below should switch the SQLite format to the PostgreSQL format, and wrap it all in a transaction:

```
sed -i -r -e "s/X'([[:xdigit:]]+'[^'])/'\\\x\\1/g" \
  -e '1s/^/BEGIN;\n/' \
  -e '$aCOMMIT;' /path/to/dump.sql
```

### To MySQL

MySQL mostly cooperates, but it gets some errors if you don't escape the `groups` table. Run the following command to wrap all table names in backticks for good measure, and wrap the inserts in a transaction:

```
sed -i -r -e 's/^INSERT INTO ([a-zA-Z0-9_]+) /INSERT INTO `\1` /' \
  -e '1s/^/START TRANSACTION;\n/' \
  -e '$aCOMMIT;' /path/to/dump.sql
```
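Both `sed` transforms can be sanity-checked on a throwaway file before touching the real dump (GNU sed assumed; the sample values and `/tmp` paths are hypothetical):

```shell
# PostgreSQL: SQLite's X'..' hex literal becomes '\x..', wrapped in BEGIN/COMMIT.
printf "INSERT INTO users VALUES('admin',X'53ab2c');\n" > /tmp/pg_demo.sql
sed -i -r -e "s/X'([[:xdigit:]]+'[^'])/'\\\x\\1/g" \
  -e '1s/^/BEGIN;\n/' \
  -e '$aCOMMIT;' /tmp/pg_demo.sql
cat /tmp/pg_demo.sql

# MySQL: table names get backticks, wrapped in START TRANSACTION/COMMIT.
printf "INSERT INTO groups VALUES(1);\n" > /tmp/my_demo.sql
sed -i -r -e 's/^INSERT INTO ([a-zA-Z0-9_]+) /INSERT INTO `\1` /' \
  -e '1s/^/START TRANSACTION;\n/' \
  -e '$aCOMMIT;' /tmp/my_demo.sql
cat /tmp/my_demo.sql
```

The first file should come out as `BEGIN;` / `INSERT INTO users VALUES('admin','\x53ab2c');` / `COMMIT;`, the second as `START TRANSACTION;` / ``INSERT INTO `groups` VALUES(1);`` / `COMMIT;`.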
## Insert data

Insert the data generated in the previous step into the target database. If you encounter errors, you may need to manually tweak your dump, or make changes in LLDAP and recreate the dump.

### PostgreSQL

`psql -d <database> -U <username> -W < /path/to/dump.sql`

or

`psql -d <database> -U <username> -W -f /path/to/dump.sql`

### MySQL

`mysql -u <username> -p <database> < /path/to/dump.sql`
## Switch to new database

Modify your `database_url` in `lldap_config.toml` (or `LLDAP_DATABASE_URL` in the env) to point to your new database (the same value used when generating the schema). Restart LLDAP and check the logs to ensure there were no errors.
@@ -26,6 +26,9 @@ pub enum Command {
    /// Send a test email.
    #[clap(name = "send_test_email")]
    SendTestEmail(TestEmailOpts),
    /// Create database schema.
    #[clap(name = "create_schema")]
    CreateSchema(RunOpts),
}

#[derive(Debug, Parser, Clone)]
@@ -74,6 +77,10 @@ pub struct RunOpts {
    #[clap(long, env = "LLDAP_HTTP_URL")]
    pub http_url: Option<String>,

    /// Database connection URL
    #[clap(short, long, env = "LLDAP_DATABASE_URL")]
    pub database_url: Option<String>,

    #[clap(flatten)]
    pub smtp_opts: SmtpOpts,
@@ -209,6 +209,10 @@ impl ConfigOverrider for RunOpts {
        if let Some(url) = self.http_url.as_ref() {
            config.http_url = url.to_string();
        }

        if let Some(database_url) = self.database_url.as_ref() {
            config.database_url = database_url.to_string();
        }
        self.smtp_opts.override_config(config);
        self.ldaps_opts.override_config(config);
    }
@@ -189,6 +189,38 @@ fn run_healthcheck(opts: RunOpts) -> Result<()> {
    std::process::exit(i32::from(failure))
}

async fn create_schema(database_url: String) -> Result<()> {
    let sql_pool = {
        let mut sql_opt = sea_orm::ConnectOptions::new(database_url.clone());
        sql_opt
            .max_connections(1)
            .sqlx_logging(true)
            .sqlx_logging_level(log::LevelFilter::Debug);
        Database::connect(sql_opt).await?
    };
    domain::sql_tables::init_table(&sql_pool)
        .await
        .context("while creating base tables")?;
    infra::jwt_sql_tables::init_table(&sql_pool)
        .await
        .context("while creating jwt tables")?;
    Ok(())
}

fn create_schema_command(opts: RunOpts) -> Result<()> {
    debug!("CLI: {:#?}", &opts);
    let config = infra::configuration::init(opts)?;
    infra::logging::init(&config)?;
    let database_url = config.database_url;

    actix::run(
        create_schema(database_url).unwrap_or_else(|e| error!("Could not create schema: {:#}", e)),
    )?;

    info!("Schema created successfully.");
    Ok(())
}

fn main() -> Result<()> {
    let cli_opts = infra::cli::init();
    match cli_opts.command {
@@ -196,5 +228,6 @@ fn main() -> Result<()> {
        Command::Run(opts) => run_server_command(opts),
        Command::HealthCheck(opts) => run_healthcheck(opts),
        Command::SendTestEmail(opts) => send_test_email_command(opts),
        Command::CreateSchema(opts) => create_schema_command(opts),
    }
}