Planet MySQL

1. The Question

Recently, a customer asked us: what is the meaning of this error message found in trepsvc.log?

2019/05/14 01:48:04.973 | | [east - binlog-to-q-0] INFO pipeline.SingleThreadStageTask Performing rollback of possible partial transaction: seqno=(unavailable)

Simple Overview: The Skinny

This message indicates that we are dropping any uncommitted or incomplete data read from the MySQL binary logs because of a pending error.

The Answer: Safety First

This message is often seen just before another error; it indicates that we are rolling back anything uncommitted, for safety. On a master this is normally very little and would likely be internal transactions in the trep_commit_seqno table, for example. As you may know, the replicator always extracts complete transactions, so this particular message is specific to the reading of the MySQL binlog into the internal memory queue (binlog-to-q). This queue then feeds into the q-to-thl pipeline. We only want to write complete transactions into the THL, so anything not completed when a failure like this happens gets rolled back.

Summary: The Wrap-Up

In this blog post we discussed the Tungsten Replicator rollback error. To learn about Continuent solutions in general, check out the library, and please read the docs! Tungsten Clustering is the most flexible, performant global database layer available today: use it underneath your SaaS offering as a strong base upon which to grow your worldwide business. Want to learn more or run a POC? Contact us.
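Because this message is usually a symptom rather than the root cause, it helps to look at what was logged around it. A quick way to do that with grep is sketched below, demonstrated on a sample line in the format shown above; in practice you would point grep at your own trepsvc.log.

```shell
# Sample log line in the format shown above; substitute your real log file.
logline='2019/05/14 01:48:04.973 | | [east - binlog-to-q-0] INFO pipeline.SingleThreadStageTask Performing rollback of possible partial transaction: seqno=(unavailable)'

# Count occurrences of the rollback message.
printf '%s\n' "$logline" | grep -c 'Performing rollback'

# Against a real log, print context around each occurrence to find the
# error that actually triggered the rollback:
#   grep -B 2 -A 5 'Performing rollback' trepsvc.log
```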
2. This blog is an announcement for Percona Server for MySQL 5.6.44-85.0-1.

This is a CRITICAL update, and the fix mitigates the issues described in CVE-2019-12301. If you upgraded packages on Debian/Ubuntu to 5.6.44-85.0-1, please upgrade to 5.6.44-85.0-2 or later and reset all MySQL root passwords.

Issue

On 2019-05-18 Percona discovered an issue with the Debian/Ubuntu 5.6.44-85.0-1 packages for Percona Server for MySQL. When a previous version was upgraded to PS 5.6.44-85.0-1 on deb-based systems, the MySQL root password was removed, allowing users to log in to the upgraded server as MySQL root without specifying a password.

Scope

This issue is limited to users who upgraded with the Debian/Ubuntu package 5.6.44-85.0-1 for Percona Server for MySQL v. 5.6. Newer versions (v. 5.7 and above) and new installations of v. 5.6 (>= 5.6.44-85.0-2) are not affected by this issue.

The 5.6.44-85.0-1 packages were available for 19 hours, starting on 2019-05-17, and were removed from the repository upon discovery of the issue. They were replaced in the repository with the 5.6.44-85.0-2 packages on 2019-05-18 at 12:50 pm UTC (see bug #5640).

Although the fixed package no longer removes the MySQL root password, it cannot restore a previously removed password.

If you downloaded the packages prior to 12:50 pm UTC on 2019-05-18, please update to a newer version and reset all MySQL root passwords on those servers. If you are not sure whether your version is affected, please verify whether the server accepts a passwordless root login.

Global Impact

This impacted a small subset of users running a very specific version of Linux plus a specific version of MySQL. If you are in this narrow footprint, please take the steps noted in the Remediation section.

Technical Impact

All MySQL root passwords on affected servers were removed and need to be reset after applying the fixed version 5.6.44-85.0-2 or higher.
Remediation

To cure the vulnerability reported in CVE-2019-12301, upgrade using the Debian/Ubuntu 5.6.44-85.0-2 package (or newer) and reset all MySQL root passwords. To determine whether the residual MySQL root password issue was resolved, verify whether the root password is empty, i.e. whether a passwordless root login is accepted.

Contact

If you have any questions about this issue or need to report a security concern, please contact Percona's Security Team.
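As an illustration (this is not Percona's official verification script), a minimal check is simply whether the server accepts a root login with an empty password. The function below takes the client command as its first parameter so it can be pointed at any mysql binary:

```shell
# Returns success (exit 0) if a passwordless root login works, meaning
# the server is likely affected and the root password must be reset.
check_empty_root() {
  # "$1" is the mysql client command to use (e.g. mysql).
  "$1" -u root --password= --batch -e 'SELECT 1' >/dev/null 2>&1
}

# Usage against a live server:
#   check_empty_root mysql && echo 'WARNING: empty root password accepted'
```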
3. This Java REST API tutorial helps you create a CRUD REST API using a Spring Boot application. Spring Boot is the next level of the Spring framework and is based on the Java Spring framework. We will create a Maven-type project and generate a JAR build. Spring Boot provides a CLI to create and manage applications, and you can set up a Spring Boot project with almost zero configuration. In this tutorial we'll build a RESTful CRUD API for an employee module. This module has APIs to get all employees, get a single employee record based on id, and delete an employee record by id.

How To Create a New Spring Boot Project

Spring Boot provides a web tool, Spring Initializr, to create an initial Spring Boot application. Enter the required details, then click the Generate Project button to generate and download your project. The downloaded project comes as a zip file; unzip it and place it into your Java workspace or import it into your favorite IDE. Following is the directory structure of our employee Spring Boot application.

Spring Boot Main Entry File

Each Java application has a main entry point from which the application is executed. Ours lives under com/example/restapiexample/.

Create MySQL Table

We are using MySQL as the database. Let's create a database named 'test' in MySQL and create the employee table in it.
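As an alternative to the web UI, the same Initializr service can also be driven from the command line. The parameter names below come from the Initializr HTTP API and may change between versions; treat this as a sketch rather than a guaranteed invocation:

```shell
# Endpoint that returns a generated project as a zip archive.
project_zip=""

# Generate a Maven project with the web, JPA and MySQL starters:
#   curl -G "$project_zip" \
#     -d type=maven-project \
#     -d groupId=com.example -d artifactId=restapiexample \
#     -d dependencies=web,data-jpa,mysql \
#     -o
printf '%s\n' "$project_zip"
```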
We will run the SQL query below to create the employee table. Note that the id column is declared as an auto-increment primary key, which the JPA model (GenerationType.IDENTITY) relies on:

CREATE TABLE IF NOT EXISTS `employee` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `employee_name` varchar(255) NOT NULL COMMENT 'employee name',
  `employee_salary` double NOT NULL COMMENT 'employee salary',
  `employee_age` int(11) NOT NULL COMMENT 'employee age',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=59 DEFAULT CHARSET=latin1 COMMENT='datatable demo table';

-- Dumping data for table `employee`
INSERT INTO `employee` (`id`, `employee_name`, `employee_salary`, `employee_age`) VALUES
(2, 'Garrett Winters', 170750, 63),
(3, 'Ashton Cox', 86000, 66),
(4, 'Cedric Kelly', 433060, 22);

We inserted some sample data so we can list all employee records.

Create MySQL Configuration in a Properties File

We will store the MySQL configuration parameters in a properties file in the resources/ folder. The database configuration is read from this file at startup through the Spring Environment object.

server.port=8080

db.driver: com.mysql.jdbc.Driver
db.url: jdbc:mysql://
db.username: root
db.password:

# Hibernate
hibernate.dialect: org.hibernate.dialect.MySQL5Dialect
hibernate.show_sql: true update
entitymanager.packagesToScan: com.example.restapiexample

Create Config File

We will create the DbConfig class in the /restapiexample/configs folder. This class contains the data-source-related configuration. We will add the code below to this file:

package com.example.restapiexample.configs;

import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;
import org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.annotation.EnableTransactionManagement;

/**
 * Contains database configurations.
 */
@Configuration
@EnableTransactionManagement
@ConfigurationProperties
public class DbConfig {

    // ------------------------
    // PUBLIC METHODS
    // ------------------------

    /**
     * DataSource definition for the database connection. Settings are read
     * from the properties file (using the env object).
     */
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getProperty("db.driver"));
        dataSource.setUrl(env.getProperty("db.url"));
        dataSource.setUsername(env.getProperty("db.username"));
        dataSource.setPassword(env.getProperty("db.password"));
        return dataSource;
    }

    /**
     * Declare the JPA entity manager factory.
     */
    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
        LocalContainerEntityManagerFactoryBean entityManagerFactory =
            new LocalContainerEntityManagerFactoryBean();
        entityManagerFactory.setDataSource(dataSource);

        // Classpath scanning of @Component, @Service, etc. annotated classes
        entityManagerFactory.setPackagesToScan(
            env.getProperty("entitymanager.packagesToScan"));

        // Vendor adapter
        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        entityManagerFactory.setJpaVendorAdapter(vendorAdapter);

        // Hibernate properties
        Properties additionalProperties = new Properties();
        additionalProperties.put("hibernate.dialect", env.getProperty("hibernate.dialect"));
        additionalProperties.put("hibernate.show_sql", env.getProperty("hibernate.show_sql"));
        additionalProperties.put("", env.getProperty(""));
        entityManagerFactory.setJpaProperties(additionalProperties);

        return entityManagerFactory;
    }

    /**
     * Declare the transaction manager.
     */
    @Bean
    public JpaTransactionManager transactionManager() {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(entityManagerFactory.getObject());
        return transactionManager;
    }

    /**
     * PersistenceExceptionTranslationPostProcessor is a bean post processor
     * which adds an advisor to any bean annotated with @Repository so that any
     * platform-specific exceptions are caught and then rethrown as one of
     * Spring's unchecked data access exceptions (i.e. a subclass of
     * DataAccessException).
     */
    @Bean
    public PersistenceExceptionTranslationPostProcessor exceptionTranslation() {
        return new PersistenceExceptionTranslationPostProcessor();
    }

    // ------------------------
    // PRIVATE FIELDS
    // ------------------------

    @Autowired
    private Environment env;

    @Autowired
    private DataSource dataSource;

    @Autowired
    private LocalContainerEntityManagerFactoryBean entityManagerFactory;

} // class DbConfig

We have imported the MySQL database-related information from the properties file and used it here.

Create Model File in the Spring Boot Application

We will create the employee model class in the /models folder.
package com.example.restapiexample.models;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.validation.constraints.NotNull;

@Entity
@Table(name = "employee")
public class Employee {

    // ------------------------
    // PRIVATE FIELDS
    // ------------------------

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private int id;

    @NotNull
    private String employee_name;

    @NotNull
    private int employee_salary;

    @NotNull
    private int employee_age;

    public Employee() {
    }

    public Employee(int id) { = id;
    }

    public Employee(String employee_name, int employee_salary, int employee_age) {
        this.employee_name = employee_name;
        this.employee_salary = employee_salary;
        this.employee_age = employee_age;
    }

    public String getName() { return employee_name; }
    public int getSalary() { return employee_salary; }
    public int getAge() { return employee_age; }
    public void setAge(int age) { this.employee_age = age; }
    public void setName(String name) { this.employee_name = name; }
    public void setSalary(int salary) { this.employee_salary = salary; }
    public void setId(int value) { = value; }
    public int getId() { return id; }
}

Create DAO Layer in the Spring Boot Application

Let's create a DAO layer for the Employee model. We will create the EmployeeDao class in the models/ folder and add the code below:

package com.example.restapiexample.models;

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;
import org.springframework.stereotype.Repository;

@Repository
@Transactional
public class EmployeeDao {

    @PersistenceContext
    private EntityManager entityManager;

    /**
     * Return all the employees stored in the database.
     */
    @SuppressWarnings("unchecked")
    public List<Employee> getAll() {
        return entityManager.createQuery("from Employee").getResultList();
    }

    public Employee getById(int id) {
        return entityManager.find(Employee.class, id);
    }

    /**
     * Save the employee in the database.
     */
    public Employee create(Employee emp) {
        entityManager.persist(emp);
        int id = emp.getId();
        return entityManager.find(Employee.class, id);
    }

    /**
     * Delete the employee from the database.
     */
    public Boolean delete(Employee emp) {
        if (entityManager.contains(emp))
            entityManager.remove(emp);
        else
            entityManager.remove(entityManager.merge(emp));
        return true;
    }
}

Create Controller Class in Spring Boot

We have created the model class and DAO layer and configured JPA using the DbConfig file, so now let's create the controller that handles all REST endpoints and returns responses.

package com.example.restapiexample.controllers;

import com.example.restapiexample.models.Employee;
import com.example.restapiexample.models.EmployeeDao;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class EmployeeController {

    @Autowired
    EmployeeDao employeeDao;

    @RequestMapping("/getall")
    @ResponseBody
    public List<Employee> getAll() {
        return employeeDao.getAll();
    }

    @RequestMapping(value = "/get/{id}", method = RequestMethod.GET)
    @ResponseBody
    public ResponseEntity<Employee> getById(@PathVariable("id") int id) {
        Employee emp = null;
        HttpHeaders headers = new HttpHeaders();
        headers.add("Content-Type", "application/json");
        headers.add("Responded", "EmployeeController");
        try {
            emp = employeeDao.getById(id);
        } catch (Exception ex) {
            System.out.println("Employee not found: " + ex.getMessage());
            return ResponseEntity.accepted().headers(headers).body(emp);
        }
        return ResponseEntity.ok().headers(headers).body(emp);
    }

    @RequestMapping(value = "/create", method = RequestMethod.POST, consumes = "application/json")
    @ResponseBody
    public ResponseEntity<Employee> create(@RequestBody Employee postdata) {
        Employee emp = null;
        HttpHeaders headers = new HttpHeaders();
        headers.add("Content-Type", "application/json");
        headers.add("Responded", "EmployeeController");
        try {
            emp = employeeDao.create(postdata);
        } catch (Exception ex) {
            System.out.println("Error creating employee: " + ex.getMessage());
            return ResponseEntity.ok().headers(headers).body(emp);
        }
        return ResponseEntity.ok().headers(headers).body(emp);
    }

    @RequestMapping(value = "/delete/{id}", method = RequestMethod.DELETE)
    @ResponseBody
    public String delete(@PathVariable("id") int id) {
        boolean isDeleted = false;
        HttpHeaders headers = new HttpHeaders();
        headers.add("Content-Type", "application/json");
        headers.add("Responded", "EmployeeController");
        try {
            Employee emp = new Employee(id);
            isDeleted = employeeDao.delete(emp);
        } catch (Exception ex) {
            System.out.println("Employee not found to delete: " + ex.getMessage());
            return "Error deleting the Employee: " + ex.toString();
        }
        if (isDeleted) {
            return "Employee successfully deleted!";
        }
        return "Error! Employee not deleted!";
    }
}

We created four endpoints: /getall returns all employee records, /get/{id} returns a single employee record, /create is used to POST data and create a new record, and /delete/{id} deletes a record by id.

Get All Records Using Postman

Get A Single Record Using Postman

Create A Record Using Postman

Delete A Record Using Postman

The post Create CRUD Rest API using Spring Boot and JPA appeared first on Rest Api Example.
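The same four endpoints can also be exercised from the command line with curl instead of Postman. This is a sketch: it assumes the application is running locally on the server.port configured earlier, and that the JSON field names follow the Employee getters (name, salary, age), which is Jackson's default behavior.

```shell
# Base URL of the locally running application (assumed, per server.port=8080).
BASE=http://localhost:8080

# Small helper that always sends a JSON Content-Type header, since the
# controller mappings consume application/json.
api() { curl -s -H 'Content-Type: application/json' "$@"; }

# List all employees:
#   api "$BASE/getall"
# Fetch a single employee by id:
#   api "$BASE/get/2"
# Create a new record (body keys derived from getName/getSalary/getAge):
#   api -X POST -d '{"name":"Jane Doe","salary":95000,"age":30}' "$BASE/create"
# Delete a record by id:
#   api -X DELETE "$BASE/delete/2"
```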
4. Some applications have the bad behaviour of creating or dropping tables while a FromDual Backup Manager (bman) backup is running. This leads to the following bman error message:

/opt/mysql/product/5.7.26/bin/mysqldump --user=dba --host=migzm96i --port=3306 --all-databases --quick --single-transaction --flush-logs --triggers --routines --hex-blob --events | tee >(md5sum --binary >/tmp/checksum.23357.md5) | gzip -1 to Destination: /var/mysql/dumps/mysql96i/daily/bck_mysql96i_full_2019-05-22_06-50-01.sql.gz
ERROR: /opt/mysql/product/5.7.26/bin/mysqldump command failed (rc=253).
mysqldump: [Warning] Using a password on the command line interface can be insecure.
Error: Couldn't read status information for table m_report_builder_cache_157_20190521035354 ()
mysqldump: Couldn't execute 'show create table `m_report_builder_cache_157_20190521035354`': Table 'totara.m_report_builder_cache_157_20190521035354' doesn't exist (1146)

There are various strategies to work around this problem:

If the table is only temporary, create it with CREATE TEMPORARY TABLE instead of as a normal table. This workaround would not work in this case because the table is a caching table which must be available to other connections as well.

Try to schedule your application job and your bman job so that they do not collide. With bman that is quite easy, but sometimes not with the application.

Try to create the table in its own schema (e.g. cache) which is excluded from the bman backup. Then you can easily do a bman backup without the cache schema, for example like this:

$ bman --target=brman@ --type=schema --schema=-cache --policy=daily

If this strategy also does not work (because you cannot change the application behaviour), try to ignore the table. The underlying command mysqldump knows the option --ignore-table:

mysqldump --help
...
--ignore-table=name Do not dump the specified table. To specify more than one table to ignore, use the directive multiple times, once for each table. Each table must be specified with both database and table names, e.g., --ignore-table=database.table.

This option can be used in bman as well. Options for the underlying tool are passed through FromDual Backup Manager as follows:

$ bman --target=brman@ --type=full --policy=daily --pass-through='--ignore-table=totara.m_report_builder_cache_157_20190521035354'

The problem here is that this table name contains a timestamp (20190521035354), so the table name changes all the time. Passing wildcards through --ignore-table is not possible; mysqldump does not (yet) support this feature. The only remaining solution in this case is to ignore the error message, with the risk that other possible error messages are also ignored. This is achieved again with the --pass-through option:

$ bman --target=brman@ --type=full --policy=daily --pass-through='--force'

I hope these few tricks help you make your FromDual Backup Manager (bman) backups hassle-free.
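Since mysqldump cannot expand wildcards in --ignore-table, one possible workaround (a sketch, not a built-in bman feature) is to resolve the wildcard against information_schema first and generate one --ignore-table option per matching table:

```shell
# Reads fully qualified table names (schema.table) on stdin and prints
# a matching list of --ignore-table options on a single line.
build_ignore_flags() {
  flags=""
  while read -r tbl; do
    [ -n "$tbl" ] && flags="$flags --ignore-table=$tbl"
  done
  # Strip the leading space before printing.
  printf '%s\n' "${flags# }"
}

# In practice the input would come from a query such as:
#   mysql -NBe "SELECT CONCAT(table_schema, '.', table_name)
#               FROM information_schema.tables
#               WHERE table_name LIKE 'm_report_builder_cache%'"
# and the generated options would be handed to bman via --pass-through.
```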
5. Database migrations don’t scale well. Typically you need to perform a great deal of testing before you can pull the trigger and switch from old to new. Migrations are usually done manually, as most of the process does not lend itself to automation. But that doesn’t mean there is no room for automation in the migration process. Imagine setting up a number of nodes with new software, provisioning them with data, and configuring replication between old and new environments by hand. This takes days. Automation can be very useful when setting up a new environment and provisioning it with data. In this blog post, we will take a look at a very simple migration, from standalone Percona Server 5.7 to a 3-node Percona XtraDB Cluster 5.7. We will use Ansible to accomplish that.

Environment Description

First of all, one important disclaimer: what we are going to show here is only a draft of what you might like to run in production. It does work in our test environment, but it may require modifications to make it suitable for your environment. In our tests we used four Ubuntu 16.04 VMs deployed using Vagrant. One contains standalone Percona Server 5.7; the remaining three will be used for Percona XtraDB Cluster nodes. We also use a separate node for running Ansible playbooks, although this is not a requirement and the playbook can also be executed from one of the nodes. In addition, SSH connectivity is available between all of the nodes. You have to have connectivity from the host where you run Ansible, but having the ability to ssh between nodes is useful (especially between master and new slave; we rely on this in the playbook).

Playbook Structure

Ansible playbooks typically share a common structure: you create roles, which can be assigned to different hosts. Each role contains tasks to be executed on it, templates that will be used, files that will be uploaded, and variables which are defined for this particular playbook. In our case, the playbook is very simple.

.
├── inventory
├── playbook.yml
├── roles
│   ├── first_node
│   │   ├── my.cnf.j2
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── my.cnf.j2
│   ├── galera
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── my.cnf.j2
│   ├── master
│   │   └── tasks
│   │       └── main.yml
│   └── slave
│       └── tasks
│           └── main.yml
└── vars
    └── default.yml

We defined a couple of roles: there is a master role, which is intended to do some sanity checks on the standalone node; a slave role, which will be executed on one of the Galera nodes to configure it for replication and set up the asynchronous replication; a role for all Galera nodes; and a role for the first Galera node, used to bootstrap the cluster. For the Galera roles, we have a couple of templates that we will use to create my.cnf files. We will also use a local .my.cnf to define a username and password. We have a file containing a couple of variables which we may want to customize, such as passwords. Finally, we have an inventory file, which defines the hosts on which we will run the playbook, and the playbook file with information on how exactly things should be executed. Let’s take a look at the individual bits.

Inventory File

This is a very simple file.

[galera]

[first_node]

[master]

We have three groups: ‘galera’, which contains all Galera nodes; ‘first_node’, which we will use for the bootstrap; and finally ‘master’, which contains our standalone Percona Server node.


The file playbook.yml contains the general guidelines on how the playbook should be executed.

- hosts: master
  gather_facts: yes
  become: true
  pre_tasks:
    - name: Install Python2
      raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
  vars_files:
    - vars/default.yml
  roles:
    - { role: master }

As you can see, we start with the standalone node and apply tasks related to the ‘master’ role (we will discuss this in detail further down in this post).
- hosts: first_node
  gather_facts: yes
  become: true
  pre_tasks:
    - name: Install Python2
      raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
  vars_files:
    - vars/default.yml
  roles:
    - { role: first_node }
    - { role: slave }

Second, we go to the node defined in the ‘first_node’ group and apply two roles: ‘first_node’ and ‘slave’. The former is intended to deploy a single-node PXC cluster; the latter configures it to work as a slave and sets up the replication.

- hosts: galera
  gather_facts: yes
  become: true
  pre_tasks:
    - name: Install Python2
      raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
  vars_files:
    - vars/default.yml
  roles:
    - { role: galera }

Finally, we go through all Galera nodes and apply the ‘galera’ role on all of them.

Variables

Before we begin to look into the roles, we want to mention the default variables that we defined for this playbook.

sst_user: "sstuser"
sst_password: "pa55w0rd"
root_password: "pass"
repl_user: "repl_user"
repl_password: "repl1cati0n"

As we stated, this is a very simple playbook without many options for customization. You can configure users and passwords, and that is basically it. One gotcha: please make sure that the standalone node’s root password matches ‘root_password’ here, as otherwise the playbook would not be able to connect there (it could be extended to handle this, but we did not cover that). This file is not much by itself but, as a rule of thumb, you want to encrypt any file which contains credentials. Obviously, this is for security reasons. Ansible comes with ansible-vault, which can be used to encrypt and decrypt files. We will not cover the details here; all you need to know is available in the documentation.
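As a sketch of that practice (the file names and the password below are arbitrary examples, not values from the playbook), the vault password can live in a file with tight permissions, and both ansible-vault and ansible-playbook can then be pointed at it:

```shell
# Create a vault password file readable only by the current user.
umask 077
printf '%s\n' 'example-vault-pass' > vault_pass.txt

# Encrypt the credentials file in place, then run the playbook as usual:
#   ansible-vault encrypt vars/default.yml --vault-password-file vault_pass.txt
#   ansible-playbook -i inventory playbook.yml --vault-password-file vault_pass.txt
```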
In short, you can easily encrypt files using passwords and configure your environment so that the playbooks can be decrypted automatically, using a password read from a file or passed by hand.

Roles

In this section we will go over the roles that are defined in the playbook, summarizing what they are intended to perform.

Master role

As we stated, this role is intended to run a sanity check on the configuration of the standalone MySQL. It installs required packages like percona-xtrabackup-24. It also creates the replication user on the master node. The configuration is reviewed to ensure that server_id and the other replication- and binary-log-related settings are set. GTID is also enabled, as we will rely on it for replication.

First_node role

Here, the first Galera node is installed. The Percona repository will be configured, and my.cnf will be created from the template. PXC will be installed. We also run some cleanup to remove unneeded users and to create those which will be required (a root user with the password of our choosing, and the user required for SST). Finally, the cluster is bootstrapped using this node. We rely on the empty ‘wsrep_cluster_address’ as a way to initialize the cluster. This is why later we still execute the ‘galera’ role on the first node: to swap the initial my.cnf with the final one, containing ‘wsrep_cluster_address’ with all the members of the cluster. One thing worth remembering: when you create a root user with a password, you have to be careful not to get locked out of MySQL, so that Ansible can execute the other steps of the playbook. One way to do that is to provide a .my.cnf with the correct user and password. Another is to remember to always set the correct login_user and login_password in the ‘mysql_user’ module.

Slave role

This role is all about configuring replication between the standalone node and the single-node PXC cluster.
We use xtrabackup to get the data, and we also check the executed GTID in xtrabackup_binlog_info to ensure the backup will be restored properly and that replication can be configured. We also perform a bit of configuration, making sure that the slave node can use GTID replication. There are a couple of gotchas here. It is not possible to run ‘RESET MASTER’ using the ‘mysql_replication’ module as of Ansible 2.7.10 (it should be possible in 2.8, when it comes out), so we had to use the ‘shell’ module to run MySQL CLI commands. Also, when rebuilding a Galera node from an external source, you have to remember to re-create any required users (at least the user used for SST). Otherwise the remaining nodes will not be able to join the cluster.

Galera role

Finally, this is the role in which we install PXC on the remaining two nodes. We run it on all nodes; the initial one will get the “production” my.cnf instead of its “bootstrap” version. The remaining two nodes will have PXC installed and will get an SST from the first node in the cluster.

Summary

As you can see, you can easily create a simple, reusable Ansible playbook which can be used for deploying Percona XtraDB Cluster and configuring it to be a slave of a standalone MySQL node. To be honest, for migrating a single server this will probably not be worthwhile, as doing the same manually will be faster. Still, if you expect to have to re-execute this process a couple of times, it will definitely make sense to automate it and make it more time-efficient. As we stated at the beginning, this is by no means a production-ready playbook. It is more of a proof of concept, something you may extend to make it suitable for your environment.
You can find an archive with the playbook here:

We hope you found this blog post interesting and valuable; do not hesitate to share your thoughts.
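For completeness, applying the playbook from the control node comes down to a single command, assuming the inventory and playbook.yml layout shown earlier and SSH access to all hosts. A --check dry run first is a good habit, though tasks using 'raw' or 'shell' are not fully checkable:

```shell
# The full migration run, using the files shown in this post.
cmd="ansible-playbook -i inventory playbook.yml"

# Dry run first, then the real run:
#   $cmd --check
#   $cmd
printf '%s\n' "$cmd"
```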