C#: inserting 500,000 rows from a DataTable in about 3 seconds with SqlBulkCopy
Database SQL script:
CREATE DATABASE TESTDB
GO
USE TESTDB
GO
CREATE TABLE TAB1
(
    NAME NVARCHAR(10),
    AGE NVARCHAR(10),
    ADRESS NVARCHAR(10)
)

DELETE TAB1
SELECT * FROM TAB1
SELECT COUNT(0) FROM TAB1
using System;
using System.Data;
using System.Data.SqlClient;

class Program
{
    static void Main(string[] args)
    {
        string conn = "Data Source=.;Initial Catalog=TESTDB;user id=sa;password=123456";
        DataTable dt = new DataTable();
        dt.Columns.Add("Name1");
        dt.Columns.Add("Age");
        dt.Columns.Add("Adress");
        for (int i = 0; i < 500000; i++)
        {
            dt.Rows.Add("Name" + i, i, "地址" + i);
        }
        DataTableToSQLServer(dt, conn, "TAB1");
    }

    public static void DataTableToSQLServer(DataTable dt, string connectionString, string tableName)
    {
        System.Diagnostics.Stopwatch stopwatch = new System.Diagnostics.Stopwatch();
        stopwatch.Start();
        using (SqlConnection destinationConnection = new SqlConnection(connectionString))
        {
            destinationConnection.Open();
            using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection))
            {
                try
                {
                    bulkCopy.DestinationTableName = tableName; // name of the target table
                    bulkCopy.BatchSize = dt.Rows.Count;
                    bulkCopy.ColumnMappings.Add("Name1", "NAME"); // map DataTable column name -> database column name
                    bulkCopy.ColumnMappings.Add("Age", "AGE");
                    bulkCopy.ColumnMappings.Add("Adress", "ADRESS");
                    bulkCopy.WriteToServer(dt);
                    Console.WriteLine("Insert succeeded!");
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
                finally
                {
                    // the using block disposes the connection, so no explicit Close() is needed
                    stopwatch.Stop();
                    Console.WriteLine("Inserted " + dt.Rows.Count + " rows in "
                        + (stopwatch.ElapsedMilliseconds / 1000.0) + " seconds");
                }
            }
        }
    }
}
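For even larger loads, a few SqlBulkCopy knobs are worth trying. The sketch below is an editor's variation, not part of the original post: the helper name, batch size, and progress interval are illustrative assumptions.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class BulkCopyTuning
{
    // Hypothetical helper showing common SqlBulkCopy tuning options;
    // the connection string and table name come from the caller.
    public static void BulkInsert(DataTable dt, string connectionString, string tableName)
    {
        // TableLock takes a bulk-update lock on the target table, which
        // usually speeds up large loads at the cost of concurrent access.
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
            connectionString, SqlBulkCopyOptions.TableLock))
        {
            bulkCopy.DestinationTableName = tableName;
            bulkCopy.BulkCopyTimeout = 0;  // 0 = no timeout, for very large loads
            bulkCopy.BatchSize = 50000;    // commit in chunks instead of one huge batch
            bulkCopy.NotifyAfter = 100000; // raise a progress event every 100k rows
            bulkCopy.SqlRowsCopied += (sender, e) =>
                Console.WriteLine(e.RowsCopied + " rows copied...");
            bulkCopy.WriteToServer(dt);
        }
    }
}
```

Committing in batches keeps the transaction log from growing unbounded during the load, and the progress event makes long loads observable.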
Related content
Exploring fast bulk-insert techniques [nologging + parallel + append]
Inserting tens of millions of rows quickly essentially comes down to nologging + parallel + append.
1 Environment setup
Build a source table with over ten million rows, then insert it into an empty table.
Metric: the actual elapsed time of the insert.
SQL> drop table test_emp cascade constraints purge;
Table dropped.
SQL> create table test_emp as select * from emp;
Table created.
SQL> begin
  2    for i in 1..10 loop
  3      insert into test_emp select * from test_emp;  -- bulk DML; FORALL is recommended for real batch loads
  4    end loop;
  5  end;
  6  /
PL/SQL procedure successfully completed.
SQL> select count(*) from test_emp;

  COUNT(*)
----------
     14336
SQL> begin
  2    for i in 1..10 loop
  3      insert into test_emp select * from test_emp;
  4    end loop;
  5  end;
  6  /
PL/SQL procedure successfully completed.
SQL> select count(*) from test_emp;

  COUNT(*)
----------
  14680064  -- roughly 15 million rows; each pass doubles the table
2 only append
SQL> set timing on
SQL> show timing
timing ON
SQL> insert /*+ append */ into test_goal select * from test_emp;

14680064 rows created.
Elapsed: 00:00:20.72
Redo logging is still enabled here, so this run takes the longest.
3 append+nologging
SQL> truncate table test_goal;
Table truncated.
Elapsed: 00:00:00.11
SQL> insert /*+ append */ into test_goal select * from test_emp nologging;

14680064 rows created.
Elapsed: 00:00:04.82
Logging clearly has a large impact on insert speed: with nologging, the elapsed time drops sharply. Of course, this table has no indexes or constraints; those cases are not examined here.
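One caveat worth adding (an editor's note, not part of the original test): in the statement above, the trailing nologging is actually parsed by Oracle as a table alias for test_emp, not as a directive. NOLOGGING is an attribute of the target table, so the more reliable form is to set it before the direct-path insert. A sketch using the same table names:

```sql
-- Make the target table NOLOGGING so the direct-path (APPEND) insert
-- generates minimal redo; table names follow the example above.
ALTER TABLE test_goal NOLOGGING;

INSERT /*+ append */ INTO test_goal SELECT * FROM test_emp;
COMMIT;

-- Restore logging afterwards if the table must stay recoverable.
ALTER TABLE test_goal LOGGING;
```

Note that rows loaded this way are not in the redo stream, so take a backup after the load if recoverability matters.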
4 append+nologging+parallel
SQL> truncate table test_goal;
Table truncated.
Elapsed: 00:00:00.09
SQL> insert /*+ parallel(2) append */ into test_goal select * from test_emp nologging;

14680064 rows created.
Elapsed: 00:00:02.86
Adding parallelism on top of step 3 pushes performance close to its limit: inserting roughly 15 million rows takes about 3 seconds. If the server's resources allow, the degree of parallelism can be increased further.
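One detail worth adding (not in the original post): for the parallel hint to parallelize the insert itself, and not just the query side of the statement, parallel DML normally has to be enabled at the session level first. A sketch with the same tables:

```sql
-- Parallel DML is disabled by default at the session level; without this,
-- the parallel hint only parallelizes the SELECT side of the statement.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ parallel(2) append */ INTO test_goal SELECT * FROM test_emp;
COMMIT;
```

After a parallel direct-path insert, the session must commit (or roll back) before it can query the target table again.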
This article is from the "90SirDB" blog; please retain this attribution: http://90sirdb.blog.51cto.com/8713279/1794367