How to find out the IP address of a MySQL server


Working with a MySQL database in C#

Consider a simple task. We have a site managed with the WordPress CMS, and all of its data is stored in a MySQL database. We need to create a client application that connects to the database, retrieves all of today's comments, and displays them in a DataGridView.


Create a Windows Forms application and place two controls on the form: a button labeled "Get comments" and a DataGridView, which will display the data visually.


To work with a database we need a data provider: it establishes the connection to the database and lets us execute commands and retrieve results. Physically it is an ordinary file (.dll) containing types configured to interact with one specific DBMS: MySQL, Oracle, Microsoft SQL Server, and so on.


In Microsoft ADO.NET the main data providers are contained in the System.Data.dll assembly, but this assembly has no data provider for working with MySQL. You therefore have to download and install it yourself. A guide on how to do this is available here.

To connect to the database you also need to know the IP address of the server; you can get this information from your hosting provider.
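If your hosting provider is slow to answer and the database runs on the same host as the site itself, you can often obtain the server's IP by simply resolving the site's domain name. A minimal Python sketch (the domain below is a placeholder; note that on shared hosting the MySQL server may live on a different host than the web server):

```python
import socket

def resolve_host(hostname: str) -> str:
    """Resolve a host name to its IPv4 address."""
    return socket.gethostbyname(hostname)

# The domain is hypothetical; substitute your own site.
# print(resolve_host("mysite.example.com"))
```

If the lookup returns an address but the connection still fails, the database is most likely on a separate host, and the hosting provider remains the authoritative source.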

The last thing you need to know is the name of the table that stores all the comments. You can find it in different ways: for example, go to the official WordPress site and find the "Database Description" page, which gives a full description of the database architecture.

The WordPress database structure

Or, for example, use the phpMyAdmin utility to find the desired table manually; let us settle on this option.


So, we open the phpMyAdmin page and on the left we see the list of databases.

list of databases

Choose our site's database from the list, for example mytest, and click on its name. On the next page we see a list of all available tables, among them the wp_comments table (wp_ is the table prefix; yours may be different, but the base name will be the same).


Let's see what the table contains. To do this, click the icon labeled Browse.


In the window that opens we see part of the table, as well as some previously entered data, which is exactly what we need to retrieve.


The comments table

Not all of the fields are of interest, so we take only a few and obtain these values: the author (comment_author), the date (comment_date), and the comment text itself (comment_content).

Now let's move on to writing the code.


First, create a GetComments method that will return a DataTable object. Since methods get the private access modifier by default, it does not need to be written explicitly.

DataTable GetComments()
{
}

Then configure the database connection:

1. Create and populate a MySqlConnectionStringBuilder object that will hold the following values: the name of the server where the database is located, the user name and password for connecting to the database, and the database name.

Alternatively, you can create a configuration file and move all the connection data into it; a detailed example will be given in the next article.

MySqlConnectionStringBuilder mysqlCSB;
mysqlCSB = new MySqlConnectionStringBuilder();
mysqlCSB.Server = "server IP address";
mysqlCSB.Database = "database name";
mysqlCSB.UserID = "user name";
mysqlCSB.Password = "password";

2. Create a query string in which we select all of today's comments.

string queryString = @"SELECT comment_author, comment_date, comment_content
                       FROM   wp_comments
                       WHERE  comment_date >= CURDATE()";

3. Create a DataTable object, which our method will return and which the DataGridView will consume.

DataTable dt = new DataTable();

4. Create a connection object using MySqlConnection class.

using (MySqlConnection con = new MySqlConnection())
{
}

4.1. Configure the newly created object by assigning it the ConnectionString property of our MySqlConnectionStringBuilder object created earlier.

con.ConnectionString = mysqlCSB.ConnectionString;

5. Open the database connection

con.Open();

6. Create a command object, passing the query string and the connection object to its constructor.

MySqlCommand com = new MySqlCommand(queryString, con);

7. Call the ExecuteReader method, which returns a MySqlDataReader object for reading the data.

using (MySqlDataReader dr = com.ExecuteReader())
{
    // are there any records?
    if (dr.HasRows)
    {
        // fill the DataTable object
        dt.Load(dr);
    }
}

Full listing:

using MySql.Data.MySqlClient; // add this

private DataTable GetComments()
{
    DataTable dt = new DataTable();

    MySqlConnectionStringBuilder mysqlCSB;
    mysqlCSB = new MySqlConnectionStringBuilder();
    mysqlCSB.Server = "";
    mysqlCSB.Database = "mytest";
    mysqlCSB.UserID = "root";
    mysqlCSB.Password = "123";

    string queryString = @"SELECT comment_author, comment_date, comment_content
                           FROM   wp_comments
                           WHERE  comment_date >= CURDATE()";

    using (MySqlConnection con = new MySqlConnection())
    {
        con.ConnectionString = mysqlCSB.ConnectionString;
        MySqlCommand com = new MySqlCommand(queryString, con);
        try
        {
            con.Open();
            using (MySqlDataReader dr = com.ExecuteReader())
            {
                if (dr.HasRows)
                {
                    dt.Load(dr);
                }
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
    }
    return dt;
}

It remains to put the data into the DataGridView.

private void button1_Click(object sender, EventArgs e)
{
    dataGridView1.DataSource = GetComments();
}

That's all; it remains to test our application. Press the "Get comments" button and see the result.



See also:

  • C#: changing the width of a column in an Excel file
  • How to upload data from a MySQL database to XML?
  • How to click a button in a WebBrowser window?



• «Networks and Business» • No. 1 (68) • 2013 •

Igor Kirillov

In 2012 the global server market moved in different directions and did not show outstanding results, but many analysts tend to view the year as preparation for a tangible leap that will take the market to a new level thanks to upgraded technologies.

Taken as a whole, the global server market in 2012 cannot be called particularly successful. International analyst agencies report small growth, no more than 3-5%, in the number of systems sold, which, moreover, was overshadowed by a drop in revenue of about the same magnitude.

Thus, according to various estimates, approximately 8.7 million servers worth about $50 billion were sold worldwide last year. This suggests that the average server price is declining and that the revival of 2010-2011, perceived as the beginning of a market recovery, has stalled, since it was achieved mainly thanks to pent-up demand and the regular cycle of technological upgrades at enterprises.

For individual companies, however, the situation differs. While HP, IBM, and Oracle somewhat weakened their positions, Dell, Cisco, and a number of Japanese manufacturers stepped up their presence. In addition, a significant influence on the market comes from large companies with their own data centers, such as Google, which builds servers for its own needs independently.

It is interesting to note that in the first quarter of 2012 Cisco entered the top five global server manufacturers for the first time, briefly ousting Fujitsu from the bottom line of the ranking. During this period the company achieved fantastic sales growth in the segment: 70.9% in unit terms and 72.4% in value terms (compared with the same quarter of 2011). Later, however, Fujitsu restored the status quo, and Cisco's share for the full year amounted to less than three percent of the global server market in revenue terms. At the same time, in the blade server segment the company already holds 15-16%, a remarkable figure given that Cisco has been developing this line only since the spring of 2009.

From converged systems to "submerged" servers

In 2012 the development of integrated solutions for converged computing infrastructure continued. For example, Hitachi Data Systems expanded its Unified Compute Platform family, presenting eleven new models to the market. The first generation of UCP appeared in 2010 but did not gain great popularity worldwide, so the developers decided to upgrade the main subsystems (servers, switches, storage, and software) to eliminate the drawbacks of the previous UCP generation. In particular, a new blade server model saw the light of day. HDS hopes that thanks to these improvements the converged platform will win its place in a market where Cisco, EMC, IBM, HP, NetApp, and others are already active.

Another update to an integrated computing solution, the Exadata X3 Database In-Memory Machine, was shown in 2012 by Oracle. The system differs from the previous generation in its increased SSD capacity, an updated Exadata Smart Flash Cache, 8-core Intel Xeon E5-2600 series processors, a larger number of 10 GbE interfaces, and a new sales format (you can now buy a fully configured 1/8 rack).

Integrated solutions based on open technologies (such as x86 processors) are increasingly intruding into areas previously wholly owned by "heavy" and "closed" systems. Indicative, for example, was the fact that in February 2012 NASA shut down its last mainframe, an IBM z9. The agency has now moved entirely to compute clusters.

A noticeable trend of the past year in servers and storage was the continuing race for energy efficiency and placement density. In this context, Dell brought to market storage in blade format (discussed in detail below, in a separate section), as well as, for the first time in the world, blade servers a quarter of a slot in height (Fig. 1).

Fig. 1. The Dell PowerEdge M420 blade server contains two 8-core processors but occupies only a quarter of a chassis slot

A standard 10U M1000e chassis can now hold 32 such servers, each containing up to two 8-core Intel Xeon processors, or 512 cores (1024 threads) per chassis overall. These developments brought Dell's blade platform to first place in 2012 for density of computing resources (in the segment of mass-market x86 solutions). Until recently its rival on this indicator was the 2-processor HP ProLiant BL220c G7 blade server, which also fit up to 32 units in a single 10U chassis, but the manufacturer no longer produces it. The only alternative to the "superdense" Dell servers is the AMD SeaMicro SM15000 system, which allows you to put 64 single-processor "blades" based on 8-core Opteron chips or 4-core Xeons into a common 10U enclosure (Fig. 2).


Fig. 2. The new AMD SeaMicro SM15000 server takes compute density to new levels

A feature of the solution is, among other things, the special Freedom Supercompute Fabric backplane, which appeared in AMD's arsenal after the acquisition of SeaMicro. The FSF switching matrix has a total bandwidth of up to 1.28 Tbit/s and, more importantly, allows connecting not only the servers inside the chassis but also up to 5 PB of external storage.

It is no secret that a large share of the electricity consumed by a data center goes not to the IT load but to engineering systems and communications, in particular cooling. Engineers constantly attempt to devise more efficient heat-removal mechanisms. One interesting approach that took real shape in 2012 is cooling by "submersion", when servers are immersed in a special dielectric fluid. This approach was first discussed about five years ago, and the first experimental solutions began to appear soon after, though they did not gain much popularity on a global scale.

In the past year, however, several major companies announced support for the concept. For example, Facebook proposed servers immersed in a liquid similar in composition and consistency to mineral oil. Practical tests of the technology have already been carried out: Intel ran such a heat-removal system for a whole year, using the resources of its own data center in New Mexico.

Immersion in "oil" is not the only possible approach. 3M has developed a solution in which the liquid boils off the server components, which are placed in a special bath, and is then condensed in a dedicated circuit for reuse. The refrigerant is Novec, a fluid with dielectric properties and a low boiling point. Some manufacturers offer spot liquid cooling not of the whole server but only of its hottest components (the CPU, RAM, parallel-computation accelerators, etc.). Such developments come, in particular, from Asetek, Iceotope, and a number of other companies.

ARM servers

In 2012 the trend of using ultra-low-power processors in servers continued. Following Hewlett-Packard, which presented the Redstone platform (Fig. 3) based on RISC ARM processors at the end of 2011, microserver developments were shown by Penguin Computing, Dell, and others.

Fig. 3. The Hewlett-Packard Redstone system uses ARM processors. In the photo: a module holding the computing boards (a) and a chassis for installing four such modules (b)

While HP plans in the future to move to new Intel Atom chips and future AMD chips (which are also expected to be built on the ARM architecture), Dell, for example, plans to introduce servers equipped with both x86 and ARM processors. To this end, the company is developing a universal infrastructure management system that supports both CISC and RISC processors. Note that a universal connector allowing the installation of both x86 and ARM processors was first presented last year at the Open Compute Summit conference organized by Facebook. Dell jumped at the idea and plans to offer commercial solutions with the universal connector in the short term. Since last year the company has offered servers based on 64-bit ARM processors to its customers for testing. Bringing a commercial version to market is scheduled for 2013 at the earliest.

Storage: "hybrid" mood

Also in this short review, let us look at some interesting events and trends of 2012 in the corporate storage market. One clear trend is the further penetration of SSDs into the segment. All major and many minor manufacturers presented their strategies, approaches, and developments in this area. For example, in November Intel introduced a new generation of SSDs with a SATA interface and much higher read/write speeds: compared with the previous generation, the read speed has nearly doubled and the write speed has grown about fifteen times. The disks are designed, first and foremost, to improve the performance of multi-core computing. In operation the new drives consume up to 6 W (and no more than 650 mW in standby mode). The maximum capacity is 800 GB at a price of about $2,000, so these solutions are still expensive.

In the area of traditional hard drives, Western Digital introduced a new technology in which the air inside the drive is replaced with helium, reducing the gaps between the magnetic platters and increasing the device's capacity. A significant breakthrough was also seen in magnetic tape storage. Many had already begun to write off this type of storage, but the LTO consortium of developers pleased users with a new standard: an LTO-6 cartridge has a capacity of up to 6.25 TB and transfer rates up to 400 MB/s (in both cases for compressed data).


Last year Symantec entered the storage hardware market, presenting its own line of NetBackup backup appliances (Fig. 4).

Fig. 4. The Symantec NetBackup 5220 integrated data storage and protection appliance

Also interesting is the tendency toward compact placement of hard drives. In pursuit of efficient use of rack space, manufacturers offer curious engineering solutions. In the middle of the year, Dell introduced the EqualLogic PS-M4110 Blade Array, a disk array in blade format. It can be installed in a standard M1000e 10U chassis alongside the company's PowerEdge servers and Force10 or PowerConnect switches. The array holds a maximum of 14 SAS hard drives (up to 1 TB each). Up to two modules can be installed in one blade chassis (Fig. 5).

Fig. 5. The Dell EqualLogic PS-M4110 Blade Array: separately (a) and as part of a standard 10U PowerEdge M1000e blade chassis (b)

A special modification can use SSDs alongside the SAS drives. One module occupies two half-height bays; thus, by installing two PS-M4110 arrays you can get up to 28 TB of storage in a single chassis while occupying only two full-height bays.

In addition, 2012 saw the continued development of systems that combine magnetic and SSD drives. A few years ago this approach was typical only of high-end storage; now a number of manufacturers offer hybrid systems as mid-level solutions.

The general trend characteristic of the entire market is the desire to combine and unify heterogeneous platforms. Hence the popularity of converged computing solutions, the development of universal server connectors, and hybrid storage systems. In the coming years these areas will develop, strengthen, and gain new supporters.



From a simple relational database management system, SQL Server has evolved into a multi-purpose enterprise-level data platform.

TCP 1433

TCP 1433 is the port selected by default for SQL Server. It is the official IANA (Internet Assigned Numbers Authority) socket number for SQL Server. Client systems use TCP port 1433 to connect to the database engine; SQL Server Management Studio (SSMS) uses the same port to manage SQL Server instances across the network. You can configure SQL Server to listen on another port, but in most cases port 1433 is used.
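A quick way to verify from a client machine that something is actually listening on port 1433 is to attempt a plain TCP connection. The Python sketch below is illustrative only; the host name in the usage comment is a placeholder:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0

# Host name is hypothetical; substitute your own server.
# print(is_port_open("sqlserver.example.com", 1433))
```

A False result does not always mean SQL Server is down: a firewall may silently drop the connection, or the instance may listen on a non-default port.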

TCP 1434

TCP 1434 is the port selected by default for the Dedicated Administrator Connection (DAC). You can open a dedicated administrator connection from the sqlcmd command line, or by typing ADMIN: followed by the server name in the SSMS Connect to Database Engine dialog box.

UDP 1434

UDP port 1434 is used for named instances of SQL Server. The SQL Server Browser service listens on this port to detect incoming connections to a named instance, and then sends the client the TCP port number for the requested instance name.
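The exchange on UDP 1434 follows the SQL Server Resolution Protocol (MS-SQLR): the client sends a one-byte request (0x03 asks for all instances), and the Browser replies with an SVR_RESP message consisting of a 0x05 byte, a little-endian payload length, and a semicolon-separated key/value string. The sketch below parses such a response under those wire-format assumptions; the sample message in the usage note is synthetic, not captured from a real server:

```python
import struct

# CLNT_UCAST_EX: a single 0x03 byte sent to UDP port 1434
CLNT_UCAST_EX = b"\x03"

def parse_ssrp_response(data: bytes) -> dict:
    """Parse the key/value payload of an SVR_RESP (0x05) message."""
    if not data or data[0] != 0x05:
        raise ValueError("not an SVR_RESP message")
    # next two bytes: payload length, little-endian
    (length,) = struct.unpack_from("<H", data, 1)
    payload = data[3:3 + length].decode("ascii")
    tokens = payload.strip(";").split(";")
    # pair up alternating keys and values
    return dict(zip(tokens[0::2], tokens[1::2]))
```

For example, a synthetic response whose payload is `ServerName;DBHOST;InstanceName;SQLEXPRESS;tcp;1433;;` parses into a dict mapping `InstanceName` to `SQLEXPRESS` and `tcp` to `1433`, the TCP port the client should then connect to.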

TCP 2383

TCP 2383 is the default port for the SQL Server Analysis Services service.

TCP 2382

TCP port 2382 is used by connection requests to a named instance of Analysis Services. As with the relational database engine and UDP port 1434, the SQL Server Browser service listens on TCP port 2382 to detect requests for named instances of Analysis Services; the request is then redirected to the appropriate port for the named instance.

TCP 135

TCP port 135 has a number of uses: it is used by the Transact-SQL debugger, and it is used to start, stop, and manage SQL Server Integration Services, although the latter is needed only when you connect to a remote instance of the service from SSMS.

TCP 80 and 443

TCP ports 80 and 443 are most commonly used for access to the report server. However, they also support URL requests to SQL Server and Analysis Services. TCP 80 is the default port for HTTP connections that use a URL; TCP 443 is used for HTTPS connections over SSL.

Unofficial TCP ports

Microsoft uses TCP port 4022 for SQL Server Service Broker examples in SQL Server Books Online. Similarly, the BOL examples for Database Mirroring use TCP port 7022.

This list covers the most essential ports. For more information about the TCP and UDP ports used by SQL Server, see the Microsoft article "Configure the Windows Firewall to Allow SQL Server Access".
