How to publish a best-selling book

Sebastian Raschka on upgrading your expertise and bringing it to the community

net magazine - VOICES - The second edition of Sebastian Raschka's bestselling book Python Machine Learning is available to pre-order through Packt Publishing now.

If you're a bit of a machine learning geek, you might have come across Sebastian Raschka on the Twitterverse. If you haven't, you're about to. Raschka is a researcher at Michigan State University, an expert in applied machine learning and deep learning, and author of Packt's best-selling book of all time, Python Machine Learning. With the upcoming launch of the book's second edition, Raschka is here to give you some tips on becoming an expert in your field and getting your work recognised by a wider audience.

Learn and practice

The first and most important thing is to learn as much as you can and to really throw yourself into your topic. Probably the most influential factor in me starting my career was a class I took on statistical pattern classification when I started my PhD. I was already a passionate Python programmer and I wasn't happy about submitting my homework using MATLAB. Because of this, I spent a lot of time reimplementing algorithms from papers and textbooks in Python using NumPy and SciPy. This might seem like it was a tedious exercise (you'd be right), but studying the concepts of statistical pattern classification more intensely helped me a great deal with translating ideas from theory to code.
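To give a flavour of the kind of from-scratch reimplementation exercise described here, the sketch below builds a minimal nearest-centroid classifier with plain NumPy. This is an illustrative toy, not an algorithm or code taken from the book; the data, function names and class labels are invented for the example.

```python
import numpy as np

def fit_centroids(X, y):
    """Compute one centroid (mean feature vector) per class label."""
    labels = np.unique(y)
    return labels, np.array([X[y == c].mean(axis=0) for c in labels])

def predict(X, labels, centroids):
    """Assign each sample to the class with the nearest centroid."""
    # Pairwise Euclidean distances, shape (n_samples, n_classes)
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return labels[np.argmin(dists, axis=1)]

# Two well-separated 2D clusters as toy data
X = np.array([[0.0, 0.1], [0.2, -0.1], [3.0, 3.1], [2.8, 2.9]])
y = np.array([0, 0, 1, 1])

labels, centroids = fit_centroids(X, y)
print(predict(X, labels, centroids))  # → [0 0 1 1]
```

Rewriting even a simple classifier like this, rather than calling a library, is exactly the sort of exercise that forces you to translate the statistics on the page into working array operations.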

If you're starting out, I recommend starting with a practical, introductory book or course to get a brief overview of your field and the different techniques that exist. For someone interested in my field, for example, a concrete place to start would be understanding the big picture and what data science and machine learning are capable of. I'd then recommend starting a project that you're passionate about whilst applying your newly learned techniques to help you address and answer complex questions that might arise during your project. When you're working on an exciting project, I think you're more likely to be naturally motivated to read advanced material and improve your skills.

Join communities

Around the time I enrolled in the statistical pattern classification class, I developed the strong urge to talk about all the things I was learning and discovering. I found it very useful in terms of my own understanding to discuss what I'd discovered and what techniques I found useful and exciting.

One thing led to another and I started a blog. I then became more and more active in the open-source community, as well as avidly participating in discussions on social media. This helped me meet new people and air my ideas, and eventually I was contacted by Packt about an opportunity to dedicate a whole book to two of the things I'm most passionate about: machine learning and open source tools. The rest, they say, is history.

Show your expertise

I cover a lot of different subfields of machine learning in my book: classification, regression analysis and so forth, with the intention of showing that machine learning can be useful in almost every problem domain. I think that knowing as much as you can about your subject area is crucial in reaching a wide audience and being able to answer questions posed to you.

Demonstrating well-developed and maintained open-source software in my examples makes machine learning more accessible to a broad audience, including experienced programmers and people who are new to programming. And remember the basics! By introducing the basic mathematics behind machine learning, I aim to educate my audience and show that machine learning is more than just black-box algorithms, giving readers an idea of the capabilities but also the limitations of machine learning and how to apply those algorithms wisely.

Listen to your audience

In the second edition of my book, we improved or added many things based on feedback, which is something that I think all writers should do, and not just when writing a book. Among the additions is a section on dealing with imbalanced datasets, which several readers thought was missing in the first edition. As time moves on, so does the world of software. When the first edition of Python Machine Learning was released in 2015, we included an introduction to deep learning via Theano. Since then, that introduction has had a substantial overhaul: the second edition includes a new introduction to deep learning using TensorFlow, which has become a major player in my research toolbox since its open-source release by Google in 2015.
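On the imbalanced-datasets point: one common and simple remedy is to randomly oversample the minority class until the class counts match. The sketch below implements this for a binary problem with plain NumPy; it is an illustration of the general technique, not code from the book, and the function name and toy data are invented for the example.

```python
import numpy as np

def oversample_minority(X, y, rng=None):
    """Duplicate random minority-class samples until the two classes
    are balanced (binary-class sketch)."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    # Indices of minority samples, resampled with replacement
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=deficit, replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

# Toy data with a 9:1 class imbalance
X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0] * 9 + [1])

X_bal, y_bal = oversample_minority(X, y, rng=0)
print(np.bincount(y_bal))  # → [9 9]
```

Oversampling is only one option; depending on the problem, undersampling the majority class or weighting the loss per class can be more appropriate.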

I really appreciated all the helpful feedback from readers, and I recommend making sure to improve your work all the time based on feedback. Your audience will appreciate you going back to reword paragraphs where things might not be totally clear and add additional explanations where necessary.
