Recognizing written domain numeric utterances (e.g. I need $1.25.) can be challenging for ASR systems, particularly when numeric sequences are not seen during training. This out-of-vocabulary (OOV) issue is addressed in conventional ASR systems by training part of the model on spoken domain utterances (e.g. I need one dollar and twenty five cents.), for which numeric sequences are composed of in-vocabulary numbers, and then using an FST verbalizer to denormalize the result. Unfortunately, conventional ASR models are not suitable for the low-memory setting of on-device speech recognition. E2E models such as RNN-T are attractive for on-device ASR, as they fold the AM, PM and LM of a conventional model into one neural network. However, in the on-device setting the large memory footprint of an FST denormer makes spoken domain training more difficult. In this paper, we investigate techniques to improve E2E model performance on numeric data. We find that using a text-to-speech system to generate additional numeric training data, as well as using a small-footprint neural network to perform spoken-to-written domain denorming, yields improvement in several numeric classes. In the case of the longest numeric sequences, we see a reduction of WER by up to a factor of 8.
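
To make the spoken-to-written denorming step concrete, the following is a minimal toy sketch of the idea: it maps an in-vocabulary spoken numeric phrase (e.g. "one dollar and twenty five cents") back to its written form ("$1.25"). This is an illustrative rule-based example only, not the paper's FST verbalizer or its small-footprint neural denormer; the names (denorm_dollars, words_to_int, MONEY) and the narrow dollars-and-cents pattern are assumptions made for this sketch.

import re

# Spoken number words for 0-99 (a small subset is enough for this sketch).
UNITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
         "eleven": 11, "twelve": 12, "thirteen": 13, "fourteen": 14,
         "fifteen": 15, "sixteen": 16, "seventeen": 17, "eighteen": 18,
         "nineteen": 19}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50, "sixty": 60,
        "seventy": 70, "eighty": 80, "ninety": 90}

NUM = "|".join(list(UNITS) + list(TENS))
# Matches "<number words> dollar(s) and <number words> cent(s)".
MONEY = re.compile(rf"\b((?:(?:{NUM})\s+)+)dollars?\s+and\s+((?:(?:{NUM})\s+)+)cents?")


def words_to_int(words):
    # Sum tens and units, e.g. ["twenty", "five"] -> 25.
    return sum(TENS.get(w, UNITS.get(w, 0)) for w in words)


def denorm_dollars(spoken):
    # Rewrite a spoken-domain money phrase into its written form, e.g.
    # "i need one dollar and twenty five cents" -> "i need $1.25".
    m = MONEY.search(spoken)
    if m is None:
        return spoken  # nothing to denormalize
    dollars = words_to_int(m.group(1).split())
    cents = words_to_int(m.group(2).split())
    return spoken[:m.start()] + f"${dollars}.{cents:02d}" + spoken[m.end():]


print(denorm_dollars("i need one dollar and twenty five cents"))  # i need $1.25

In a conventional system, coverage of this kind of mapping is what the FST verbalizer provides; in the on-device setting described in the paper, a small-footprint neural network learns to perform the spoken-to-written mapping instead, avoiding the large memory footprint of the FST denormer.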