How do I handle multiple date formats in large CSV files with tsload?

I have a large CSV file that I'm trying to load through the tsload command line tool. I see that there's an option to specify the date format with the --date_format flag. What if my file has multiple date formats in it? For example, my file has a column RECEIVED_DT with dates like "15102011" and a column RECEIVED_PROCESS_DATE with dates like "2011-10-15", so I have a file with both %d%m%Y and %Y-%m-%d. What is the best way of handling this? I'd like to import them both as date types. Is there a recommended ThoughtSpot way of doing this?

  • tsload only supports a single date format for a given CSV file. The same is true for datetime, boolean, and null representations.

    There are two ways people handle this: 1) change the extract process so that the multiple columns share the same format, or 2) use a tool such as sed to convert the values inside the file before loading.

  • If you decide to go with sed as Bill mentioned above, the following command can help you convert your '%Y-%m-%d' dates to '%m%d%Y' format:

    cat <file_name> | sed -e 's/\([0-9]\{4\}\)-\([0-9]\{2\}\)-\([0-9]\{2\}\)/\2\3\1/g' | tsload
    

    This command searches each line of your file for the pattern ####-##-## and converts it to ########, where the first two digits represent the month, the next two the day, and the last four the year. (The trailing g makes sed convert every match on a line, not just the first.)
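
    The sed above rewrites that pattern wherever it appears on a line. If you want to be safer and touch only one column, a small awk sketch like the following could work (the column number and file name here are hypothetical, and it assumes a simple CSV with no quoted commas):

    # convert only column 2 from YYYY-MM-DD to MMDDYYYY, leaving the header row untouched
    awk -F',' 'BEGIN{OFS=","}
      NR == 1 { print; next }
      { split($2, d, "-"); $2 = d[2] d[3] d[1]; print }' input.csv | tsload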

  • The incoming data file is in mm/dd/yy format. Using tsload, how can I convert it to mm/dd/yyyy format?

  • Lokesh Ceeba ThoughtSpot can automatically interpret mm/dd/yy dates using the following tsload flag:

    --datetime_format %m/%d/%y

    This is standard strptime notation.

    %y refers to the year within the century: 69-99 will be interpreted as 1969-1999 and 00-68 as 2000-2068.
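
    GNU date follows the same POSIX two-digit-year convention, so you can sanity-check the pivot from the shell:

    $ date -d '01/01/69' +%Y    # 69-99 map to the 1900s
    1969
    $ date -d '01/01/68' +%Y    # 00-68 map to the 2000s
    2068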

  • Why is this not working?

    The raw data file has dates: 1/28/2017; 1/14/2017; 1/7/2017; 1/14/2017

    The tsload command has --date_time_format '%m/%d/%Y'

    The target column in the table is declared DATETIME.

    But why does tsload fail? It is so buggy :)

    Reason: Conversion failed, error=Can not convert "Period" to date using date format "%m/%d/%Y", data column=3, schema column=Period

  • Looks like there are a few issues going on here. 

    1. The error message says that tsload is trying to load the value "Period" into a datetime column. I am assuming "Period" is your column header. You will want to add the flag --has_header_row to your tsload command so that ThoughtSpot knows to skip the first row and start the load on the second row.
    2. Your data does not have any times associated with it, so it will probably not load into a datetime column properly, since datetimes expect %H:%M:%S values as well. I would recommend converting your datetime column to a date column and using the flag --date_format '%m/%d/%Y' instead. (My previous date_time flag response had an error in it; the real flag would have been --date_time_format '%m/%d/%Y %H:%M:%S'.)
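
    A minimal sketch of the corrected invocation (the file, database, and table names here are hypothetical; tsload reads the CSV on stdin):

    cat my_file.csv | tsload --target_database "my_db" --target_table "my_table" \
        --has_header_row --source_data_format csv --field_separator ',' \
        --date_format '%m/%d/%Y'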

    Here is some supporting documentation:

    https://docs.thoughtspot.com/4.5/reference/data-importer-ref.html

    https://docs.thoughtspot.com/4.5/admin/loading/about-data-type-conversion.html#date-and-time-conversions

    https://www.systutorials.com/docs/linux/man/3-strptime/

  • Yes, I knew you would come back with that:

    CREATE TABLE "my_db"."falcon_default_schema"."my_table" (
      "Market_Display_Name" VARCHAR(255),
      "UPC" VARCHAR(255),
      "Period" DATE,
      "POS_USD" VARCHAR(255),
      "Units" VARCHAR(255),
      "ACV" DOUBLE,
     PRIMARY KEY ("Period" ,"UPC" ,"Market_Display_Name")
    )PARTITION BY HASH  (128) KEY ("UPC","Period","Market_Display_Name");

    --------

    tsload --target_database "my_db" --target_table "my_table" --date_format '%m/%d/%Y' --null_value "" --has_header_row --source_data_format csv --field_separator ','

    ---

    Raw Data File Sample:

    Market_Display_Name,UPC,Period,POS_USD,Units,ACV
    Testing Markets USA,506024506946,1/28/2017,4,1,0.0
    Testing Markets USA,506024506946,1/21/2017,2,1,0.0
    Testing Markets USA,506024506946,1/14/2017,4,1,0.0
    Testing Markets USA,506024506946,1/7/2017,8,2,0.0
    Testing Markets USA,506024506940,1/28/2017,4,2,0.0

  • Check it out, the error still persists:

    ---------------

    Header row read successfully
    E0711 00:17:27.838273 46981 data_importer.cpp:934] Market_Display_Name,UPC,Period,POS_USD,Units,ACV
    , line no=87789 ;Reason: Conversion failed, error=Can not convert "Period" to date using date format "%m/%d/%Y", data column=3, schema column=Period
    E0711 00:17:27.838269 46861 data_importer.cpp:1828] Stopped processing after 172163 lines, Exceeded max allowed errors 0
    Source has 172162 data rows, has header row, ignored row count 125915
    E0711 00:17:27.850942 46861 data_importer.cpp:1909] Extract failed, exceeded max errors 0

    -----------------------

  • It looks like the line that tsload is complaining about is line number 87789.  Take a look at that row and see if you can spot any obvious errors. 

    Additionally, try adding the flag --max_ignored_rows 10. This will allow you to look at up to 10 bad rows of data before the load job rejects the entire file.
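
    To pull that row out for inspection (the file name here is hypothetical, and it's worth checking by eye whether tsload's reported line number counts the header row):

    # print line 87789 plus a neighbor on either side
    sed -n '87788,87790p' raw_data.csv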

  • Now I declared the date column as VARCHAR.

    Then I altered the table:

    ALTER TABLE "wmt_hist_nielsen" modify column "Period" DATE;
    Update columns message = error code = ALTER_TABLE_FAILED, message = Poll: worker 2 error:
    Statement=ALTER TABLE "wmt_hist_nielsen" modify column "Period" DATE;
     returned error.
    --

  • Lokesh Ceeba 

    Commands used:

     ALTER TABLE "wmt_hist_nielsen" modify column "Period" DATE [parsinghint="%Y/%m/%d"];
    Update columns message = error code = ALTER_TABLE_FAILED, message = Poll: worker 2 error:
    Statement=ALTER TABLE "wmt_hist_nielsen" modify column "Period" DATE [parsinghint="%Y/%m/%d"];
     returned error.

    -------------------
    TQL [database=nielsen]> ALTER TABLE "wmt_hist_nielsen" modify column "Period" DATE [parsinghint="%m/%d/%Y"];
    Update columns message = error code = ALTER_TABLE_FAILED, message = Poll: worker 2 error:
    Statement=ALTER TABLE "wmt_hist_nielsen" modify column "Period" DATE [parsinghint="%m/%d/%Y"];
     returned error.
    ----------------------

  • Were you able to successfully load data into the table once you converted the Period column to a varchar?

  • Yes, absolutely. But that is still my data-prep temp table. My final target fact table has to be loaded from this temp table.

  • If you were able to load data once you converted Period to a varchar, then could it be that the word "Period" exists as a value in the Period column? If so, converting the column back to a date column would be impossible. Would you agree?
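
    One quick way to test that hypothesis from the shell (the file name is hypothetical, and this assumes no embedded commas in the first three fields):

    # count data rows whose third field is the literal string "Period"
    awk -F',' 'NR > 1 && $3 == "Period" {n++} END {print n+0}' raw_data.csv
    # or look for repeated header rows, which would produce exactly that value
    grep -n '^Market_Display_Name,UPC,Period' raw_data.csv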

  • I would recommend the following:

    1. truncate your table
    2. convert Period back to a date column
    3. review row 87789 to confirm it does not contain any bad data
    4. add "--max_ignored_rows 10" to your load script
    5. try loading again
  • Great! Now I am able to overcome the date issue. The remaining issue is with fields declared as DOUBLE: the data has commas inside values like "1,263" and "12,487" (it is a CSV file).

    --

    Walmart Total US TA,501287433006,1/21/2017,"2,921",593,14.0
    , line no=89 ;Reason: Conversion failed, error=Can not convert "2,921" to double, data column=4, schema column          =POS_USD;Extra characters after number ,921
    E0711 01:30:29.894520 25412 data_importer.cpp:934] Walmart Total US TA,501287433006,1/14/2017,"2,749",554,13.0
    , line no=90 ;Reason: Conversion failed, error=Can not convert "2,749" to double, data column=4, schema column          =POS_USD;Extra characters after number ,749
    --

  • Finally it got loaded, but only after converting the money and QTY columns to strings. But now how can we use these columns as normal? Is there a way to convert a string to an INT or a DOUBLE?

  • Hey Lokesh, if your varchar column contains values with commas, then you will not be able to convert it to a double. Commas are formatting characters that cannot be interpreted by the database as part of a measure. I would recommend cleaning the data so that your measures do not include commas. I would also recommend that you request office hours to get more hands-on help with tsload best practices. Here is a link to the office hours request form: https://thoughtspotcs-officehours.youcanbook.me/
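
    If you would rather clean the file than change the schema, a GNU sed sketch in the spirit of the earlier date fix could strip the thousands separators (this assumes commas sit between digits only inside quoted numeric fields):

    # repeatedly collapse a comma between digits inside a quoted field,
    # e.g. "2,921" -> "2921" and "1,234,567" -> "1234567"
    sed -E ':a; s/"([0-9]+),([0-9,]*[0-9])"/"\1\2"/; ta' input.csv > cleaned.csv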

  • I am unable to load this sample file. Can someone try it out? It has a VARBINARY column and contains multiple date/time formats.

    "Calendar_Smart_Key","Date_Type","Full_Date","Day_Num_Of_Week","Day_Num_Of_Month","Day_Num_Of_Quarter","Day_Num_Of_Year","Day_Of_Week_Name","Day_Of_Week_Abbreviation","Day_Of_Week_Abbreviation_Short","Part_Of_Week","Part_Of_Week_Long","Julian_Day_Num_Of_Year","Julian_Day_Num_Absolute","Is_Weekday","Is_Last_Day_Of_Week","Is_Last_Day_Of_Month","Is_Last_Day_Of_Quarter","Is_Last_Day_Of_Year","Is_Last_Day_Of_Fiscal_Month","Is_Last_Day_Of_Fiscal_Quarter","Is_Last_Day_Of_Fiscal_Year","Prev_Day_Date","Prev_Day_Calendar_Key","Same_Weekday_Year_Ago_Date","Same_Weekday_Year_Ago_Calendar_Key","Week_Of_Year_Begin_Date","Week_Of_Year_Begin_Calendar_Key","Week_Of_Year_End_Date","Week_Of_Year_End_Calendar_Key","Week_Of_Month_Begin_Date","Week_Of_Month_Begin_Calendar_Key","Week_Of_Month_End_Date","Week_Of_Month_End_Calendar_Key","Week_Of_Quarter_Begin_Date","Week_Of_Quarter_Begin_Calendar_Key","Week_Of_Quarter_End_Date","Week_Of_Quarter_End_Calendar_Key","Week_Num_Of_Month","Week_Num_Of_Quarter","Week_Num_Of_Year","Month_Num_Of_Year","Month_Name","Month_Name_Abbreviation","Month_Begin_Date","Month_Begin_Calendar_Key","Month_End_Date","Month_End_Calendar_Key","Quarter_Num_Of_Year","Quarter_Begin_Date","Quarter_Begin_Calendar_Key","Quarter_End_Date","Quarter_End_Calendar_Key","Year_Num","Year_Begin_Date","Year_Begin_Calendar_Key","Year_End_Date","Year_End_Calendar_Key","YYYYMM","YYYYMMDD","DDMONYY","DDMONYYYY","Qtr_Label","Mth_Label","Fiscal_Week_Num_Of_Year","Fiscal_Week_Begin_Calendar_Key","Fiscal_Week_Begin_Date","Fiscal_Week_End_Calendar_Key","Fiscal_Week_End_Date","Fiscal_Month_Num_Of_Year","Fiscal_Month_Begin_Date","Fiscal_Month_Begin_Calendar_Key","Fiscal_Month_End_Date","Fiscal_Month_End_Calendar_Key","Fiscal_Quarter_Num_Of_Year","Fiscal_Quarter_Begin_Date","Fiscal_Quarter_Begin_Calendar_Key","Fiscal_Quarter_End_Date","Fiscal_Quarter_End_Calendar_Key","Fiscal_Year_Num","Fiscal_Year_Begin_Date","Fiscal_Year_Begin_Calendar_Key","Fiscal_Year_End_Date","Fiscal_Year_End_Calendar_Key","Fiscal_Day_Num_Of_Year","Fiscal_Day_Num_Of_Quarter","Fiscal_Day_Num_Of_Month","Fiscal_Day_Num_Of_Week","Fiscal_Qtr_Label","Day_Num_Absolute","Relative_Days","Relative_Days_Order","Relative_Months","Relative_MTD","Relative_Quarter","Relative_QTD","Relative_Year","Relative_FY","Relative_YTD","Relative_FYTD","Inserted_Batch_Execution_Id","Updated_Batch_Execution_Id","Row_Effective_Date","Row_End_Date","Row_Is_Latest","Row_Is_Current","SCD_Type_1_Hash","SCD_Type_2_Hash"
    "20131223","Normal","2013-12-23 00:00:00","2","23","84","357","MONDAY","MON","M ","Early","Weekday","2013357","2013041630","Y","N","N","N","N","N","N","N","","","","","2013-12-22 00:00:00","20131222","2013-12-28 00:00:00","20131228","2013-12-22 00:00:00","20131222","2013-12-28 00:00:00","20131228","2013-12-17 00:00:00","20131217","2013-12-23 00:00:00","20131223","4","12","52","12","DECEMBER","Dec","2013-12-01 00:00:00","20131201","2013-12-31 00:00:00","20131231","4","2013-10-01 00:00:00","","2013-12-31 00:00:00","","2013","2013-01-01 00:00:00","20130101","2013-12-31 00:00:00","20131231","201312","20131223","23 Dec 13","23 Dec 2013","Qtr 4, 2013","Dec 2013","26","20131223","2013-12-23 00:00:00","20131229","2013-12-29 00:00:00","6","2013-12-01 00:00:00","20131201","2013-12-31 00:00:00","20131231","2","2013-10-01 00:00:00","20131001","2013-12-31 00:00:00","20131231","2014","2013-07-01 00:00:00","20130701","2014-06-30 00:00:00","20140630","176","84","23","1","Qtr 2, 2014","41630","Other","10","Other","Other","Other","Other","Other","Other","Other","Other","1","0","0001-01-01 00:00:00.0000000","9999-12-31 23:59:59.9970000","True","True","0E0C6792DE7A5E71695E5DECCD5FCE8221A91518",""
    "20131224","Normal","2013-12-24 00:00:00","3","24","85","358","TUESDAY","TUE","Tu","Early","Weekday","2013358","2013041631","Y","N","N","N","N","N","N","N","","","","","2013-12-22 00:00:00","20131222","2013-12-28 00:00:00","20131228","2013-12-22 00:00:00","20131222","2013-12-28 00:00:00","20131228","2013-12-24 00:00:00","20131224","2013-12-30 00:00:00","20131230","4","13","52","12","DECEMBER","Dec","2013-12-01 00:00:00","20131201","2013-12-31 00:00:00","20131231","4","2013-10-01 00:00:00","","2013-12-31 00:00:00","","2013","2013-01-01 00:00:00","20130101","2013-12-31 00:00:00","20131231","201312","20131224","24 Dec 13","24 Dec 2013","Qtr 4, 2013","Dec 2013","26","20131223","2013-12-23 00:00:00","20131229","2013-12-29 00:00:00","6","2013-12-01 00:00:00","20131201","2013-12-31 00:00:00","20131231","2","2013-10-01 00:00:00","20131001","2013-12-31 00:00:00","20131231","2014","2013-07-01 00:00:00","20130701","2014-06-30 00:00:00","20140630","177","85","24","2","Qtr 2, 2014","41631","Other","10","Other","Other","Other","Other","Other","Other","Other","Other","1","0","0001-01-01 00:00:00.0000000","9999-12-31 23:59:59.9970000","True","True","129017FF7B886E0329B34220C75D2F1D9DDFCE03",""
    "20131225","Normal","2013-12-25 00:00:00","4","25","86","359","WEDNESDAY","WED","W ","Mid","Weekday","2013359","2013041632","Y","N","N","N","N","N","N","N","","","","","2013-12-22 00:00:00","20131222","2013-12-28 00:00:00","20131228","2013-12-22 00:00:00","20131222","2013-12-28 00:00:00","20131228","2013-12-24 00:00:00","20131224","2013-12-30 00:00:00","20131230","4","13","52","12","DECEMBER","Dec","2013-12-01 00:00:00","20131201","2013-12-31 00:00:00","20131231","4","2013-10-01 00:00:00","","2013-12-31 00:00:00","","2013","2013-01-01 00:00:00","20130101","2013-12-31 00:00:00","20131231","201312","20131225","25 Dec 13","25 Dec 2013","Qtr 4, 2013","Dec 2013","26","20131223","2013-12-23 00:00:00","20131229","2013-12-29 00:00:00","6","2013-12-01 00:00:00","20131201","2013-12-31 00:00:00","20131231","2","2013-10-01 00:00:00","20131001","2013-12-31 00:00:00","20131231","2014","2013-07-01 00:00:00","20130701","2014-06-30 00:00:00","20140630","178","86","25","3","Qtr 2, 2014","41632","Other","10","Other","Other","Other","Other","Other","Other","Other","Other","1","0","0001-01-01 00:00:00.0000000","9999-12-31 23:59:59.9970000","True","True","01E254877FD481C26263A89AB4B7926B7A7E89E4",""
    "20131226","Normal","2013-12-26 00:00:00","5","26","87","360","THURSDAY","THU","Th","Mid","Weekday","2013360","2013041633","Y","N","N","N","N","N","N","N","","","","","2013-12-22 00:00:00","20131222","2013-12-28 00:00:00","20131228","2013-12-22 00:00:00","20131222","2013-12-28 00:00:00","20131228","2013-12-24 00:00:00","20131224","2013-12-30 00:00:00","20131230","4","13","52","12","DECEMBER","Dec","2013-12-01 00:00:00","20131201","2013-12-31 00:00:00","20131231","4","2013-10-01 00:00:00","","2013-12-31 00:00:00","","2013","2013-01-01 00:00:00","20130101","2013-12-31 00:00:00","20131231","201312","20131226","26 Dec 13","26 Dec 2013","Qtr 4, 2013","Dec 2013","26","20131223","2013-12-23 00:00:00","20131229","2013-12-29 00:00:00","6","2013-12-01 00:00:00","20131201","2013-12-31 00:00:00","20131231","2","2013-10-01 00:00:00","20131001","2013-12-31 00:00:00","20131231","2014","2013-07-01 00:00:00","20130701","2014-06-30 00:00:00","20140630","179","87","26","4","Qtr 2, 2014","41633","Other","10","Other","Other","Other","Other","Other","Other","Other","Other","1","0","0001-01-01 00:00:00.0000000","9999-12-31 23:59:59.9970000","True","True","0BE30DB239C263AB39FAD5F9E85044556D21D5A5",""

  • Lokesh, regarding the varbinary fields: the values effectively become varchars in the extract, so a datatype of VARCHAR should suffice. The tsload utility requires that all date fields use one consistent format and all datetime fields use another (the two formats can differ from each other). You can specify each format mask only once per input file, so it's not possible to specify a different mask per column. See the help topic on tsload flags, specifically the --date_format and --date_time_format flags; for more on masks, see the link to the strptime library function page.

    If the sample rows you included are representative of the entire file, it seems that hyphens occur only in the date values (between year, month, and day). We generally recommend correcting the extract so that you don't have to do any manipulation that may be error-prone or that would hinder streamlined automation of the load process. Reconfiguring the extracts would also be a good opportunity to extract some of the datetime values as pure dates, since the times do not appear to be significant.

    For development/testing purposes, one option that would let you move forward quickly with your existing extract is to run a tr command in Linux to delete all hyphens. Of course, this only makes sense if you don't expect hyphens anywhere else in the file. It may be possible to be more selective with the sed utility, but that could require a complex regular expression that is time-consuming to write and test.

    Using the tr command would look something like this:
    tr -d '-' < InputFile > OutputFile

    InputFile is the name of your existing file and OutputFile is the name of the new file with all hyphens removed. You'll then have to modify your load script to use the OutputFile generated above and set date_format and date_time_format to expect dates in YYYYMMDD format, so the masks would look like:
    --date_format "%Y%m%d"
    --date_time_format "%Y%m%d %H:%M:%S"
    (You'll probably also want to use the --skip_second_fraction flag, since second fractions appear in some values and don't seem to be needed.)
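
    Putting those pieces together, an end-to-end sketch might look like this (the file, database, and table names are hypothetical, and the same caveat about hyphens elsewhere in the file applies):

    tr -d '-' < InputFile > OutputFile
    cat OutputFile | tsload --target_database "my_db" --target_table "calendar" \
        --has_header_row --source_data_format csv --field_separator ',' \
        --date_format "%Y%m%d" --date_time_format "%Y%m%d %H:%M:%S" --skip_second_fraction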

  • Lokesh Ceeba It might be worth flagging these follow-on items as new questions, since they are somewhat different from the original thread and may get confusing. Just a thought.

     

    The one thing I'd add to Wilson's comments is that the newest releases do support different formatting for different date/time columns in the same file via a parameter file. What version of ThoughtSpot are you running?

    Thanks,

    Stuart

  • TS 4.5.1.1. Yes, we are still facing issues loading such files with various date and datetime formats.

  • OK, then you can take advantage of a format file when using tsload. It is provided as an extra parameter.

     

    First I need a file that defines my formats per field. This example shows the layout (it's JSON):

    {
      "database": "test_database",
      "schema": "test_schema",
      "table": "test_table",
      "columns": [{
        "name": "date_field",
        "date_format": "%d/%m/%Y"
      }, {
        "name": "time_field",
        "date_format": "%d/%m/%Y",
        "time_format": "%H:%M:%S"
      }, {
        "name": "datetime_field_1",
        "date_format": "%d/%m/%Y",
        "datetime_format": "%Y-%m-%d %H:%M:%S"
      }, {
        "name": "datetime_field_2",
        "date_format": "%d/%m/%Y",
        "datetime_format": "%C-%b-%d %H:%M"
      }]
    }

    Replace the database/schema/table with your values. In the columns section, provide one entry for each column that has a different date, time, or datetime format. NOTE: there is a bug in 4.5.1.1 that means I HAVE to provide a date_format entry for each column, even if it's a time or timestamp. This will get fixed.

    Save the file and make a note of the file name/path. Here I might save it as parameter_file.json

     

    Once I've defined this file, I can use it by adding an extra parameter to the tsload command:

     --format_file path/parameter_file.json
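
    So a full invocation might look something like this (the data file name is hypothetical; the other flags mirror the ones used earlier in the thread):

    cat my_data.csv | tsload --target_database "test_database" --target_table "test_table" \
        --has_header_row --source_data_format csv --field_separator ',' \
        --format_file path/parameter_file.json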

     

    Hope that makes sense. If you want a more detailed chat I'd encourage you to book an office hours session to talk through. 

     

    Stuart

  • A strange date format causes the error:

    "0001-01-01 00:00:00.0000000","9999-12-31 23:59:59.9970000"

    ========================

    Started processing data row
    E0918 18:34:39.494042  3050 data_importer.cpp:1718] Row ignored -> "453","1845","0078742018102","HR","0519577691845","COOPER CITY (DAVIE), FL","MEGACENTER","BASE STR MEGACENTER","4700 S FLAMINGO RD","COOPER CITY","FL","33330","4700 S FLAMINGO RD","COOPER CITY","FL","33330","142941","11/25/1992 12:00:00 AM","7038","7023","7023","6014","Y","Y","Y","0","EST","","0","20","26","SC","954 680 7810","5","12","WILLIAM",""BILL" GOMEZ","1A","Southeast","11","SOUTH FLORIDA","109","East","123960","8/31/2011 12:00:00 AM","11/25/1992 12:00:00 AM","7038","7038","7038","7038","6014","KEVIN PERRY","","119","6061","STATESBORO-STORAGE","3060","SAVANNAH-FLOW","26.062256","-80.3106907","True","Yes","10A","11","109","8","16","1","0","0001-01-01 00:00:00.0000000","9999-12-31 23:59:59.9970000","True","True","3C7278952FBC4A2791E06D09ADBAC8274E0D8028",""
    , line no 650 ;Reason: Invalid record, error=Error reading record starting at line:650, line="453","1845","0078742018102","HR","0519577691845","COOPER CITY (DAVIE), FL","MEGACENTER","BASE STR MEGACENTER","4700 S FLAMINGO RD","COOPER CITY","FL","33330","4700 S FLAMINGO RD","COOPER CITY","FL","33330","142941","11/25/1992 12:00:00 AM","7038","7023","7023","6014","Y","Y","Y","0","EST","","0","20","26","SC","954 680 7810","5","12","WILLIAM",""BILL" GOMEZ","1A","Southeast","11","SOUTH FLORIDA","109","East","123960","8/31/2011 12:00:00 AM","11/25/1992 12:00:00 AM","7038","7038","7038","7038","6014","KEVIN PERRY","","119","6061","STATESBORO-STORAGE","3060","SAVANNAH-FLOW","26.062256","-80.3106907","True","Yes","10A","11","109","8","16","1","0","0001-01-01 00:00:00.0000000","9999-12-31 23:59:59.9970000","True","True","3C7278952FBC4A2791E06D09ADBAC8274E0D8028",""
    , tokenizer returned error=unexpected character error 349
    E0918 18:34:39.494305  3050 data_importer.cpp:1745] Stopped processing after 650 lines, Exceeded max allowed errors 0
    Source has 650 data rows, ignored row count 650
    E0918 18:34:39.494721  3050 data_importer.cpp:1823] Extract failed, all rows ignored
    ===========================================================

    --

    Loki

  • Hi Lokesh,

     

    We don't support fractional seconds. You can tell tsload to ignore them with the --skip_second_fraction option (see the tsload options here: https://docs.thoughtspot.com/4.5/reference/data-importer-ref.html).

    It's worth considering coming up with guidelines for the data formats you ingest into ThoughtSpot. It's not always easy if you're getting data from a variety of sources, but it will make your life easier.

     

    Thanks,

     

    Stuart
