Feb 13 07:49:34.546221 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Feb 13 07:49:34.546235 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 13 07:49:34.546242 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 07:49:34.546246 kernel: BIOS-provided physical RAM map:
Feb 13 07:49:34.546249 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 07:49:34.546253 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 07:49:34.546258 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 07:49:34.546262 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 07:49:34.546266 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 07:49:34.546270 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfb3fff] usable
Feb 13 07:49:34.546274 kernel: BIOS-e820: [mem 0x000000006dfb4000-0x000000006dfb4fff] ACPI NVS
Feb 13 07:49:34.546278 kernel: BIOS-e820: [mem 0x000000006dfb5000-0x000000006dfb5fff] reserved
Feb 13 07:49:34.546281 kernel: BIOS-e820: [mem 0x000000006dfb6000-0x0000000077fc6fff] usable
Feb 13 07:49:34.546285 kernel: BIOS-e820: [mem 0x0000000077fc7000-0x00000000790a9fff] reserved
Feb 13 07:49:34.546291 kernel: BIOS-e820: [mem 0x00000000790aa000-0x0000000079232fff] usable
Feb 13 07:49:34.546295 kernel: BIOS-e820: [mem 0x0000000079233000-0x0000000079664fff] ACPI NVS
Feb 13 07:49:34.546300 kernel: BIOS-e820: [mem 0x0000000079665000-0x000000007befefff] reserved
Feb 13 07:49:34.546304 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Feb 13 07:49:34.546308 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Feb 13 07:49:34.546312 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 07:49:34.546316 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 07:49:34.546320 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 07:49:34.546324 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 07:49:34.546329 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 07:49:34.546333 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Feb 13 07:49:34.546337 kernel: NX (Execute Disable) protection: active
Feb 13 07:49:34.546342 kernel: SMBIOS 3.2.1 present.
Feb 13 07:49:34.546346 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Feb 13 07:49:34.546350 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 07:49:34.546354 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 07:49:34.546358 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 07:49:34.546363 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 07:49:34.546368 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Feb 13 07:49:34.546373 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 07:49:34.546378 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Feb 13 07:49:34.546382 kernel: Using GB pages for direct mapping
Feb 13 07:49:34.546386 kernel: ACPI: Early table checksum verification disabled
Feb 13 07:49:34.546390 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 07:49:34.546395 kernel: ACPI: XSDT 0x00000000795460C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 07:49:34.546399 kernel: ACPI: FACP 0x0000000079582620 000114 (v06 01072009 AMI 00010013)
Feb 13 07:49:34.546405 kernel: ACPI: DSDT 0x0000000079546268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 07:49:34.546411 kernel: ACPI: FACS 0x0000000079664F80 000040
Feb 13 07:49:34.546415 kernel: ACPI: APIC 0x0000000079582738 00012C (v04 01072009 AMI 00010013)
Feb 13 07:49:34.546420 kernel: ACPI: FPDT 0x0000000079582868 000044 (v01 01072009 AMI 00010013)
Feb 13 07:49:34.546425 kernel: ACPI: FIDT 0x00000000795828B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 07:49:34.546430 kernel: ACPI: MCFG 0x0000000079582950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 07:49:34.546434 kernel: ACPI: SPMI 0x0000000079582990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 07:49:34.546440 kernel: ACPI: SSDT 0x00000000795829D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 07:49:34.546445 kernel: ACPI: SSDT 0x00000000795844F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 07:49:34.546449 kernel: ACPI: SSDT 0x00000000795876C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 07:49:34.546454 kernel: ACPI: HPET 0x00000000795899F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:49:34.546458 kernel: ACPI: SSDT 0x0000000079589A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 07:49:34.546463 kernel: ACPI: SSDT 0x000000007958A9D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 07:49:34.546468 kernel: ACPI: UEFI 0x000000007958B2D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:49:34.546472 kernel: ACPI: LPIT 0x000000007958B318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:49:34.546477 kernel: ACPI: SSDT 0x000000007958B3B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 07:49:34.546482 kernel: ACPI: SSDT 0x000000007958DB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 07:49:34.546487 kernel: ACPI: DBGP 0x000000007958F078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:49:34.546492 kernel: ACPI: DBG2 0x000000007958F0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:49:34.546496 kernel: ACPI: SSDT 0x000000007958F108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 07:49:34.546501 kernel: ACPI: DMAR 0x0000000079590C70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Feb 13 07:49:34.546506 kernel: ACPI: SSDT 0x0000000079590D18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 07:49:34.546510 kernel: ACPI: TPM2 0x0000000079590E60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 07:49:34.546515 kernel: ACPI: SSDT 0x0000000079590E98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 07:49:34.546521 kernel: ACPI: WSMT 0x0000000079591C28 000028 (v01 \xf4m 01072009 AMI 00010013)
Feb 13 07:49:34.546526 kernel: ACPI: EINJ 0x0000000079591C50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 07:49:34.546530 kernel: ACPI: ERST 0x0000000079591D80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 07:49:34.546535 kernel: ACPI: BERT 0x0000000079591FB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 07:49:34.546540 kernel: ACPI: HEST 0x0000000079591FE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 07:49:34.546544 kernel: ACPI: SSDT 0x0000000079592260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 07:49:34.546549 kernel: ACPI: Reserving FACP table memory at [mem 0x79582620-0x79582733]
Feb 13 07:49:34.546554 kernel: ACPI: Reserving DSDT table memory at [mem 0x79546268-0x7958261e]
Feb 13 07:49:34.546561 kernel: ACPI: Reserving FACS table memory at [mem 0x79664f80-0x79664fbf]
Feb 13 07:49:34.546567 kernel: ACPI: Reserving APIC table memory at [mem 0x79582738-0x79582863]
Feb 13 07:49:34.546592 kernel: ACPI: Reserving FPDT table memory at [mem 0x79582868-0x795828ab]
Feb 13 07:49:34.546612 kernel: ACPI: Reserving FIDT table memory at [mem 0x795828b0-0x7958294b]
Feb 13 07:49:34.546617 kernel: ACPI: Reserving MCFG table memory at [mem 0x79582950-0x7958298b]
Feb 13 07:49:34.546639 kernel: ACPI: Reserving SPMI table memory at [mem 0x79582990-0x795829d0]
Feb 13 07:49:34.546643 kernel: ACPI: Reserving SSDT table memory at [mem 0x795829d8-0x795844f3]
Feb 13 07:49:34.546648 kernel: ACPI: Reserving SSDT table memory at [mem 0x795844f8-0x795876bd]
Feb 13 07:49:34.546652 kernel: ACPI: Reserving SSDT table memory at [mem 0x795876c0-0x795899ea]
Feb 13 07:49:34.546657 kernel: ACPI: Reserving HPET table memory at [mem 0x795899f0-0x79589a27]
Feb 13 07:49:34.546662 kernel: ACPI: Reserving SSDT table memory at [mem 0x79589a28-0x7958a9d5]
Feb 13 07:49:34.546667 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958a9d8-0x7958b2ce]
Feb 13 07:49:34.546671 kernel: ACPI: Reserving UEFI table memory at [mem 0x7958b2d0-0x7958b311]
Feb 13 07:49:34.546676 kernel: ACPI: Reserving LPIT table memory at [mem 0x7958b318-0x7958b3ab]
Feb 13 07:49:34.546680 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958b3b0-0x7958db8d]
Feb 13 07:49:34.546685 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958db90-0x7958f071]
Feb 13 07:49:34.546689 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958f078-0x7958f0ab]
Feb 13 07:49:34.546694 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958f0b0-0x7958f103]
Feb 13 07:49:34.546698 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958f108-0x79590c6e]
Feb 13 07:49:34.546704 kernel: ACPI: Reserving DMAR table memory at [mem 0x79590c70-0x79590d17]
Feb 13 07:49:34.546708 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590d18-0x79590e5b]
Feb 13 07:49:34.546713 kernel: ACPI: Reserving TPM2 table memory at [mem 0x79590e60-0x79590e93]
Feb 13 07:49:34.546717 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590e98-0x79591c26]
Feb 13 07:49:34.546722 kernel: ACPI: Reserving WSMT table memory at [mem 0x79591c28-0x79591c4f]
Feb 13 07:49:34.546726 kernel: ACPI: Reserving EINJ table memory at [mem 0x79591c50-0x79591d7f]
Feb 13 07:49:34.546731 kernel: ACPI: Reserving ERST table memory at [mem 0x79591d80-0x79591faf]
Feb 13 07:49:34.546735 kernel: ACPI: Reserving BERT table memory at [mem 0x79591fb0-0x79591fdf]
Feb 13 07:49:34.546740 kernel: ACPI: Reserving HEST table memory at [mem 0x79591fe0-0x7959225b]
Feb 13 07:49:34.546745 kernel: ACPI: Reserving SSDT table memory at [mem 0x79592260-0x795923c1]
Feb 13 07:49:34.546750 kernel: No NUMA configuration found
Feb 13 07:49:34.546754 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Feb 13 07:49:34.546759 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Feb 13 07:49:34.546763 kernel: Zone ranges:
Feb 13 07:49:34.546768 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 07:49:34.546773 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 07:49:34.546777 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff]
Feb 13 07:49:34.546782 kernel: Movable zone start for each node
Feb 13 07:49:34.546787 kernel: Early memory node ranges
Feb 13 07:49:34.546792 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 07:49:34.546796 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 07:49:34.546801 kernel: node 0: [mem 0x0000000040400000-0x000000006dfb3fff]
Feb 13 07:49:34.546805 kernel: node 0: [mem 0x000000006dfb6000-0x0000000077fc6fff]
Feb 13 07:49:34.546810 kernel: node 0: [mem 0x00000000790aa000-0x0000000079232fff]
Feb 13 07:49:34.546814 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff]
Feb 13 07:49:34.546819 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff]
Feb 13 07:49:34.546824 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Feb 13 07:49:34.546832 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 07:49:34.546837 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 07:49:34.546842 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 07:49:34.546848 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 07:49:34.546853 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Feb 13 07:49:34.546858 kernel: On node 0, zone DMA32: 11468 pages in unavailable ranges
Feb 13 07:49:34.546863 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Feb 13 07:49:34.546868 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Feb 13 07:49:34.546873 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 07:49:34.546878 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 07:49:34.546883 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 07:49:34.546888 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 07:49:34.546893 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 07:49:34.546898 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 07:49:34.546902 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 07:49:34.546907 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 07:49:34.546912 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 07:49:34.546918 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 07:49:34.546922 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 07:49:34.546927 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 07:49:34.546932 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 07:49:34.546937 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 07:49:34.546942 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 07:49:34.546947 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 07:49:34.546952 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 07:49:34.546956 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 07:49:34.546962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 07:49:34.546967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 07:49:34.546972 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 07:49:34.546977 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 07:49:34.546982 kernel: TSC deadline timer available
Feb 13 07:49:34.546986 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 07:49:34.546991 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Feb 13 07:49:34.546996 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 07:49:34.547001 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 07:49:34.547007 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 13 07:49:34.547012 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 13 07:49:34.547017 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 13 07:49:34.547022 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 07:49:34.547026 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222329
Feb 13 07:49:34.547031 kernel: Policy zone: Normal
Feb 13 07:49:34.547037 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 07:49:34.547042 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 07:49:34.547047 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 07:49:34.547052 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 07:49:34.547057 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 07:49:34.547062 kernel: Memory: 32683736K/33411996K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 13 07:49:34.547067 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 07:49:34.547072 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 13 07:49:34.547077 kernel: ftrace: allocated 135 pages with 4 groups
Feb 13 07:49:34.547082 kernel: rcu: Hierarchical RCU implementation.
Feb 13 07:49:34.547087 kernel: rcu: RCU event tracing is enabled.
Feb 13 07:49:34.547093 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 07:49:34.547098 kernel: Rude variant of Tasks RCU enabled.
Feb 13 07:49:34.547103 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 07:49:34.547108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 07:49:34.547113 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 07:49:34.547118 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 07:49:34.547122 kernel: random: crng init done
Feb 13 07:49:34.547127 kernel: Console: colour dummy device 80x25
Feb 13 07:49:34.547132 kernel: printk: console [tty0] enabled
Feb 13 07:49:34.547138 kernel: printk: console [ttyS1] enabled
Feb 13 07:49:34.547143 kernel: ACPI: Core revision 20210730
Feb 13 07:49:34.547148 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Feb 13 07:49:34.547152 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 07:49:34.547157 kernel: DMAR: Host address width 39
Feb 13 07:49:34.547162 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Feb 13 07:49:34.547167 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Feb 13 07:49:34.547172 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 07:49:34.547177 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 07:49:34.547183 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Feb 13 07:49:34.547188 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Feb 13 07:49:34.547193 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Feb 13 07:49:34.547198 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 07:49:34.547202 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 07:49:34.547208 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 07:49:34.547212 kernel: x2apic enabled
Feb 13 07:49:34.547217 kernel: Switched APIC routing to cluster x2apic.
Feb 13 07:49:34.547222 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 07:49:34.547228 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 07:49:34.547233 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 07:49:34.547238 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 07:49:34.547243 kernel: process: using mwait in idle threads
Feb 13 07:49:34.547248 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 07:49:34.547253 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 07:49:34.547257 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 07:49:34.547262 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 07:49:34.547267 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 13 07:49:34.547273 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 07:49:34.547278 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 07:49:34.547283 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 07:49:34.547288 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 07:49:34.547293 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 13 07:49:34.547298 kernel: TAA: Mitigation: TSX disabled
Feb 13 07:49:34.547303 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 07:49:34.547308 kernel: SRBDS: Mitigation: Microcode
Feb 13 07:49:34.547313 kernel: GDS: Vulnerable: No microcode
Feb 13 07:49:34.547318 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 07:49:34.547323 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 07:49:34.547328 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 07:49:34.547333 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 07:49:34.547338 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 07:49:34.547343 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 07:49:34.547348 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 07:49:34.547352 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 07:49:34.547357 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 07:49:34.547363 kernel: Freeing SMP alternatives memory: 32K
Feb 13 07:49:34.547368 kernel: pid_max: default: 32768 minimum: 301
Feb 13 07:49:34.547373 kernel: LSM: Security Framework initializing
Feb 13 07:49:34.547377 kernel: SELinux: Initializing.
Feb 13 07:49:34.547382 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 07:49:34.547387 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 07:49:34.547392 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 07:49:34.547397 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 07:49:34.547403 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 07:49:34.547408 kernel: ... version: 4
Feb 13 07:49:34.547413 kernel: ... bit width: 48
Feb 13 07:49:34.547417 kernel: ... generic registers: 4
Feb 13 07:49:34.547422 kernel: ... value mask: 0000ffffffffffff
Feb 13 07:49:34.547427 kernel: ... max period: 00007fffffffffff
Feb 13 07:49:34.547432 kernel: ... fixed-purpose events: 3
Feb 13 07:49:34.547437 kernel: ... event mask: 000000070000000f
Feb 13 07:49:34.547442 kernel: signal: max sigframe size: 2032
Feb 13 07:49:34.547447 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 07:49:34.547452 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 07:49:34.547457 kernel: smp: Bringing up secondary CPUs ...
Feb 13 07:49:34.547462 kernel: x86: Booting SMP configuration:
Feb 13 07:49:34.547467 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 13 07:49:34.547472 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 07:49:34.547477 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 13 07:49:34.547482 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 07:49:34.547487 kernel: smpboot: Max logical packages: 1
Feb 13 07:49:34.547492 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 07:49:34.547497 kernel: devtmpfs: initialized
Feb 13 07:49:34.547502 kernel: x86/mm: Memory block size: 128MB
Feb 13 07:49:34.547507 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfb4000-0x6dfb4fff] (4096 bytes)
Feb 13 07:49:34.547512 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79233000-0x79664fff] (4399104 bytes)
Feb 13 07:49:34.547517 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 07:49:34.547522 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 07:49:34.547527 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 07:49:34.547532 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 07:49:34.547537 kernel: audit: initializing netlink subsys (disabled)
Feb 13 07:49:34.547542 kernel: audit: type=2000 audit(1707810569.119:1): state=initialized audit_enabled=0 res=1
Feb 13 07:49:34.547547 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 07:49:34.547552 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 07:49:34.547558 kernel: cpuidle: using governor menu
Feb 13 07:49:34.547563 kernel: ACPI: bus type PCI registered
Feb 13 07:49:34.547585 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 07:49:34.547590 kernel: dca service started, version 1.12.1
Feb 13 07:49:34.547595 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 07:49:34.547601 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 13 07:49:34.547620 kernel: PCI: Using configuration type 1 for base access
Feb 13 07:49:34.547624 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 07:49:34.547629 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 07:49:34.547634 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 07:49:34.547639 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 07:49:34.547644 kernel: ACPI: Added _OSI(Module Device)
Feb 13 07:49:34.547649 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 07:49:34.547654 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 07:49:34.547659 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 07:49:34.547664 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 13 07:49:34.547669 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 13 07:49:34.547674 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 13 07:49:34.547679 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 07:49:34.547684 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:49:34.547688 kernel: ACPI: SSDT 0xFFFF98B700214C00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 07:49:34.547693 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 13 07:49:34.547698 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:49:34.547704 kernel: ACPI: SSDT 0xFFFF98B701CEA800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 07:49:34.547709 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:49:34.547714 kernel: ACPI: SSDT 0xFFFF98B701C5D000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 07:49:34.547718 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:49:34.547723 kernel: ACPI: SSDT 0xFFFF98B701C5B800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 07:49:34.547728 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:49:34.547733 kernel: ACPI: SSDT 0xFFFF98B700149000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 07:49:34.547737 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:49:34.547742 kernel: ACPI: SSDT 0xFFFF98B701CEA000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 07:49:34.547747 kernel: ACPI: Interpreter enabled
Feb 13 07:49:34.547753 kernel: ACPI: PM: (supports S0 S5)
Feb 13 07:49:34.547758 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 07:49:34.547763 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 07:49:34.547768 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 07:49:34.547772 kernel: HEST: Table parsing has been initialized.
Feb 13 07:49:34.547777 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 07:49:34.547782 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 07:49:34.547787 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 07:49:34.547792 kernel: ACPI: PM: Power Resource [USBC]
Feb 13 07:49:34.547798 kernel: ACPI: PM: Power Resource [V0PR]
Feb 13 07:49:34.547802 kernel: ACPI: PM: Power Resource [V1PR]
Feb 13 07:49:34.547807 kernel: ACPI: PM: Power Resource [V2PR]
Feb 13 07:49:34.547812 kernel: ACPI: PM: Power Resource [WRST]
Feb 13 07:49:34.547817 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Feb 13 07:49:34.547822 kernel: ACPI: PM: Power Resource [FN00]
Feb 13 07:49:34.547827 kernel: ACPI: PM: Power Resource [FN01]
Feb 13 07:49:34.547832 kernel: ACPI: PM: Power Resource [FN02]
Feb 13 07:49:34.547836 kernel: ACPI: PM: Power Resource [FN03]
Feb 13 07:49:34.547842 kernel: ACPI: PM: Power Resource [FN04]
Feb 13 07:49:34.547847 kernel: ACPI: PM: Power Resource [PIN]
Feb 13 07:49:34.547852 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 07:49:34.547915 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 07:49:34.547959 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 07:49:34.547999 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 07:49:34.548006 kernel: PCI host bridge to bus 0000:00
Feb 13 07:49:34.548048 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 07:49:34.548087 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 07:49:34.548123 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 07:49:34.548159 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Feb 13 07:49:34.548194 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 07:49:34.548229 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 07:49:34.548277 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 07:49:34.548327 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 07:49:34.548370 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 07:49:34.548415 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Feb 13 07:49:34.548458 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Feb 13 07:49:34.548503 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Feb 13 07:49:34.548545 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Feb 13 07:49:34.548610 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Feb 13 07:49:34.548651 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Feb 13 07:49:34.548699 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 07:49:34.548741 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Feb 13 07:49:34.548786 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 07:49:34.548827 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Feb 13 07:49:34.548872 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 07:49:34.548915 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Feb 13 07:49:34.548957 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 07:49:34.549004 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 07:49:34.549045 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Feb 13 07:49:34.549085 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Feb 13 07:49:34.549130 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 07:49:34.549173 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 07:49:34.549219 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 07:49:34.549260 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 07:49:34.549305 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 07:49:34.549346 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Feb 13 07:49:34.549394 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 07:49:34.549440 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 07:49:34.549482 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Feb 13 07:49:34.549524 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 07:49:34.549570 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 07:49:34.549612 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Feb 13 07:49:34.549652 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 07:49:34.549698 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 07:49:34.549741 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Feb 13 07:49:34.549782 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Feb 13 07:49:34.549823 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Feb 13 07:49:34.549864 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Feb 13 07:49:34.549905 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Feb 13 07:49:34.549946 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Feb 13 07:49:34.549988 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 07:49:34.550039 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 07:49:34.550082 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 07:49:34.550131 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 07:49:34.550175 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 07:49:34.550221 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 07:49:34.550263 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 07:49:34.550308 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 07:49:34.550350 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 07:49:34.550396 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Feb 13 07:49:34.550440 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Feb 13 07:49:34.550484 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 07:49:34.550526 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 07:49:34.550576 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 07:49:34.550623 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 07:49:34.550664 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Feb 13 07:49:34.550706 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 07:49:34.550753 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 07:49:34.550794 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 07:49:34.550836 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 07:49:34.550883 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 07:49:34.550927 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 07:49:34.550970 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Feb 13 07:49:34.551014 kernel: pci 0000:02:00.0: PME# supported from D3cold
Feb 13 07:49:34.551058 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 07:49:34.551102 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 07:49:34.551149 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 07:49:34.551193 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 07:49:34.551237 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref]
Feb 13 07:49:34.551280 kernel: pci 0000:02:00.1: PME# supported from D3cold
Feb 13 07:49:34.551322 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 07:49:34.551367 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 07:49:34.551409 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Feb 13 07:49:34.551450 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Feb 13 07:49:34.551493 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 07:49:34.551533 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Feb 13 07:49:34.551583 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 13 07:49:34.551627 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff]
Feb 13 07:49:34.551673 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 13 07:49:34.551716 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff]
Feb 13 07:49:34.551760 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 13 07:49:34.551822 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Feb 13 07:49:34.551863 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 07:49:34.551904 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff]
Feb 13 07:49:34.551951 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Feb 13 07:49:34.551998 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff]
Feb 13 07:49:34.552041 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 13 07:49:34.552083 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff]
Feb 13 07:49:34.552125 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Feb 13 07:49:34.552166 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Feb 13 07:49:34.552207 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 07:49:34.552249 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff]
Feb 13 07:49:34.552289 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Feb 13 07:49:34.552337 kernel: pci 0000:07:00.0: [1a03:1150]
type 01 class 0x060400 Feb 13 07:49:34.552411 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 13 07:49:34.552475 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 07:49:34.552518 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 07:49:34.552562 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 07:49:34.552620 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 07:49:34.552660 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Feb 13 07:49:34.552706 kernel: pci_bus 0000:08: extended config space not accessible Feb 13 07:49:34.552757 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 07:49:34.552802 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Feb 13 07:49:34.552846 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Feb 13 07:49:34.552890 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 07:49:34.552935 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 07:49:34.552979 kernel: pci 0000:08:00.0: supports D1 D2 Feb 13 07:49:34.553023 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 07:49:34.553068 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 07:49:34.553111 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 07:49:34.553155 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Feb 13 07:49:34.553163 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 07:49:34.553168 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 07:49:34.553173 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 07:49:34.553179 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 07:49:34.553184 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 07:49:34.553190 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 07:49:34.553196 kernel: ACPI: PCI: Interrupt link LNKG 
configured for IRQ 0 Feb 13 07:49:34.553201 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 07:49:34.553206 kernel: iommu: Default domain type: Translated Feb 13 07:49:34.553211 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 07:49:34.553254 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 13 07:49:34.553300 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 07:49:34.553343 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 13 07:49:34.553351 kernel: vgaarb: loaded Feb 13 07:49:34.553358 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 07:49:34.553363 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 07:49:34.553368 kernel: PTP clock support registered Feb 13 07:49:34.553374 kernel: PCI: Using ACPI for IRQ routing Feb 13 07:49:34.553379 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 07:49:34.553384 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 07:49:34.553389 kernel: e820: reserve RAM buffer [mem 0x6dfb4000-0x6fffffff] Feb 13 07:49:34.553394 kernel: e820: reserve RAM buffer [mem 0x77fc7000-0x77ffffff] Feb 13 07:49:34.553399 kernel: e820: reserve RAM buffer [mem 0x79233000-0x7bffffff] Feb 13 07:49:34.553405 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Feb 13 07:49:34.553410 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Feb 13 07:49:34.553415 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 13 07:49:34.553421 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 13 07:49:34.553427 kernel: clocksource: Switched to clocksource tsc-early Feb 13 07:49:34.553432 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 07:49:34.553437 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 07:49:34.553443 kernel: pnp: PnP ACPI init Feb 13 07:49:34.553486 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 
13 07:49:34.553531 kernel: pnp 00:02: [dma 0 disabled] Feb 13 07:49:34.553597 kernel: pnp 00:03: [dma 0 disabled] Feb 13 07:49:34.553659 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 07:49:34.553696 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 07:49:34.553736 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 07:49:34.553776 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 07:49:34.553816 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 07:49:34.553853 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 07:49:34.553889 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 07:49:34.553926 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 07:49:34.553962 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 07:49:34.553999 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 07:49:34.554034 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 07:49:34.554076 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 07:49:34.554113 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 07:49:34.554150 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 07:49:34.554186 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 07:49:34.554222 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 07:49:34.554259 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 07:49:34.554297 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 07:49:34.554337 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 07:49:34.554345 kernel: pnp: PnP ACPI: found 10 devices Feb 13 07:49:34.554351 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 
13 07:49:34.554356 kernel: NET: Registered PF_INET protocol family Feb 13 07:49:34.554361 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 07:49:34.554367 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 07:49:34.554372 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 07:49:34.554378 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 07:49:34.554384 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 07:49:34.554389 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 07:49:34.554394 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 07:49:34.554400 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 07:49:34.554405 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 07:49:34.554410 kernel: NET: Registered PF_XDP protocol family Feb 13 07:49:34.554451 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Feb 13 07:49:34.554493 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Feb 13 07:49:34.554536 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Feb 13 07:49:34.554625 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 07:49:34.554670 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 07:49:34.554712 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 07:49:34.554756 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 07:49:34.554801 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 07:49:34.554843 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 13 07:49:34.554884 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Feb 13 
07:49:34.554926 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 07:49:34.554989 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 13 07:49:34.555031 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 13 07:49:34.555073 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 07:49:34.555116 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Feb 13 07:49:34.555160 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 13 07:49:34.555203 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 07:49:34.555245 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Feb 13 07:49:34.555288 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 13 07:49:34.555332 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 07:49:34.555375 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 07:49:34.555419 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Feb 13 07:49:34.555460 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 07:49:34.555504 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 07:49:34.555546 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Feb 13 07:49:34.555587 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 07:49:34.555625 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 07:49:34.555662 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 07:49:34.555699 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 07:49:34.555737 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Feb 13 07:49:34.555773 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 07:49:34.555819 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Feb 13 07:49:34.555860 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 
07:49:34.555903 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 13 07:49:34.555942 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Feb 13 07:49:34.555984 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 13 07:49:34.556023 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Feb 13 07:49:34.556065 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 07:49:34.556106 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Feb 13 07:49:34.556147 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 13 07:49:34.556189 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Feb 13 07:49:34.556196 kernel: PCI: CLS 64 bytes, default 64 Feb 13 07:49:34.556202 kernel: DMAR: No ATSR found Feb 13 07:49:34.556208 kernel: DMAR: No SATC found Feb 13 07:49:34.556213 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 13 07:49:34.556220 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 13 07:49:34.556225 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 13 07:49:34.556230 kernel: DMAR: IOMMU feature pasid inconsistent Feb 13 07:49:34.556236 kernel: DMAR: IOMMU feature eafs inconsistent Feb 13 07:49:34.556241 kernel: DMAR: IOMMU feature prs inconsistent Feb 13 07:49:34.556246 kernel: DMAR: IOMMU feature nest inconsistent Feb 13 07:49:34.556252 kernel: DMAR: IOMMU feature mts inconsistent Feb 13 07:49:34.556257 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 13 07:49:34.556262 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 13 07:49:34.556268 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 07:49:34.556274 kernel: DMAR: dmar1: Using Queued invalidation Feb 13 07:49:34.556316 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 07:49:34.556359 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 07:49:34.556402 kernel: pci 0000:00:01.1: Adding to iommu group 1 Feb 13 07:49:34.556444 kernel: pci 0000:00:02.0: Adding to iommu group 2 Feb 13 07:49:34.556487 
kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 13 07:49:34.556528 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 13 07:49:34.556574 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 13 07:49:34.556617 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 13 07:49:34.556659 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 13 07:49:34.556700 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 13 07:49:34.556742 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 13 07:49:34.556783 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 13 07:49:34.556824 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 13 07:49:34.556866 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 13 07:49:34.556907 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 13 07:49:34.556951 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 13 07:49:34.556992 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 13 07:49:34.557034 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 13 07:49:34.557075 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 13 07:49:34.557116 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 13 07:49:34.557158 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 13 07:49:34.557199 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 13 07:49:34.557242 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 13 07:49:34.557287 kernel: pci 0000:02:00.0: Adding to iommu group 1 Feb 13 07:49:34.557330 kernel: pci 0000:02:00.1: Adding to iommu group 1 Feb 13 07:49:34.557373 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 07:49:34.557417 kernel: pci 0000:05:00.0: Adding to iommu group 17 Feb 13 07:49:34.557461 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 13 07:49:34.557506 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 13 07:49:34.557514 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 07:49:34.557520 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 
07:49:34.557526 kernel: software IO TLB: mapped [mem 0x0000000073fc7000-0x0000000077fc7000] (64MB) Feb 13 07:49:34.557532 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 13 07:49:34.557537 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 07:49:34.557543 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 07:49:34.557548 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 07:49:34.557553 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 13 07:49:34.557602 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 07:49:34.557610 kernel: Initialise system trusted keyrings Feb 13 07:49:34.557616 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 07:49:34.557622 kernel: Key type asymmetric registered Feb 13 07:49:34.557627 kernel: Asymmetric key parser 'x509' registered Feb 13 07:49:34.557632 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 07:49:34.557638 kernel: io scheduler mq-deadline registered Feb 13 07:49:34.557643 kernel: io scheduler kyber registered Feb 13 07:49:34.557648 kernel: io scheduler bfq registered Feb 13 07:49:34.557691 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 13 07:49:34.557733 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 13 07:49:34.557776 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 13 07:49:34.557820 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 13 07:49:34.557861 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 13 07:49:34.557904 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 13 07:49:34.557945 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 13 07:49:34.557992 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 07:49:34.558001 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 07:49:34.558008 kernel: ERST: Error Record Serialization Table 
(ERST) support is initialized. Feb 13 07:49:34.558013 kernel: pstore: Registered erst as persistent store backend Feb 13 07:49:34.558018 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 07:49:34.558024 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 07:49:34.558029 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 07:49:34.558035 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 07:49:34.558076 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 07:49:34.558084 kernel: i8042: PNP: No PS/2 controller found. Feb 13 07:49:34.558122 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 07:49:34.558161 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 07:49:34.558199 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T07:49:33 UTC (1707810573) Feb 13 07:49:34.558237 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 07:49:34.558245 kernel: fail to initialize ptp_kvm Feb 13 07:49:34.558250 kernel: intel_pstate: Intel P-state driver initializing Feb 13 07:49:34.558256 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 07:49:34.558261 kernel: intel_pstate: HWP enabled Feb 13 07:49:34.558268 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 07:49:34.558273 kernel: vesafb: scrolling: redraw Feb 13 07:49:34.558278 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 07:49:34.558284 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x00000000b896d6f4, using 768k, total 768k Feb 13 07:49:34.558289 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 07:49:34.558294 kernel: fb0: VESA VGA frame buffer device Feb 13 07:49:34.558300 kernel: NET: Registered PF_INET6 protocol family Feb 13 07:49:34.558305 kernel: Segment Routing with IPv6 Feb 13 07:49:34.558310 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 07:49:34.558317 kernel: NET: Registered 
PF_PACKET protocol family Feb 13 07:49:34.558322 kernel: Key type dns_resolver registered Feb 13 07:49:34.558327 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 07:49:34.558333 kernel: microcode: Microcode Update Driver: v2.2. Feb 13 07:49:34.558338 kernel: IPI shorthand broadcast: enabled Feb 13 07:49:34.558343 kernel: sched_clock: Marking stable (1847416122, 1360184740)->(4631797977, -1424197115) Feb 13 07:49:34.558349 kernel: registered taskstats version 1 Feb 13 07:49:34.558354 kernel: Loading compiled-in X.509 certificates Feb 13 07:49:34.558359 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 07:49:34.558365 kernel: Key type .fscrypt registered Feb 13 07:49:34.558371 kernel: Key type fscrypt-provisioning registered Feb 13 07:49:34.558376 kernel: pstore: Using crash dump compression: deflate Feb 13 07:49:34.558381 kernel: ima: Allocated hash algorithm: sha1 Feb 13 07:49:34.558387 kernel: ima: No architecture policies found Feb 13 07:49:34.558392 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 07:49:34.558398 kernel: Write protecting the kernel read-only data: 28672k Feb 13 07:49:34.558403 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 07:49:34.558408 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 07:49:34.558415 kernel: Run /init as init process Feb 13 07:49:34.558420 kernel: with arguments: Feb 13 07:49:34.558425 kernel: /init Feb 13 07:49:34.558431 kernel: with environment: Feb 13 07:49:34.558436 kernel: HOME=/ Feb 13 07:49:34.558441 kernel: TERM=linux Feb 13 07:49:34.558446 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 07:49:34.558453 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 
+XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 07:49:34.558461 systemd[1]: Detected architecture x86-64. Feb 13 07:49:34.558467 systemd[1]: Running in initrd. Feb 13 07:49:34.558472 systemd[1]: No hostname configured, using default hostname. Feb 13 07:49:34.558478 systemd[1]: Hostname set to . Feb 13 07:49:34.558483 systemd[1]: Initializing machine ID from random generator. Feb 13 07:49:34.558489 systemd[1]: Queued start job for default target initrd.target. Feb 13 07:49:34.558494 systemd[1]: Started systemd-ask-password-console.path. Feb 13 07:49:34.558500 systemd[1]: Reached target cryptsetup.target. Feb 13 07:49:34.558506 systemd[1]: Reached target paths.target. Feb 13 07:49:34.558511 systemd[1]: Reached target slices.target. Feb 13 07:49:34.558517 systemd[1]: Reached target swap.target. Feb 13 07:49:34.558522 systemd[1]: Reached target timers.target. Feb 13 07:49:34.558528 systemd[1]: Listening on iscsid.socket. Feb 13 07:49:34.558534 systemd[1]: Listening on iscsiuio.socket. Feb 13 07:49:34.558539 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 07:49:34.558545 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 07:49:34.558551 systemd[1]: Listening on systemd-journald.socket. Feb 13 07:49:34.558559 systemd[1]: Listening on systemd-networkd.socket. Feb 13 07:49:34.558564 kernel: tsc: Refined TSC clocksource calibration: 3408.087 MHz Feb 13 07:49:34.558570 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 07:49:34.558576 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x31202649525, max_idle_ns: 440795249258 ns Feb 13 07:49:34.558581 kernel: clocksource: Switched to clocksource tsc Feb 13 07:49:34.558587 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 07:49:34.558592 systemd[1]: Reached target sockets.target. Feb 13 07:49:34.558598 systemd[1]: Starting kmod-static-nodes.service... Feb 13 07:49:34.558604 systemd[1]: Finished network-cleanup.service. 
Feb 13 07:49:34.558610 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 07:49:34.558615 systemd[1]: Starting systemd-journald.service...
Feb 13 07:49:34.558621 systemd[1]: Starting systemd-modules-load.service...
Feb 13 07:49:34.558628 systemd-journald[268]: Journal started
Feb 13 07:49:34.558655 systemd-journald[268]: Runtime Journal (/run/log/journal/8cece52cc2a4411fb4f61ba1beb6205c) is 8.0M, max 639.3M, 631.3M free.
Feb 13 07:49:34.561284 systemd-modules-load[269]: Inserted module 'overlay'
Feb 13 07:49:34.567000 audit: BPF prog-id=6 op=LOAD
Feb 13 07:49:34.585613 kernel: audit: type=1334 audit(1707810574.567:2): prog-id=6 op=LOAD
Feb 13 07:49:34.585628 systemd[1]: Starting systemd-resolved.service...
Feb 13 07:49:34.636591 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 07:49:34.636607 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 13 07:49:34.668602 kernel: Bridge firewalling registered
Feb 13 07:49:34.668618 systemd[1]: Started systemd-journald.service.
Feb 13 07:49:34.682752 systemd-modules-load[269]: Inserted module 'br_netfilter'
Feb 13 07:49:34.732101 kernel: audit: type=1130 audit(1707810574.690:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.688742 systemd-resolved[271]: Positive Trust Anchors:
Feb 13 07:49:34.807404 kernel: SCSI subsystem initialized
Feb 13 07:49:34.807418 kernel: audit: type=1130 audit(1707810574.743:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.807426 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 07:49:34.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.688748 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 07:49:34.909219 kernel: device-mapper: uevent: version 1.0.3
Feb 13 07:49:34.909230 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 13 07:49:34.909238 kernel: audit: type=1130 audit(1707810574.864:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.688767 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 13 07:49:34.981831 kernel: audit: type=1130 audit(1707810574.917:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.690318 systemd-resolved[271]: Defaulting to hostname 'linux'.
Feb 13 07:49:34.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.690780 systemd[1]: Started systemd-resolved.service.
Feb 13 07:49:35.089596 kernel: audit: type=1130 audit(1707810574.989:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:35.089609 kernel: audit: type=1130 audit(1707810575.043:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:35.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:34.743746 systemd[1]: Finished kmod-static-nodes.service.
Feb 13 07:49:34.865412 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 07:49:34.909608 systemd-modules-load[269]: Inserted module 'dm_multipath'
Feb 13 07:49:34.917854 systemd[1]: Finished systemd-modules-load.service.
Feb 13 07:49:34.990158 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 13 07:49:35.043831 systemd[1]: Reached target nss-lookup.target.
Feb 13 07:49:35.098152 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 13 07:49:35.118072 systemd[1]: Starting systemd-sysctl.service...
Feb 13 07:49:35.126192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 13 07:49:35.126904 systemd[1]: Finished systemd-sysctl.service.
Feb 13 07:49:35.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:35.128903 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 13 07:49:35.174639 kernel: audit: type=1130 audit(1707810575.126:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:35.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:35.189869 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 13 07:49:35.256596 kernel: audit: type=1130 audit(1707810575.189:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:35.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:35.248147 systemd[1]: Starting dracut-cmdline.service...
Feb 13 07:49:35.271627 dracut-cmdline[293]: dracut-dracut-053 Feb 13 07:49:35.271627 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 13 07:49:35.271627 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 07:49:35.338656 kernel: Loading iSCSI transport class v2.0-870. Feb 13 07:49:35.338669 kernel: iscsi: registered transport (tcp) Feb 13 07:49:35.387503 kernel: iscsi: registered transport (qla4xxx) Feb 13 07:49:35.387521 kernel: QLogic iSCSI HBA Driver Feb 13 07:49:35.404102 systemd[1]: Finished dracut-cmdline.service. Feb 13 07:49:35.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:35.414274 systemd[1]: Starting dracut-pre-udev.service... 
Feb 13 07:49:35.469627 kernel: raid6: avx2x4 gen() 48881 MB/s Feb 13 07:49:35.504592 kernel: raid6: avx2x4 xor() 17614 MB/s Feb 13 07:49:35.539633 kernel: raid6: avx2x2 gen() 53794 MB/s Feb 13 07:49:35.574592 kernel: raid6: avx2x2 xor() 32107 MB/s Feb 13 07:49:35.609591 kernel: raid6: avx2x1 gen() 45270 MB/s Feb 13 07:49:35.644633 kernel: raid6: avx2x1 xor() 27936 MB/s Feb 13 07:49:35.678624 kernel: raid6: sse2x4 gen() 21355 MB/s Feb 13 07:49:35.712632 kernel: raid6: sse2x4 xor() 11992 MB/s Feb 13 07:49:35.746591 kernel: raid6: sse2x2 gen() 21656 MB/s Feb 13 07:49:35.780628 kernel: raid6: sse2x2 xor() 13436 MB/s Feb 13 07:49:35.814627 kernel: raid6: sse2x1 gen() 18306 MB/s Feb 13 07:49:35.866172 kernel: raid6: sse2x1 xor() 8930 MB/s Feb 13 07:49:35.866188 kernel: raid6: using algorithm avx2x2 gen() 53794 MB/s Feb 13 07:49:35.866195 kernel: raid6: .... xor() 32107 MB/s, rmw enabled Feb 13 07:49:35.884221 kernel: raid6: using avx2x2 recovery algorithm Feb 13 07:49:35.930562 kernel: xor: automatically using best checksumming function avx Feb 13 07:49:36.008608 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 07:49:36.013457 systemd[1]: Finished dracut-pre-udev.service. Feb 13 07:49:36.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:36.022000 audit: BPF prog-id=7 op=LOAD Feb 13 07:49:36.022000 audit: BPF prog-id=8 op=LOAD Feb 13 07:49:36.023401 systemd[1]: Starting systemd-udevd.service... Feb 13 07:49:36.031316 systemd-udevd[473]: Using default interface naming scheme 'v252'. Feb 13 07:49:36.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:36.037771 systemd[1]: Started systemd-udevd.service. 
Feb 13 07:49:36.076698 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Feb 13 07:49:36.051208 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 07:49:36.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:36.079856 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 07:49:36.093743 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 07:49:36.172327 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 07:49:36.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:36.199568 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 07:49:36.201565 kernel: libata version 3.00 loaded. Feb 13 07:49:36.201589 kernel: ACPI: bus type USB registered Feb 13 07:49:36.255382 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 07:49:36.255419 kernel: usbcore: registered new interface driver usbfs Feb 13 07:49:36.255428 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 07:49:36.255436 kernel: usbcore: registered new interface driver hub Feb 13 07:49:36.308000 kernel: usbcore: registered new device driver usb Feb 13 07:49:36.308564 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 07:49:36.325615 kernel: pps pps0: new PPS source ptp0 Feb 13 07:49:36.355700 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 13 07:49:36.355774 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 07:49:36.373476 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:56 Feb 13 07:49:36.407064 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 13 07:49:36.407136 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 07:49:36.441563 kernel: AES CTR mode by8 optimization enabled Feb 13 07:49:36.441580 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002 Feb 13 07:49:36.475826 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 07:49:36.475898 kernel: pps pps1: new PPS source ptp1 Feb 13 07:49:36.503366 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 13 07:49:36.503444 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 07:49:36.519356 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:57 Feb 13 07:49:36.549016 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 13 07:49:36.549107 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 07:49:36.566565 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 07:49:36.566661 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 13 07:49:36.579564 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 13 07:49:36.612259 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 07:49:36.612665 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 07:49:36.640086 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 07:49:36.640169 kernel: scsi host0: ahci Feb 13 07:49:36.667528 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 07:49:36.667611 kernel: scsi host1: ahci Feb 13 07:49:36.678604 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 07:49:36.678672 kernel: scsi host2: ahci Feb 13 07:49:36.702458 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 07:49:36.722603 kernel: scsi host3: ahci Feb 13 07:49:36.722626 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 07:49:36.722692 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per 
vport: max uc(1024) max mc(16384) Feb 13 07:49:36.745464 kernel: scsi host4: ahci Feb 13 07:49:36.764564 kernel: hub 1-0:1.0: USB hub found Feb 13 07:49:36.764643 kernel: scsi host5: ahci Feb 13 07:49:36.764658 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 13 07:49:36.788752 kernel: hub 1-0:1.0: 16 ports detected Feb 13 07:49:36.800921 kernel: scsi host6: ahci Feb 13 07:49:36.828976 kernel: hub 2-0:1.0: USB hub found Feb 13 07:49:36.829054 kernel: scsi host7: ahci Feb 13 07:49:36.829069 kernel: hub 2-0:1.0: 10 ports detected Feb 13 07:49:36.841321 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 139 Feb 13 07:49:36.880189 kernel: usb: port power management may be unreliable Feb 13 07:49:36.880206 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 139 Feb 13 07:49:36.946099 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 139 Feb 13 07:49:36.946115 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 139 Feb 13 07:49:36.962949 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 139 Feb 13 07:49:36.979666 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 139 Feb 13 07:49:36.996191 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 139 Feb 13 07:49:37.018115 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 139 Feb 13 07:49:37.054563 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 07:49:37.054652 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 07:49:37.237522 kernel: hub 1-14:1.0: USB hub found Feb 13 07:49:37.237783 kernel: hub 1-14:1.0: 4 ports detected Feb 13 07:49:37.303692 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 07:49:37.355423 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002 Feb 13 07:49:37.355537 kernel: mlx5_core 
0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 07:49:37.355597 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 07:49:37.371592 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 07:49:37.386563 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 07:49:37.401587 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 07:49:37.416563 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 07:49:37.433598 kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 13 07:49:37.447594 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 07:49:37.462618 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 07:49:37.476563 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 07:49:37.493562 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 07:49:37.536922 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 07:49:37.536937 kernel: ata2.00: Features: NCQ-prio Feb 13 07:49:37.569593 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 07:49:37.569609 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 07:49:37.569636 kernel: ata1.00: Features: NCQ-prio Feb 13 07:49:37.598590 kernel: ata2.00: configured for UDMA/133 Feb 13 07:49:37.612583 kernel: ata1.00: configured for UDMA/133 Feb 13 07:49:37.612603 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 07:49:37.629609 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 07:49:37.629679 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 07:49:37.664615 kernel: port_module: 9 callbacks suppressed Feb 13 07:49:37.664630 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 13 07:49:37.708608 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 
07:49:37.708660 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 07:49:37.756897 kernel: usbcore: registered new interface driver usbhid Feb 13 07:49:37.756913 kernel: usbhid: USB HID core driver Feb 13 07:49:37.757564 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:49:37.770853 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:49:37.784502 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 07:49:37.784587 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 07:49:37.801607 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 07:49:37.801683 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 07:49:37.817811 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 07:49:37.832109 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 07:49:37.853626 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 07:49:37.853709 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 07:49:37.853720 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 07:49:37.862480 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 07:49:37.877078 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 07:49:37.891308 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 07:49:37.918620 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 07:49:37.923200 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 07:49:37.956703 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 07:49:38.086922 
kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:49:38.103427 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:49:38.103442 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:49:38.119289 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 07:49:38.135632 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 07:49:38.168502 kernel: GPT:9289727 != 937703087 Feb 13 07:49:38.168516 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 07:49:38.185723 kernel: GPT:9289727 != 937703087 Feb 13 07:49:38.200364 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 07:49:38.216773 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 07:49:38.249493 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:49:38.249507 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 07:49:38.284566 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Feb 13 07:49:38.310466 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 13 07:49:38.355842 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (660) Feb 13 07:49:38.355878 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Feb 13 07:49:38.333626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 13 07:49:38.368896 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 07:49:38.393818 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 13 07:49:38.413999 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 07:49:38.427918 systemd[1]: Starting disk-uuid.service... Feb 13 07:49:38.465677 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:49:38.465690 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 07:49:38.465740 disk-uuid[691]: Primary Header is updated. Feb 13 07:49:38.465740 disk-uuid[691]: Secondary Entries is updated. Feb 13 07:49:38.465740 disk-uuid[691]: Secondary Header is updated. 
Feb 13 07:49:38.520643 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:49:38.520653 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 07:49:38.520659 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:49:38.546597 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 07:49:39.526742 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:49:39.545522 disk-uuid[692]: The operation has completed successfully. Feb 13 07:49:39.554808 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 07:49:39.588570 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 07:49:39.683497 kernel: audit: type=1130 audit(1707810579.595:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:39.683527 kernel: audit: type=1131 audit(1707810579.595:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:39.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:39.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:39.588664 systemd[1]: Finished disk-uuid.service. Feb 13 07:49:39.712600 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 07:49:39.598975 systemd[1]: Starting verity-setup.service... Feb 13 07:49:39.786043 systemd[1]: Found device dev-mapper-usr.device. Feb 13 07:49:39.798050 systemd[1]: Mounting sysusr-usr.mount... Feb 13 07:49:39.808738 systemd[1]: Finished verity-setup.service. 
Feb 13 07:49:39.872675 kernel: audit: type=1130 audit(1707810579.815:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:39.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:39.917226 systemd[1]: Mounted sysusr-usr.mount. Feb 13 07:49:39.931811 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 13 07:49:39.924853 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 13 07:49:40.012949 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:49:40.012983 kernel: BTRFS info (device sdb6): using free space tree Feb 13 07:49:40.012999 kernel: BTRFS info (device sdb6): has skinny extents Feb 13 07:49:40.013013 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 13 07:49:39.925234 systemd[1]: Starting ignition-setup.service... Feb 13 07:49:39.944945 systemd[1]: Starting parse-ip-for-networkd.service... Feb 13 07:49:40.091819 kernel: audit: type=1130 audit(1707810580.037:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.022025 systemd[1]: Finished ignition-setup.service. Feb 13 07:49:40.154825 kernel: audit: type=1130 audit(1707810580.100:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 13 07:49:40.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.037902 systemd[1]: Finished parse-ip-for-networkd.service. Feb 13 07:49:40.163000 audit: BPF prog-id=9 op=LOAD Feb 13 07:49:40.187620 kernel: audit: type=1334 audit(1707810580.163:24): prog-id=9 op=LOAD Feb 13 07:49:40.101337 systemd[1]: Starting ignition-fetch-offline.service... Feb 13 07:49:40.164598 systemd[1]: Starting systemd-networkd.service... Feb 13 07:49:40.209459 systemd-networkd[881]: lo: Link UP Feb 13 07:49:40.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.242793 ignition[871]: Ignition 2.14.0 Feb 13 07:49:40.283778 kernel: audit: type=1130 audit(1707810580.220:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.209462 systemd-networkd[881]: lo: Gained carrier Feb 13 07:49:40.242798 ignition[871]: Stage: fetch-offline Feb 13 07:49:40.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.209773 systemd-networkd[881]: Enumeration completed Feb 13 07:49:40.420316 kernel: audit: type=1130 audit(1707810580.297:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:49:40.420335 kernel: audit: type=1130 audit(1707810580.353:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.420354 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 07:49:40.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.242823 ignition[871]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:49:40.443114 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Feb 13 07:49:40.209843 systemd[1]: Started systemd-networkd.service. Feb 13 07:49:40.242837 ignition[871]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:49:40.210497 systemd-networkd[881]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:49:40.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.250322 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:49:40.499679 iscsid[904]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 07:49:40.499679 iscsid[904]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 07:49:40.499679 iscsid[904]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 13 07:49:40.499679 iscsid[904]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 13 07:49:40.499679 iscsid[904]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 13 07:49:40.499679 iscsid[904]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 07:49:40.499679 iscsid[904]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 07:49:40.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:40.220781 systemd[1]: Reached target network.target. Feb 13 07:49:40.668727 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 13 07:49:40.250385 ignition[893]: parsed url from cmdline: "" Feb 13 07:49:40.261538 unknown[871]: fetched base config from "system" Feb 13 07:49:40.250387 ignition[893]: no config URL provided Feb 13 07:49:40.261542 unknown[871]: fetched user config from "system" Feb 13 07:49:40.250390 ignition[893]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 07:49:40.277255 systemd[1]: Starting iscsiuio.service... Feb 13 07:49:40.250410 ignition[893]: parsing config with SHA512: 2097f0f54dfbcd7ce7a2785a465e3d76fa7d46fcce2959f78d22ea220bcbbb8f7133ba7bf8bb88bc1fd06ce874d6fb0a3f3553ecec61701fb0192129757f19f2 Feb 13 07:49:40.290830 systemd[1]: Started iscsiuio.service. Feb 13 07:49:40.261799 ignition[871]: fetch-offline: fetch-offline passed Feb 13 07:49:40.297901 systemd[1]: Finished ignition-fetch-offline.service. 
Feb 13 07:49:40.261801 ignition[871]: POST message to Packet Timeline Feb 13 07:49:40.353797 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 07:49:40.261806 ignition[871]: POST Status error: resource requires networking Feb 13 07:49:40.354256 systemd[1]: Starting ignition-kargs.service... Feb 13 07:49:40.261835 ignition[871]: Ignition finished successfully Feb 13 07:49:40.421756 systemd-networkd[881]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:49:40.424747 ignition[893]: Ignition 2.14.0 Feb 13 07:49:40.434267 systemd[1]: Starting iscsid.service... Feb 13 07:49:40.424750 ignition[893]: Stage: kargs Feb 13 07:49:40.456984 systemd[1]: Started iscsid.service. Feb 13 07:49:40.424804 ignition[893]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:49:40.471133 systemd[1]: Starting dracut-initqueue.service... Feb 13 07:49:40.424813 ignition[893]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:49:40.489826 systemd[1]: Finished dracut-initqueue.service. Feb 13 07:49:40.426928 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:49:40.507765 systemd[1]: Reached target remote-fs-pre.target. Feb 13 07:49:40.427504 ignition[893]: kargs: kargs passed Feb 13 07:49:40.526722 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 07:49:40.427507 ignition[893]: POST message to Packet Timeline Feb 13 07:49:40.561857 systemd[1]: Reached target remote-fs.target. Feb 13 07:49:40.427517 ignition[893]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:49:40.591827 systemd[1]: Starting dracut-pre-mount.service... 
Feb 13 07:49:40.430687 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45075->[::1]:53: read: connection refused Feb 13 07:49:40.606926 systemd[1]: Finished dracut-pre-mount.service. Feb 13 07:49:40.631152 ignition[893]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 07:49:40.663609 systemd-networkd[881]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:49:40.631598 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52486->[::1]:53: read: connection refused Feb 13 07:49:40.692347 systemd-networkd[881]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:49:40.722629 systemd-networkd[881]: enp2s0f1np1: Link UP Feb 13 07:49:40.723064 systemd-networkd[881]: enp2s0f1np1: Gained carrier Feb 13 07:49:40.737065 systemd-networkd[881]: enp2s0f0np0: Link UP Feb 13 07:49:40.737429 systemd-networkd[881]: eno2: Link UP Feb 13 07:49:40.737791 systemd-networkd[881]: eno1: Link UP Feb 13 07:49:41.032457 ignition[893]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 07:49:41.033508 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33132->[::1]:53: read: connection refused Feb 13 07:49:41.463115 systemd-networkd[881]: enp2s0f0np0: Gained carrier Feb 13 07:49:41.472829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Feb 13 07:49:41.501855 systemd-networkd[881]: enp2s0f0np0: DHCPv4 address 147.75.90.7/31, gateway 147.75.90.6 acquired from 145.40.83.140 Feb 13 07:49:41.833939 ignition[893]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 07:49:41.835197 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55071->[::1]:53: read: connection 
refused Feb 13 07:49:42.402051 systemd-networkd[881]: enp2s0f1np1: Gained IPv6LL Feb 13 07:49:42.914051 systemd-networkd[881]: enp2s0f0np0: Gained IPv6LL Feb 13 07:49:43.436865 ignition[893]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 07:49:43.438059 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58708->[::1]:53: read: connection refused Feb 13 07:49:46.641523 ignition[893]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 07:49:46.680043 ignition[893]: GET result: OK Feb 13 07:49:46.882925 ignition[893]: Ignition finished successfully Feb 13 07:49:46.887279 systemd[1]: Finished ignition-kargs.service. Feb 13 07:49:46.968937 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 13 07:49:46.968956 kernel: audit: type=1130 audit(1707810586.897:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:46.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:46.906393 ignition[922]: Ignition 2.14.0 Feb 13 07:49:46.899809 systemd[1]: Starting ignition-disks.service... 
Feb 13 07:49:46.906396 ignition[922]: Stage: disks Feb 13 07:49:46.906454 ignition[922]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:49:46.906464 ignition[922]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:49:46.907798 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:49:46.909194 ignition[922]: disks: disks passed Feb 13 07:49:46.909198 ignition[922]: POST message to Packet Timeline Feb 13 07:49:46.909208 ignition[922]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:49:46.932335 ignition[922]: GET result: OK Feb 13 07:49:47.135334 ignition[922]: Ignition finished successfully Feb 13 07:49:47.138229 systemd[1]: Finished ignition-disks.service. Feb 13 07:49:47.212580 kernel: audit: type=1130 audit(1707810587.150:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:47.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:47.151150 systemd[1]: Reached target initrd-root-device.target. Feb 13 07:49:47.220722 systemd[1]: Reached target local-fs-pre.target. Feb 13 07:49:47.220764 systemd[1]: Reached target local-fs.target. Feb 13 07:49:47.242759 systemd[1]: Reached target sysinit.target. Feb 13 07:49:47.256753 systemd[1]: Reached target basic.target. Feb 13 07:49:47.271532 systemd[1]: Starting systemd-fsck-root.service... Feb 13 07:49:47.291341 systemd-fsck[939]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 13 07:49:47.304102 systemd[1]: Finished systemd-fsck-root.service. 
Feb 13 07:49:47.392846 kernel: audit: type=1130 audit(1707810587.313:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:47.392860 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 07:49:47.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:47.318645 systemd[1]: Mounting sysroot.mount... Feb 13 07:49:47.401172 systemd[1]: Mounted sysroot.mount. Feb 13 07:49:47.414820 systemd[1]: Reached target initrd-root-fs.target. Feb 13 07:49:47.422462 systemd[1]: Mounting sysroot-usr.mount... Feb 13 07:49:47.436393 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 13 07:49:47.456622 systemd[1]: Starting flatcar-static-network.service... Feb 13 07:49:47.471713 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 07:49:47.471731 systemd[1]: Reached target ignition-diskful.target. Feb 13 07:49:47.626509 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (952) Feb 13 07:49:47.626526 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:49:47.626534 kernel: BTRFS info (device sdb6): using free space tree Feb 13 07:49:47.626565 kernel: BTRFS info (device sdb6): has skinny extents Feb 13 07:49:47.626575 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 13 07:49:47.472384 systemd[1]: Mounted sysroot-usr.mount. 
Feb 13 07:49:47.641829 coreos-metadata[947]: Feb 13 07:49:47.573 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 07:49:47.641829 coreos-metadata[947]: Feb 13 07:49:47.597 INFO Fetch successful
Feb 13 07:49:47.825084 kernel: audit: type=1130 audit(1707810587.651:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:47.825099 kernel: audit: type=1130 audit(1707810587.714:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:47.825107 kernel: audit: type=1131 audit(1707810587.714:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:47.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:47.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:47.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:47.825172 coreos-metadata[946]: Feb 13 07:49:47.577 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 07:49:47.825172 coreos-metadata[946]: Feb 13 07:49:47.596 INFO Fetch successful
Feb 13 07:49:47.825172 coreos-metadata[946]: Feb 13 07:49:47.614 INFO wrote hostname ci-3510.3.2-a-9220aaa15c to /sysroot/etc/hostname
Feb 13 07:49:47.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:47.496739 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 13 07:49:47.935675 kernel: audit: type=1130 audit(1707810587.862:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:47.535182 systemd[1]: Starting initrd-setup-root.service...
Feb 13 07:49:47.950717 initrd-setup-root[957]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 07:49:47.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:47.635977 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 13 07:49:48.025792 kernel: audit: type=1130 audit(1707810587.958:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:48.025806 initrd-setup-root[965]: cut: /sysroot/etc/group: No such file or directory
Feb 13 07:49:47.651912 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 13 07:49:48.045759 initrd-setup-root[973]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 07:49:48.055726 ignition[1022]: INFO : Ignition 2.14.0
Feb 13 07:49:48.055726 ignition[1022]: INFO : Stage: mount
Feb 13 07:49:48.055726 ignition[1022]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 07:49:48.055726 ignition[1022]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 07:49:48.055726 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 07:49:48.055726 ignition[1022]: INFO : mount: mount passed
Feb 13 07:49:48.055726 ignition[1022]: INFO : POST message to Packet Timeline
Feb 13 07:49:48.055726 ignition[1022]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 07:49:48.055726 ignition[1022]: INFO : GET result: OK
Feb 13 07:49:47.651949 systemd[1]: Finished flatcar-static-network.service.
Feb 13 07:49:48.152816 initrd-setup-root[981]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 07:49:47.714831 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 07:49:47.833826 systemd[1]: Finished initrd-setup-root.service.
Feb 13 07:49:47.863275 systemd[1]: Starting ignition-mount.service...
Feb 13 07:49:47.927129 systemd[1]: Starting sysroot-boot.service...
Feb 13 07:49:47.942989 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 13 07:49:47.943037 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 13 07:49:47.944310 systemd[1]: Finished sysroot-boot.service.
Feb 13 07:49:48.219855 ignition[1022]: INFO : Ignition finished successfully
Feb 13 07:49:48.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:48.214814 systemd[1]: Finished ignition-mount.service.
Feb 13 07:49:48.310736 kernel: audit: type=1130 audit(1707810588.228:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:48.230721 systemd[1]: Starting ignition-files.service...
Feb 13 07:49:48.356692 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1034)
Feb 13 07:49:48.356703 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 07:49:48.304503 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 13 07:49:48.432609 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 07:49:48.432623 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 07:49:48.432632 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 07:49:48.443550 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 07:49:48.459720 ignition[1053]: INFO : Ignition 2.14.0
Feb 13 07:49:48.459720 ignition[1053]: INFO : Stage: files
Feb 13 07:49:48.459720 ignition[1053]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 07:49:48.459720 ignition[1053]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 07:49:48.459720 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 07:49:48.459720 ignition[1053]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 07:49:48.462160 unknown[1053]: wrote ssh authorized keys file for user: core
Feb 13 07:49:48.534744 ignition[1053]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 07:49:48.534744 ignition[1053]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 07:49:48.534744 ignition[1053]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 07:49:48.534744 ignition[1053]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 07:49:48.534744 ignition[1053]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 07:49:48.534744 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 13 07:49:48.534744 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 13 07:49:48.943474 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 07:49:49.036957 ignition[1053]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 13 07:49:49.062814 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 13 07:49:49.062814 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 13 07:49:49.062814 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 13 07:49:49.467251 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 07:49:49.517477 ignition[1053]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 13 07:49:49.541857 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 13 07:49:49.541857 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 07:49:49.541857 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 13 07:49:49.591772 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 07:49:49.755420 ignition[1053]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 13 07:49:49.755420 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 07:49:49.795768 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 13 07:49:49.795768 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 13 07:49:49.826631 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 07:49:50.198920 ignition[1053]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 13 07:49:50.198920 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 13 07:49:50.247731 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1060)
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3132321775"
Feb 13 07:49:50.247745 ignition[1053]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3132321775": device or resource busy
Feb 13 07:49:50.247745 ignition[1053]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3132321775", trying btrfs: device or resource busy
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3132321775"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3132321775"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem3132321775"
Feb 13 07:49:50.247745 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem3132321775"
Feb 13 07:49:50.562875 kernel: audit: type=1130 audit(1707810590.460:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.441511 systemd[1]: Finished ignition-files.service.
Feb 13 07:49:50.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(f): [started] processing unit "packet-phone-home.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(f): [finished] processing unit "packet-phone-home.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(12): [started] processing unit "prepare-critools.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(12): [finished] processing unit "prepare-critools.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 07:49:50.579929 ignition[1053]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 07:49:50.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.466963 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 13 07:49:50.990988 ignition[1053]: INFO : files: op(17): [started] setting preset to enabled for "prepare-critools.service"
Feb 13 07:49:50.990988 ignition[1053]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-critools.service"
Feb 13 07:49:50.990988 ignition[1053]: INFO : files: createResultFile: createFiles: op(18): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 07:49:50.990988 ignition[1053]: INFO : files: createResultFile: createFiles: op(18): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 07:49:50.990988 ignition[1053]: INFO : files: files passed
Feb 13 07:49:50.990988 ignition[1053]: INFO : POST message to Packet Timeline
Feb 13 07:49:50.990988 ignition[1053]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 07:49:50.990988 ignition[1053]: INFO : GET result: OK
Feb 13 07:49:50.990988 ignition[1053]: INFO : Ignition finished successfully
Feb 13 07:49:51.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.160161 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 07:49:51.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.527838 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 13 07:49:50.528204 systemd[1]: Starting ignition-quench.service...
Feb 13 07:49:50.549015 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 13 07:49:50.572983 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 07:49:50.573045 systemd[1]: Finished ignition-quench.service.
Feb 13 07:49:51.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.587897 systemd[1]: Reached target ignition-complete.target.
Feb 13 07:49:51.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.614407 systemd[1]: Starting initrd-parse-etc.service...
Feb 13 07:49:51.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.631098 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 07:49:51.316755 ignition[1103]: INFO : Ignition 2.14.0
Feb 13 07:49:51.316755 ignition[1103]: INFO : Stage: umount
Feb 13 07:49:51.316755 ignition[1103]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 07:49:51.316755 ignition[1103]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 07:49:51.316755 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 07:49:51.316755 ignition[1103]: INFO : umount: umount passed
Feb 13 07:49:51.316755 ignition[1103]: INFO : POST message to Packet Timeline
Feb 13 07:49:51.316755 ignition[1103]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 07:49:51.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.631147 systemd[1]: Finished initrd-parse-etc.service.
Feb 13 07:49:51.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.472008 iscsid[904]: iscsid shutting down.
Feb 13 07:49:51.486802 ignition[1103]: INFO : GET result: OK
Feb 13 07:49:50.662830 systemd[1]: Reached target initrd-fs.target.
Feb 13 07:49:50.681832 systemd[1]: Reached target initrd.target.
Feb 13 07:49:51.529895 ignition[1103]: INFO : Ignition finished successfully
Feb 13 07:49:51.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.700977 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 13 07:49:51.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.703110 systemd[1]: Starting dracut-pre-pivot.service...
Feb 13 07:49:51.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.735818 systemd[1]: Finished dracut-pre-pivot.service.
Feb 13 07:49:51.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.588000 audit: BPF prog-id=6 op=UNLOAD
Feb 13 07:49:50.754401 systemd[1]: Starting initrd-cleanup.service...
Feb 13 07:49:50.778216 systemd[1]: Stopped target nss-lookup.target.
Feb 13 07:49:51.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.807856 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 13 07:49:51.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.833115 systemd[1]: Stopped target timers.target.
Feb 13 07:49:51.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.852116 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 07:49:51.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.852469 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 13 07:49:50.874583 systemd[1]: Stopped target initrd.target.
Feb 13 07:49:51.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.896288 systemd[1]: Stopped target basic.target.
Feb 13 07:49:51.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.916126 systemd[1]: Stopped target ignition-complete.target.
Feb 13 07:49:51.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.937139 systemd[1]: Stopped target ignition-diskful.target.
Feb 13 07:49:50.959147 systemd[1]: Stopped target initrd-root-device.target.
Feb 13 07:49:51.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:50.981117 systemd[1]: Stopped target remote-fs.target.
Feb 13 07:49:50.999144 systemd[1]: Stopped target remote-fs-pre.target.
Feb 13 07:49:51.020143 systemd[1]: Stopped target sysinit.target.
Feb 13 07:49:51.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.041153 systemd[1]: Stopped target local-fs.target.
Feb 13 07:49:51.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.065094 systemd[1]: Stopped target local-fs-pre.target.
Feb 13 07:49:51.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.089097 systemd[1]: Stopped target swap.target.
Feb 13 07:49:51.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.103014 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 07:49:51.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.103371 systemd[1]: Stopped dracut-pre-mount.service.
Feb 13 07:49:51.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.119329 systemd[1]: Stopped target cryptsetup.target.
Feb 13 07:49:51.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.137028 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 07:49:52.051639 kernel: kauditd_printk_skb: 38 callbacks suppressed
Feb 13 07:49:52.051655 kernel: audit: type=1130 audit(1707810591.905:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:52.051664 kernel: audit: type=1131 audit(1707810591.905:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.137382 systemd[1]: Stopped dracut-initqueue.service.
Feb 13 07:49:51.152307 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 07:49:51.152686 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 13 07:49:51.168359 systemd[1]: Stopped target paths.target.
Feb 13 07:49:51.188988 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 07:49:51.192939 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 13 07:49:51.214123 systemd[1]: Stopped target slices.target.
Feb 13 07:49:51.228056 systemd[1]: Stopped target sockets.target.
Feb 13 07:49:51.244117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 07:49:51.244487 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 13 07:49:51.262248 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 07:49:51.262607 systemd[1]: Stopped ignition-files.service.
Feb 13 07:49:51.277201 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 07:49:52.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.277546 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 13 07:49:52.229774 kernel: audit: type=1131 audit(1707810592.156:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:49:51.295449 systemd[1]: Stopping ignition-mount.service...
Feb 13 07:49:51.308746 systemd[1]: Stopping iscsid.service...
Feb 13 07:49:51.324226 systemd[1]: Stopping sysroot-boot.service...
Feb 13 07:49:51.338719 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 07:49:51.338919 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 13 07:49:51.345988 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 07:49:52.283577 systemd-journald[268]: Received SIGTERM from PID 1 (n/a).
Feb 13 07:49:51.346108 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 13 07:49:51.379934 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 07:49:51.381938 systemd[1]: iscsid.service: Deactivated successfully.
Feb 13 07:49:51.382150 systemd[1]: Stopped iscsid.service.
Feb 13 07:49:51.399100 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 07:49:51.399254 systemd[1]: Closed iscsid.socket. Feb 13 07:49:51.416001 systemd[1]: Stopping iscsiuio.service... Feb 13 07:49:51.432208 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 07:49:51.432408 systemd[1]: Stopped iscsiuio.service. Feb 13 07:49:51.447339 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 07:49:51.447525 systemd[1]: Finished initrd-cleanup.service. Feb 13 07:49:51.466740 systemd[1]: Stopped target network.target. Feb 13 07:49:51.479874 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 07:49:51.479961 systemd[1]: Closed iscsiuio.socket. Feb 13 07:49:51.494144 systemd[1]: Stopping systemd-networkd.service... Feb 13 07:49:51.504719 systemd-networkd[881]: enp2s0f1np1: DHCPv6 lease lost Feb 13 07:49:51.508059 systemd[1]: Stopping systemd-resolved.service... Feb 13 07:49:51.517737 systemd-networkd[881]: enp2s0f0np0: DHCPv6 lease lost Feb 13 07:49:51.523488 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 07:49:52.283000 audit: BPF prog-id=9 op=UNLOAD Feb 13 07:49:51.523726 systemd[1]: Stopped systemd-resolved.service. Feb 13 07:49:51.539878 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 07:49:51.540126 systemd[1]: Stopped systemd-networkd.service. Feb 13 07:49:51.555180 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 07:49:51.555454 systemd[1]: Stopped ignition-mount.service. Feb 13 07:49:51.574338 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 07:49:51.574530 systemd[1]: Stopped sysroot-boot.service. Feb 13 07:49:51.589187 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 07:49:51.589265 systemd[1]: Closed systemd-networkd.socket. Feb 13 07:49:51.603788 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 07:49:51.603899 systemd[1]: Stopped ignition-disks.service. 
Feb 13 07:49:51.618915 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 07:49:51.619046 systemd[1]: Stopped ignition-kargs.service. Feb 13 07:49:51.633940 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 07:49:51.634079 systemd[1]: Stopped ignition-setup.service. Feb 13 07:49:51.650964 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 07:49:51.651098 systemd[1]: Stopped initrd-setup-root.service. Feb 13 07:49:51.667684 systemd[1]: Stopping network-cleanup.service... Feb 13 07:49:51.680759 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 07:49:51.680901 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 13 07:49:51.695903 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 07:49:51.696020 systemd[1]: Stopped systemd-sysctl.service. Feb 13 07:49:51.712262 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 07:49:51.712417 systemd[1]: Stopped systemd-modules-load.service. Feb 13 07:49:51.729162 systemd[1]: Stopping systemd-udevd.service... Feb 13 07:49:51.747466 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 07:49:51.748907 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 07:49:51.749218 systemd[1]: Stopped systemd-udevd.service. Feb 13 07:49:51.762369 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 07:49:51.762488 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 07:49:51.775892 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 07:49:51.775990 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 07:49:51.790819 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 07:49:51.790930 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 07:49:51.807795 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 07:49:51.807836 systemd[1]: Stopped dracut-cmdline.service. 
Feb 13 07:49:51.822671 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 07:49:51.822704 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 07:49:51.838490 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 07:49:51.852675 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 07:49:51.852701 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 13 07:49:51.852784 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 07:49:51.852803 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 07:49:51.874844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 07:49:51.874965 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 07:49:51.893061 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 07:49:51.894271 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 07:49:51.894456 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 07:49:52.138827 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 07:49:52.138866 systemd[1]: Stopped network-cleanup.service. Feb 13 07:49:52.156790 systemd[1]: Reached target initrd-switch-root.target. Feb 13 07:49:52.222061 systemd[1]: Starting initrd-switch-root.service... Feb 13 07:49:52.242007 systemd[1]: Switching root. Feb 13 07:49:52.285391 systemd-journald[268]: Journal stopped Feb 13 07:49:56.164120 kernel: audit: type=1334 audit(1707810592.283:82): prog-id=9 op=UNLOAD Feb 13 07:49:56.164135 kernel: SELinux: Class mctp_socket not defined in policy. Feb 13 07:49:56.164142 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 13 07:49:56.164148 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 13 07:49:56.164152 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 07:49:56.164158 kernel: SELinux: policy capability open_perms=1 Feb 13 07:49:56.164164 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 07:49:56.164170 kernel: SELinux: policy capability always_check_network=0 Feb 13 07:49:56.164176 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 07:49:56.164181 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 07:49:56.164186 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 07:49:56.164191 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 07:49:56.164197 kernel: audit: type=1403 audit(1707810592.656:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 07:49:56.164203 systemd[1]: Successfully loaded SELinux policy in 302.857ms. Feb 13 07:49:56.164212 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.779ms. Feb 13 07:49:56.164218 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 07:49:56.164225 systemd[1]: Detected architecture x86-64. Feb 13 07:49:56.164230 systemd[1]: Detected first boot. Feb 13 07:49:56.164236 systemd[1]: Hostname set to . Feb 13 07:49:56.164243 systemd[1]: Initializing machine ID from random generator. 
Feb 13 07:49:56.164249 kernel: audit: type=1400 audit(1707810592.956:84): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:49:56.164255 kernel: audit: type=1400 audit(1707810593.011:85): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:49:56.164261 kernel: audit: type=1400 audit(1707810593.011:86): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:49:56.164266 kernel: audit: type=1334 audit(1707810593.112:87): prog-id=10 op=LOAD Feb 13 07:49:56.164272 kernel: audit: type=1334 audit(1707810593.112:88): prog-id=10 op=UNLOAD Feb 13 07:49:56.164280 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 13 07:49:56.164292 systemd[1]: Populated /etc with preset unit settings. Feb 13 07:49:56.164299 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:49:56.164305 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:49:56.164312 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:49:56.164318 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 07:49:56.164324 systemd[1]: Stopped initrd-switch-root.service. 
Feb 13 07:49:56.164330 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 07:49:56.164337 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 13 07:49:56.164344 systemd[1]: Created slice system-addon\x2drun.slice. Feb 13 07:49:56.164350 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 13 07:49:56.164356 systemd[1]: Created slice system-getty.slice. Feb 13 07:49:56.164361 systemd[1]: Created slice system-modprobe.slice. Feb 13 07:49:56.164367 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 13 07:49:56.164373 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 13 07:49:56.164381 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 13 07:49:56.164388 systemd[1]: Created slice user.slice. Feb 13 07:49:56.164395 systemd[1]: Started systemd-ask-password-console.path. Feb 13 07:49:56.164401 systemd[1]: Started systemd-ask-password-wall.path. Feb 13 07:49:56.164407 systemd[1]: Set up automount boot.automount. Feb 13 07:49:56.164413 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 13 07:49:56.164419 systemd[1]: Stopped target initrd-switch-root.target. Feb 13 07:49:56.164425 systemd[1]: Stopped target initrd-fs.target. Feb 13 07:49:56.164431 systemd[1]: Stopped target initrd-root-fs.target. Feb 13 07:49:56.164437 systemd[1]: Reached target integritysetup.target. Feb 13 07:49:56.164445 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 07:49:56.164456 systemd[1]: Reached target remote-fs.target. Feb 13 07:49:56.164463 systemd[1]: Reached target slices.target. Feb 13 07:49:56.164470 systemd[1]: Reached target swap.target. Feb 13 07:49:56.164476 systemd[1]: Reached target torcx.target. Feb 13 07:49:56.164482 systemd[1]: Reached target veritysetup.target. Feb 13 07:49:56.164488 systemd[1]: Listening on systemd-coredump.socket. Feb 13 07:49:56.164495 systemd[1]: Listening on systemd-initctl.socket. 
Feb 13 07:49:56.164501 systemd[1]: Listening on systemd-networkd.socket. Feb 13 07:49:56.164508 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 07:49:56.164515 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 07:49:56.164521 systemd[1]: Listening on systemd-userdbd.socket. Feb 13 07:49:56.164529 systemd[1]: Mounting dev-hugepages.mount... Feb 13 07:49:56.164535 systemd[1]: Mounting dev-mqueue.mount... Feb 13 07:49:56.164541 systemd[1]: Mounting media.mount... Feb 13 07:49:56.164548 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 07:49:56.164554 systemd[1]: Mounting sys-kernel-debug.mount... Feb 13 07:49:56.164565 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 13 07:49:56.164571 systemd[1]: Mounting tmp.mount... Feb 13 07:49:56.164578 systemd[1]: Starting flatcar-tmpfiles.service... Feb 13 07:49:56.164608 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 13 07:49:56.164631 systemd[1]: Starting kmod-static-nodes.service... Feb 13 07:49:56.164637 systemd[1]: Starting modprobe@configfs.service... Feb 13 07:49:56.164643 systemd[1]: Starting modprobe@dm_mod.service... Feb 13 07:49:56.164654 systemd[1]: Starting modprobe@drm.service... Feb 13 07:49:56.164662 systemd[1]: Starting modprobe@efi_pstore.service... Feb 13 07:49:56.164669 systemd[1]: Starting modprobe@fuse.service... Feb 13 07:49:56.164675 kernel: fuse: init (API version 7.34) Feb 13 07:49:56.164681 systemd[1]: Starting modprobe@loop.service... Feb 13 07:49:56.164687 kernel: loop: module loaded Feb 13 07:49:56.164694 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 07:49:56.164701 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 07:49:56.164707 systemd[1]: Stopped systemd-fsck-root.service. 
Feb 13 07:49:56.164713 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 07:49:56.164719 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 07:49:56.164726 systemd[1]: Stopped systemd-journald.service. Feb 13 07:49:56.164732 systemd[1]: Starting systemd-journald.service... Feb 13 07:49:56.164738 systemd[1]: Starting systemd-modules-load.service... Feb 13 07:49:56.164747 systemd-journald[1255]: Journal started Feb 13 07:49:56.164776 systemd-journald[1255]: Runtime Journal (/run/log/journal/57933ae8f13d4b3cb1bc53541060e4f2) is 8.0M, max 639.3M, 631.3M free. Feb 13 07:49:52.656000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 07:49:52.956000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:49:53.011000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:49:53.011000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:49:53.112000 audit: BPF prog-id=10 op=LOAD Feb 13 07:49:53.112000 audit: BPF prog-id=10 op=UNLOAD Feb 13 07:49:53.155000 audit: BPF prog-id=11 op=LOAD Feb 13 07:49:53.155000 audit: BPF prog-id=11 op=UNLOAD Feb 13 07:49:53.235000 audit[1144]: AVC avc: denied { associate } for pid=1144 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 13 07:49:53.235000 audit[1144]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001d989c a1=c00015adf8 a2=c000163ac0 a3=32 items=0 
ppid=1127 pid=1144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:49:53.235000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 07:49:53.260000 audit[1144]: AVC avc: denied { associate } for pid=1144 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 13 07:49:53.260000 audit[1144]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d9975 a2=1ed a3=0 items=2 ppid=1127 pid=1144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:49:53.260000 audit: CWD cwd="/" Feb 13 07:49:53.260000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:53.260000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:53.260000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 07:49:54.805000 audit: BPF prog-id=12 op=LOAD Feb 13 
07:49:54.805000 audit: BPF prog-id=3 op=UNLOAD Feb 13 07:49:54.805000 audit: BPF prog-id=13 op=LOAD Feb 13 07:49:54.805000 audit: BPF prog-id=14 op=LOAD Feb 13 07:49:54.805000 audit: BPF prog-id=4 op=UNLOAD Feb 13 07:49:54.805000 audit: BPF prog-id=5 op=UNLOAD Feb 13 07:49:54.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:54.854000 audit: BPF prog-id=12 op=UNLOAD Feb 13 07:49:54.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:54.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:49:56.137000 audit: BPF prog-id=15 op=LOAD Feb 13 07:49:56.138000 audit: BPF prog-id=16 op=LOAD Feb 13 07:49:56.138000 audit: BPF prog-id=17 op=LOAD Feb 13 07:49:56.138000 audit: BPF prog-id=13 op=UNLOAD Feb 13 07:49:56.138000 audit: BPF prog-id=14 op=UNLOAD Feb 13 07:49:56.161000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 07:49:56.161000 audit[1255]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc83e3e7f0 a2=4000 a3=7ffc83e3e88c items=0 ppid=1 pid=1255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:49:56.161000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 13 07:49:54.804067 systemd[1]: Queued start job for default target multi-user.target. Feb 13 07:49:53.233872 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:49:54.806810 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 07:49:53.234215 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 07:49:53.234226 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 07:49:53.234243 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 13 07:49:53.234249 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 13 07:49:53.234281 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 13 07:49:53.234287 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 13 07:49:53.234422 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 13 07:49:53.234443 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 07:49:53.234450 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 07:49:53.234868 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 13 07:49:53.234885 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 13 07:49:53.234895 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 13 07:49:53.234902 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 13 07:49:53.234910 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 13 07:49:53.234917 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 13 07:49:54.432418 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:54Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:49:54.432565 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:54Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:49:54.432624 
/usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:54Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:49:54.432718 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:54Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:49:54.432748 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:54Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 13 07:49:54.432781 /usr/lib/systemd/system-generators/torcx-generator[1144]: time="2024-02-13T07:49:54Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 13 07:49:56.196747 systemd[1]: Starting systemd-network-generator.service... Feb 13 07:49:56.218605 systemd[1]: Starting systemd-remount-fs.service... Feb 13 07:49:56.239603 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 07:49:56.273038 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 07:49:56.273058 systemd[1]: Stopped verity-setup.service. Feb 13 07:49:56.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:49:56.307604 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 07:49:56.321607 systemd[1]: Started systemd-journald.service. Feb 13 07:49:56.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.330081 systemd[1]: Mounted dev-hugepages.mount. Feb 13 07:49:56.337820 systemd[1]: Mounted dev-mqueue.mount. Feb 13 07:49:56.344820 systemd[1]: Mounted media.mount. Feb 13 07:49:56.351805 systemd[1]: Mounted sys-kernel-debug.mount. Feb 13 07:49:56.360804 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 13 07:49:56.369780 systemd[1]: Mounted tmp.mount. Feb 13 07:49:56.376887 systemd[1]: Finished flatcar-tmpfiles.service. Feb 13 07:49:56.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.385922 systemd[1]: Finished kmod-static-nodes.service. Feb 13 07:49:56.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.394970 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 07:49:56.395106 systemd[1]: Finished modprobe@configfs.service. Feb 13 07:49:56.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:49:56.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.404168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 07:49:56.404380 systemd[1]: Finished modprobe@dm_mod.service. Feb 13 07:49:56.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.413248 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 07:49:56.413494 systemd[1]: Finished modprobe@drm.service. Feb 13 07:49:56.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.422464 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 07:49:56.422784 systemd[1]: Finished modprobe@efi_pstore.service. Feb 13 07:49:56.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:49:56.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.431365 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 07:49:56.431678 systemd[1]: Finished modprobe@fuse.service. Feb 13 07:49:56.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.440349 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 07:49:56.440659 systemd[1]: Finished modprobe@loop.service. Feb 13 07:49:56.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.449378 systemd[1]: Finished systemd-modules-load.service. Feb 13 07:49:56.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.458366 systemd[1]: Finished systemd-network-generator.service. 
Feb 13 07:49:56.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.467349 systemd[1]: Finished systemd-remount-fs.service. Feb 13 07:49:56.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.476346 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 07:49:56.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.485964 systemd[1]: Reached target network-pre.target. Feb 13 07:49:56.497352 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 13 07:49:56.508228 systemd[1]: Mounting sys-kernel-config.mount... Feb 13 07:49:56.514864 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 07:49:56.518056 systemd[1]: Starting systemd-hwdb-update.service... Feb 13 07:49:56.527099 systemd[1]: Starting systemd-journal-flush.service... Feb 13 07:49:56.535686 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 07:49:56.536190 systemd[1]: Starting systemd-random-seed.service... Feb 13 07:49:56.536296 systemd-journald[1255]: Time spent on flushing to /var/log/journal/57933ae8f13d4b3cb1bc53541060e4f2 is 15.685ms for 1619 entries. Feb 13 07:49:56.536296 systemd-journald[1255]: System Journal (/var/log/journal/57933ae8f13d4b3cb1bc53541060e4f2) is 8.0M, max 195.6M, 187.6M free. 
Feb 13 07:49:56.581564 systemd-journald[1255]: Received client request to flush runtime journal. Feb 13 07:49:56.550693 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 13 07:49:56.551160 systemd[1]: Starting systemd-sysctl.service... Feb 13 07:49:56.562153 systemd[1]: Starting systemd-sysusers.service... Feb 13 07:49:56.569145 systemd[1]: Starting systemd-udev-settle.service... Feb 13 07:49:56.576573 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 13 07:49:56.584727 systemd[1]: Mounted sys-kernel-config.mount. Feb 13 07:49:56.592814 systemd[1]: Finished systemd-journal-flush.service. Feb 13 07:49:56.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.600792 systemd[1]: Finished systemd-random-seed.service. Feb 13 07:49:56.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.608812 systemd[1]: Finished systemd-sysctl.service. Feb 13 07:49:56.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.616757 systemd[1]: Finished systemd-sysusers.service. Feb 13 07:49:56.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.625809 systemd[1]: Reached target first-boot-complete.target. Feb 13 07:49:56.634321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Feb 13 07:49:56.643559 udevadm[1270]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 07:49:56.652334 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 07:49:56.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.835804 systemd[1]: Finished systemd-hwdb-update.service. Feb 13 07:49:56.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.844000 audit: BPF prog-id=18 op=LOAD Feb 13 07:49:56.845000 audit: BPF prog-id=19 op=LOAD Feb 13 07:49:56.845000 audit: BPF prog-id=7 op=UNLOAD Feb 13 07:49:56.845000 audit: BPF prog-id=8 op=UNLOAD Feb 13 07:49:56.846037 systemd[1]: Starting systemd-udevd.service... Feb 13 07:49:56.857317 systemd-udevd[1274]: Using default interface naming scheme 'v252'. Feb 13 07:49:56.875006 systemd[1]: Started systemd-udevd.service. Feb 13 07:49:56.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:56.884763 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 13 07:49:56.885000 audit: BPF prog-id=20 op=LOAD Feb 13 07:49:56.886061 systemd[1]: Starting systemd-networkd.service... 
Feb 13 07:49:56.907000 audit: BPF prog-id=21 op=LOAD Feb 13 07:49:56.911567 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 07:49:56.911610 kernel: kauditd_printk_skb: 65 callbacks suppressed Feb 13 07:49:56.911633 kernel: audit: type=1334 audit(1707810596.907:145): prog-id=21 op=LOAD Feb 13 07:49:56.911650 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 07:49:56.911664 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1287) Feb 13 07:49:56.937000 audit: BPF prog-id=22 op=LOAD Feb 13 07:49:56.971465 kernel: audit: type=1334 audit(1707810596.937:146): prog-id=22 op=LOAD Feb 13 07:49:56.971542 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 07:49:56.971627 kernel: audit: type=1334 audit(1707810596.971:147): prog-id=23 op=LOAD Feb 13 07:49:56.971000 audit: BPF prog-id=23 op=LOAD Feb 13 07:49:56.971987 systemd[1]: Starting systemd-userdbd.service... Feb 13 07:49:57.006566 kernel: ACPI: button: Power Button [PWRF] Feb 13 07:49:57.006649 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 07:49:56.917000 audit[1334]: AVC avc: denied { confidentiality } for pid=1334 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:49:57.092997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 07:49:57.112570 kernel: audit: type=1400 audit(1707810596.917:148): avc: denied { confidentiality } for pid=1334 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:49:57.131593 kernel: IPMI message handler: version 39.2 Feb 13 07:49:57.141578 systemd[1]: Started systemd-userdbd.service. 
Feb 13 07:49:56.917000 audit[1334]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55583297d360 a1=4d8bc a2=7f30091d1bc5 a3=5 items=42 ppid=1274 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:49:57.166566 kernel: ipmi device interface Feb 13 07:49:57.166621 kernel: audit: type=1300 audit(1707810596.917:148): arch=c000003e syscall=175 success=yes exit=0 a0=55583297d360 a1=4d8bc a2=7f30091d1bc5 a3=5 items=42 ppid=1274 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:49:56.917000 audit: CWD cwd="/" Feb 13 07:49:57.247046 kernel: audit: type=1307 audit(1707810596.917:148): cwd="/" Feb 13 07:49:57.247084 kernel: audit: type=1302 audit(1707810596.917:148): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=1 name=(null) inode=18709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:57.340787 kernel: audit: type=1302 audit(1707810596.917:148): item=1 name=(null) inode=18709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:57.340832 kernel: audit: type=1302 audit(1707810596.917:148): item=2 name=(null) inode=18709 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=2 name=(null) inode=18709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:57.389155 kernel: audit: type=1302 audit(1707810596.917:148): item=3 name=(null) inode=18710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=3 name=(null) inode=18710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=4 name=(null) inode=18709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=5 name=(null) inode=18711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=6 name=(null) inode=18709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=7 name=(null) inode=18712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=8 name=(null) inode=18712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=9 name=(null) inode=18713 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=10 name=(null) inode=18712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=11 name=(null) inode=18714 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=12 name=(null) inode=18712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=13 name=(null) inode=18715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=14 name=(null) inode=18712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=15 name=(null) inode=18716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=16 name=(null) inode=18712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=17 name=(null) inode=18717 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=18 name=(null) inode=18709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=19 name=(null) inode=18718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=20 name=(null) inode=18718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=21 name=(null) inode=18719 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=22 name=(null) inode=18718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=23 name=(null) inode=18720 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=24 name=(null) inode=18718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=25 name=(null) inode=18721 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=26 name=(null) inode=18718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=27 name=(null) inode=18722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=28 name=(null) inode=18718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=29 name=(null) inode=18723 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=30 name=(null) inode=18709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=31 name=(null) inode=18724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=32 name=(null) inode=18724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=33 name=(null) inode=18725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=34 name=(null) inode=18724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=35 name=(null) inode=18726 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=36 name=(null) inode=18724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 
07:49:56.917000 audit: PATH item=37 name=(null) inode=18727 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=38 name=(null) inode=18724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=39 name=(null) inode=18728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=40 name=(null) inode=18724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PATH item=41 name=(null) inode=18729 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:49:56.917000 audit: PROCTITLE proctitle="(udev-worker)" Feb 13 07:49:57.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:49:57.457566 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 07:49:57.457683 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 07:49:57.457764 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 07:49:57.457837 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 07:49:57.458564 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 07:49:57.597393 kernel: ipmi_si: IPMI System Interface driver Feb 13 07:49:57.597496 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 07:49:57.597602 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 07:49:57.618392 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 07:49:57.658144 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 07:49:57.658271 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 07:49:57.674721 systemd-networkd[1313]: bond0: netdev ready Feb 13 07:49:57.676674 systemd-networkd[1313]: lo: Link UP Feb 13 07:49:57.676677 systemd-networkd[1313]: lo: Gained carrier Feb 13 07:49:57.677136 systemd-networkd[1313]: Enumeration completed Feb 13 07:49:57.677192 systemd[1]: Started systemd-networkd.service. Feb 13 07:49:57.677417 systemd-networkd[1313]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 07:49:57.678079 systemd-networkd[1313]: enp2s0f1np1: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:7b.network. Feb 13 07:49:57.680608 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 07:49:57.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:49:57.748126 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 07:49:57.748275 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 07:49:57.772759 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 07:49:57.773591 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Feb 13 07:49:57.822563 kernel: intel_rapl_common: Found RAPL domain package Feb 13 07:49:57.822603 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 07:49:57.822698 kernel: intel_rapl_common: Found RAPL domain core Feb 13 07:49:57.822711 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 07:49:57.824564 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Feb 13 07:49:57.824601 systemd-networkd[1313]: enp2s0f0np0: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:7a.network. 
Feb 13 07:49:57.890591 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Feb 13 07:49:57.890684 kernel: intel_rapl_common: Found RAPL domain uncore Feb 13 07:49:57.890706 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:49:57.987563 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 07:49:57.987662 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 13 07:49:57.989563 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Feb 13 07:49:57.990561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 07:49:57.991541 systemd-networkd[1313]: bond0: Link UP Feb 13 07:49:57.991736 systemd-networkd[1313]: enp2s0f1np1: Link UP Feb 13 07:49:57.991872 systemd-networkd[1313]: enp2s0f0np0: Link UP Feb 13 07:49:57.991981 systemd-networkd[1313]: enp2s0f1np1: Gained carrier Feb 13 07:49:57.992955 systemd-networkd[1313]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:ce:f6:07:a6:7a.network. Feb 13 07:49:57.993562 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 07:49:57.993583 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:49:58.150565 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 07:49:58.150606 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 07:49:58.181834 systemd[1]: Finished systemd-udev-settle.service. Feb 13 07:49:58.190636 kernel: bond0: active interface up! Feb 13 07:49:58.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:58.199318 systemd[1]: Starting lvm2-activation-early.service... Feb 13 07:49:58.215194 lvm[1377]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Feb 13 07:49:58.273124 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:49:58.273161 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:49:58.298953 systemd[1]: Finished lvm2-activation-early.service. Feb 13 07:49:58.320619 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:58.336696 systemd[1]: Reached target cryptsetup.target. Feb 13 07:49:58.345569 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.362239 systemd[1]: Starting lvm2-activation.service... Feb 13 07:49:58.364372 lvm[1378]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 07:49:58.370597 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.394608 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.417562 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.439606 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.461571 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.461927 systemd[1]: Finished lvm2-activation.service. Feb 13 07:49:58.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:49:58.479681 systemd[1]: Reached target local-fs-pre.target. Feb 13 07:49:58.484608 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.502602 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 07:49:58.502614 systemd[1]: Reached target local-fs.target. Feb 13 07:49:58.507605 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.524623 systemd[1]: Reached target machines.target. Feb 13 07:49:58.529619 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.547217 systemd[1]: Starting ldconfig.service... Feb 13 07:49:58.550608 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.566168 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 07:49:58.566196 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:49:58.566730 systemd[1]: Starting systemd-boot-update.service... Feb 13 07:49:58.571565 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.587098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 07:49:58.592626 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.612227 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 07:49:58.613564 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.613579 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. 
Feb 13 07:49:58.613613 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 07:49:58.614102 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 07:49:58.614301 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1380 (bootctl) Feb 13 07:49:58.614880 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 07:49:58.633575 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.634237 systemd-networkd[1313]: bond0: Gained carrier Feb 13 07:49:58.634345 systemd-networkd[1313]: enp2s0f0np0: Gained carrier Feb 13 07:49:58.634597 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 07:49:58.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:58.640280 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 07:49:58.647262 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 07:49:58.648467 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Feb 13 07:49:58.669695 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:49:58.669725 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Feb 13 07:49:58.688389 systemd-networkd[1313]: enp2s0f1np1: Link DOWN Feb 13 07:49:58.688392 systemd-networkd[1313]: enp2s0f1np1: Lost carrier Feb 13 07:49:58.688562 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 07:49:58.929669 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 07:49:58.949594 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Feb 13 07:49:58.949654 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1 Feb 13 07:49:58.964598 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Feb 13 07:49:58.965527 systemd-networkd[1313]: enp2s0f1np1: Link UP Feb 13 07:49:58.965739 systemd-networkd[1313]: enp2s0f1np1: Gained carrier Feb 13 07:49:58.998586 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 07:49:59.027806 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 07:49:59.028248 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 07:49:59.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:59.051821 systemd-fsck[1388]: fsck.fat 4.2 (2021-01-31) Feb 13 07:49:59.051821 systemd-fsck[1388]: /dev/sdb1: 789 files, 115339/258078 clusters Feb 13 07:49:59.052526 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 13 07:49:59.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:49:59.064354 systemd[1]: Mounting boot.mount... Feb 13 07:49:59.076896 systemd[1]: Mounted boot.mount. Feb 13 07:49:59.096375 systemd[1]: Finished systemd-boot-update.service. Feb 13 07:49:59.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:59.127703 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 07:49:59.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:49:59.137323 systemd[1]: Starting audit-rules.service... Feb 13 07:49:59.145262 systemd[1]: Starting clean-ca-certificates.service... Feb 13 07:49:59.155168 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 07:49:59.158000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 07:49:59.158000 audit[1409]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9ce13970 a2=420 a3=0 items=0 ppid=1392 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:49:59.158000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 07:49:59.159592 augenrules[1409]: No rules Feb 13 07:49:59.164504 systemd[1]: Starting systemd-resolved.service... Feb 13 07:49:59.172471 systemd[1]: Starting systemd-timesyncd.service... Feb 13 07:49:59.181077 systemd[1]: Starting systemd-update-utmp.service... Feb 13 07:49:59.187841 systemd[1]: Finished audit-rules.service. 
Feb 13 07:49:59.194719 systemd[1]: Finished clean-ca-certificates.service. Feb 13 07:49:59.203707 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 13 07:49:59.212396 ldconfig[1379]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 07:49:59.213786 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 07:49:59.214325 systemd[1]: Finished systemd-update-utmp.service. Feb 13 07:49:59.222777 systemd[1]: Finished ldconfig.service. Feb 13 07:49:59.230289 systemd[1]: Starting systemd-update-done.service... Feb 13 07:49:59.236767 systemd[1]: Finished systemd-update-done.service. Feb 13 07:49:59.241155 systemd-resolved[1414]: Positive Trust Anchors: Feb 13 07:49:59.241161 systemd-resolved[1414]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 07:49:59.241180 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 07:49:59.244702 systemd[1]: Started systemd-timesyncd.service. Feb 13 07:49:59.245384 systemd-resolved[1414]: Using system hostname 'ci-3510.3.2-a-9220aaa15c'. Feb 13 07:49:59.252713 systemd[1]: Started systemd-resolved.service. Feb 13 07:49:59.260679 systemd[1]: Reached target network.target. Feb 13 07:49:59.268642 systemd[1]: Reached target nss-lookup.target. Feb 13 07:49:59.276647 systemd[1]: Reached target sysinit.target. Feb 13 07:49:59.284685 systemd[1]: Started motdgen.path. 
Feb 13 07:49:59.291653 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 13 07:49:59.301635 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 13 07:49:59.309627 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 07:49:59.309642 systemd[1]: Reached target paths.target. Feb 13 07:49:59.316636 systemd[1]: Reached target time-set.target. Feb 13 07:49:59.324695 systemd[1]: Started logrotate.timer. Feb 13 07:49:59.331692 systemd[1]: Started mdadm.timer. Feb 13 07:49:59.338634 systemd[1]: Reached target timers.target. Feb 13 07:49:59.345766 systemd[1]: Listening on dbus.socket. Feb 13 07:49:59.353144 systemd[1]: Starting docker.socket... Feb 13 07:49:59.361055 systemd[1]: Listening on sshd.socket. Feb 13 07:49:59.367710 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:49:59.367922 systemd[1]: Listening on docker.socket. Feb 13 07:49:59.374700 systemd[1]: Reached target sockets.target. Feb 13 07:49:59.382655 systemd[1]: Reached target basic.target. Feb 13 07:49:59.389690 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 07:49:59.389703 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 07:49:59.390131 systemd[1]: Starting containerd.service... Feb 13 07:49:59.397049 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 13 07:49:59.406149 systemd[1]: Starting coreos-metadata.service... Feb 13 07:49:59.413135 systemd[1]: Starting dbus.service... Feb 13 07:49:59.419095 systemd[1]: Starting enable-oem-cloudinit.service... 
Feb 13 07:49:59.424105 jq[1430]: false Feb 13 07:49:59.425848 coreos-metadata[1423]: Feb 13 07:49:59.425 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:49:59.427116 systemd[1]: Starting extend-filesystems.service... Feb 13 07:49:59.431790 dbus-daemon[1429]: [system] SELinux support is enabled Feb 13 07:49:59.434648 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 13 07:49:59.435124 coreos-metadata[1426]: Feb 13 07:49:59.435 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:49:59.435228 extend-filesystems[1432]: Found sda Feb 13 07:49:59.435228 extend-filesystems[1432]: Found sdb Feb 13 07:49:59.460641 extend-filesystems[1432]: Found sdb1 Feb 13 07:49:59.460641 extend-filesystems[1432]: Found sdb2 Feb 13 07:49:59.460641 extend-filesystems[1432]: Found sdb3 Feb 13 07:49:59.460641 extend-filesystems[1432]: Found usr Feb 13 07:49:59.460641 extend-filesystems[1432]: Found sdb4 Feb 13 07:49:59.460641 extend-filesystems[1432]: Found sdb6 Feb 13 07:49:59.460641 extend-filesystems[1432]: Found sdb7 Feb 13 07:49:59.460641 extend-filesystems[1432]: Found sdb9 Feb 13 07:49:59.460641 extend-filesystems[1432]: Checking size of /dev/sdb9 Feb 13 07:49:59.460641 extend-filesystems[1432]: Resized partition /dev/sdb9 Feb 13 07:49:59.566680 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Feb 13 07:49:59.435408 systemd[1]: Starting motdgen.service... Feb 13 07:49:59.566756 extend-filesystems[1444]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 07:49:59.443528 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 07:49:59.474380 systemd[1]: Starting prepare-critools.service... Feb 13 07:49:59.493250 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 07:49:59.512156 systemd[1]: Starting sshd-keygen.service... Feb 13 07:49:59.532112 systemd[1]: Starting systemd-logind.service... 
Feb 13 07:49:59.545678 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:49:59.546179 systemd[1]: Starting tcsd.service... Feb 13 07:49:59.555667 systemd-logind[1459]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 07:49:59.582261 jq[1462]: true Feb 13 07:49:59.555676 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 07:49:59.555686 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 07:49:59.555801 systemd-logind[1459]: New seat seat0. Feb 13 07:49:59.558839 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 07:49:59.559183 systemd[1]: Starting update-engine.service... Feb 13 07:49:59.574271 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 13 07:49:59.589943 systemd[1]: Started dbus.service. Feb 13 07:49:59.598346 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 07:49:59.598466 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 07:49:59.598659 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 07:49:59.598759 systemd[1]: Finished motdgen.service. Feb 13 07:49:59.605709 update_engine[1461]: I0213 07:49:59.605237 1461 main.cc:92] Flatcar Update Engine starting Feb 13 07:49:59.606455 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 07:49:59.606561 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 13 07:49:59.608498 update_engine[1461]: I0213 07:49:59.608457 1461 update_check_scheduler.cc:74] Next update check in 2m0s Feb 13 07:49:59.611524 tar[1464]: ./ Feb 13 07:49:59.611524 tar[1464]: ./loopback Feb 13 07:49:59.617414 jq[1468]: true Feb 13 07:49:59.617575 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 07:49:59.618741 tar[1465]: crictl Feb 13 07:49:59.622545 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 07:49:59.622664 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 07:49:59.622751 systemd[1]: Started update-engine.service. Feb 13 07:49:59.627939 env[1469]: time="2024-02-13T07:49:59.627909577Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 07:49:59.631125 tar[1464]: ./bandwidth Feb 13 07:49:59.634330 systemd[1]: Started systemd-logind.service. Feb 13 07:49:59.637669 env[1469]: time="2024-02-13T07:49:59.637628244Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 07:49:59.638119 env[1469]: time="2024-02-13T07:49:59.638078031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:49:59.638776 env[1469]: time="2024-02-13T07:49:59.638732721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:49:59.638776 env[1469]: time="2024-02-13T07:49:59.638746711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:49:59.640414 env[1469]: time="2024-02-13T07:49:59.640370188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:49:59.640414 env[1469]: time="2024-02-13T07:49:59.640383286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 07:49:59.640414 env[1469]: time="2024-02-13T07:49:59.640391414Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 07:49:59.640414 env[1469]: time="2024-02-13T07:49:59.640397067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 07:49:59.640498 env[1469]: time="2024-02-13T07:49:59.640437219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:49:59.640613 env[1469]: time="2024-02-13T07:49:59.640575131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:49:59.640645 env[1469]: time="2024-02-13T07:49:59.640637243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:49:59.642532 env[1469]: time="2024-02-13T07:49:59.640646066Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 07:49:59.642568 env[1469]: time="2024-02-13T07:49:59.642540625Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 07:49:59.642568 env[1469]: time="2024-02-13T07:49:59.642551662Z" level=info msg="metadata content store policy set" policy=shared Feb 13 07:49:59.644528 systemd[1]: Started locksmithd.service. 
Feb 13 07:49:59.647468 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Feb 13 07:49:59.651690 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 07:49:59.651799 systemd[1]: Reached target system-config.target. Feb 13 07:49:59.656088 env[1469]: time="2024-02-13T07:49:59.656071254Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 07:49:59.656130 env[1469]: time="2024-02-13T07:49:59.656091645Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 07:49:59.656130 env[1469]: time="2024-02-13T07:49:59.656099417Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 07:49:59.656130 env[1469]: time="2024-02-13T07:49:59.656118327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 07:49:59.656185 env[1469]: time="2024-02-13T07:49:59.656127788Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 07:49:59.656185 env[1469]: time="2024-02-13T07:49:59.656141619Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 07:49:59.656185 env[1469]: time="2024-02-13T07:49:59.656149777Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 07:49:59.656185 env[1469]: time="2024-02-13T07:49:59.656157369Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 07:49:59.656185 env[1469]: time="2024-02-13T07:49:59.656164887Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 13 07:49:59.656185 env[1469]: time="2024-02-13T07:49:59.656174106Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 07:49:59.656185 env[1469]: time="2024-02-13T07:49:59.656181374Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 07:49:59.656313 env[1469]: time="2024-02-13T07:49:59.656188043Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 07:49:59.656313 env[1469]: time="2024-02-13T07:49:59.656245848Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 07:49:59.656466 env[1469]: time="2024-02-13T07:49:59.656451526Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 07:49:59.656689 env[1469]: time="2024-02-13T07:49:59.656675707Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 07:49:59.656765 env[1469]: time="2024-02-13T07:49:59.656751642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.656798 env[1469]: time="2024-02-13T07:49:59.656770061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 07:49:59.656827 env[1469]: time="2024-02-13T07:49:59.656813804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.656856 env[1469]: time="2024-02-13T07:49:59.656827481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.656856 env[1469]: time="2024-02-13T07:49:59.656839692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 13 07:49:59.656856 env[1469]: time="2024-02-13T07:49:59.656851841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.656943 env[1469]: time="2024-02-13T07:49:59.656866938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.656943 env[1469]: time="2024-02-13T07:49:59.656880904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.656943 env[1469]: time="2024-02-13T07:49:59.656892004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.656943 env[1469]: time="2024-02-13T07:49:59.656903697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.656943 env[1469]: time="2024-02-13T07:49:59.656916855Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 07:49:59.657083 env[1469]: time="2024-02-13T07:49:59.657011687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.657083 env[1469]: time="2024-02-13T07:49:59.657025514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.657083 env[1469]: time="2024-02-13T07:49:59.657037038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.657083 env[1469]: time="2024-02-13T07:49:59.657049566Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 07:49:59.657083 env[1469]: time="2024-02-13T07:49:59.657062399Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 07:49:59.657083 env[1469]: time="2024-02-13T07:49:59.657074890Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 07:49:59.657239 env[1469]: time="2024-02-13T07:49:59.657089852Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 07:49:59.657239 env[1469]: time="2024-02-13T07:49:59.657117527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 07:49:59.657335 env[1469]: time="2024-02-13T07:49:59.657287683Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657344459Z" level=info msg="Connect containerd service" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657371190Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657684127Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657766377Z" level=info msg="Start subscribing containerd event" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657794450Z" level=info msg="Start recovering state" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657815513Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657824352Z" level=info msg="Start event monitor" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657834195Z" level=info msg="Start snapshots syncer" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657840520Z" level=info msg="Start cni network conf syncer for default" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657845281Z" level=info msg="Start streaming server" Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657836952Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 07:49:59.659363 env[1469]: time="2024-02-13T07:49:59.657911509Z" level=info msg="containerd successfully booted in 0.030453s" Feb 13 07:49:59.659668 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 07:49:59.659746 systemd[1]: Reached target user-config.target. Feb 13 07:49:59.663339 tar[1464]: ./ptp Feb 13 07:49:59.669510 systemd[1]: Started containerd.service. Feb 13 07:49:59.676860 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 13 07:49:59.681615 systemd-networkd[1313]: bond0: Gained IPv6LL Feb 13 07:49:59.686134 tar[1464]: ./vlan Feb 13 07:49:59.706976 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 07:49:59.708004 tar[1464]: ./host-device Feb 13 07:49:59.729248 tar[1464]: ./tuning Feb 13 07:49:59.748034 tar[1464]: ./vrf Feb 13 07:49:59.767709 tar[1464]: ./sbr Feb 13 07:49:59.786961 tar[1464]: ./tap Feb 13 07:49:59.808981 tar[1464]: ./dhcp Feb 13 07:49:59.865007 tar[1464]: ./static Feb 13 07:49:59.880905 tar[1464]: ./firewall Feb 13 07:49:59.901947 systemd[1]: Finished prepare-critools.service. 
Feb 13 07:49:59.905285 tar[1464]: ./macvlan Feb 13 07:49:59.927282 tar[1464]: ./dummy Feb 13 07:49:59.945565 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Feb 13 07:49:59.976597 extend-filesystems[1444]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Feb 13 07:49:59.976597 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 07:49:59.976597 extend-filesystems[1444]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Feb 13 07:50:00.013656 extend-filesystems[1432]: Resized filesystem in /dev/sdb9 Feb 13 07:50:00.021634 tar[1464]: ./bridge Feb 13 07:50:00.021634 tar[1464]: ./ipvlan Feb 13 07:49:59.977097 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 07:49:59.977188 systemd[1]: Finished extend-filesystems.service. Feb 13 07:50:00.025900 tar[1464]: ./portmap Feb 13 07:50:00.046639 tar[1464]: ./host-local Feb 13 07:50:00.070845 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 07:50:00.353676 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 07:50:00.728954 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 07:50:00.740127 systemd[1]: Finished sshd-keygen.service. Feb 13 07:50:00.747676 systemd[1]: Starting issuegen.service... Feb 13 07:50:00.754957 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 07:50:00.755030 systemd[1]: Finished issuegen.service. Feb 13 07:50:00.763483 systemd[1]: Starting systemd-user-sessions.service... Feb 13 07:50:00.771959 systemd[1]: Finished systemd-user-sessions.service. Feb 13 07:50:00.782400 systemd[1]: Started getty@tty1.service. Feb 13 07:50:00.790410 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 07:50:00.799850 systemd[1]: Reached target getty.target. 
Feb 13 07:50:05.378917 coreos-metadata[1426]: Feb 13 07:50:05.378 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 07:50:05.379742 coreos-metadata[1423]: Feb 13 07:50:05.378 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 07:50:05.812047 login[1533]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 07:50:05.818356 login[1532]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 07:50:05.819237 systemd[1]: Created slice user-500.slice. Feb 13 07:50:05.819795 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 07:50:05.820791 systemd-logind[1459]: New session 1 of user core. Feb 13 07:50:05.822473 systemd-logind[1459]: New session 2 of user core. Feb 13 07:50:05.842169 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 07:50:05.842900 systemd[1]: Starting user@500.service... Feb 13 07:50:05.844858 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:50:05.921510 systemd[1535]: Queued start job for default target default.target. Feb 13 07:50:05.921734 systemd[1535]: Reached target paths.target. Feb 13 07:50:05.921745 systemd[1535]: Reached target sockets.target. Feb 13 07:50:05.921752 systemd[1535]: Reached target timers.target. Feb 13 07:50:05.921759 systemd[1535]: Reached target basic.target. Feb 13 07:50:05.921778 systemd[1535]: Reached target default.target. Feb 13 07:50:05.921791 systemd[1535]: Startup finished in 74ms. Feb 13 07:50:05.921835 systemd[1]: Started user@500.service. Feb 13 07:50:05.922372 systemd[1]: Started session-1.scope. Feb 13 07:50:05.922722 systemd[1]: Started session-2.scope. 
Feb 13 07:50:06.319884 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Feb 13 07:50:06.320051 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Feb 13 07:50:06.379229 coreos-metadata[1426]: Feb 13 07:50:06.379 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 07:50:06.379520 coreos-metadata[1423]: Feb 13 07:50:06.379 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 07:50:06.426518 coreos-metadata[1423]: Feb 13 07:50:06.426 INFO Fetch successful Feb 13 07:50:06.427363 coreos-metadata[1426]: Feb 13 07:50:06.426 INFO Fetch successful Feb 13 07:50:06.452803 unknown[1423]: wrote ssh authorized keys file for user: core Feb 13 07:50:06.462154 update-ssh-keys[1555]: Updated "/home/core/.ssh/authorized_keys" Feb 13 07:50:06.462360 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 07:50:06.649815 systemd-timesyncd[1415]: Contacted time server 155.248.196.28:123 (0.flatcar.pool.ntp.org). Feb 13 07:50:06.649946 systemd-timesyncd[1415]: Initial clock synchronization to Tue 2024-02-13 07:50:06.374053 UTC. Feb 13 07:50:06.697060 systemd[1]: Finished coreos-metadata.service. Feb 13 07:50:06.698238 systemd[1]: Started packet-phone-home.service. Feb 13 07:50:06.698481 systemd[1]: Reached target multi-user.target. Feb 13 07:50:06.699433 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 07:50:06.704637 curl[1558]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 07:50:06.704834 curl[1558]: Dload Upload Total Spent Left Speed Feb 13 07:50:06.705301 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 13 07:50:06.705423 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 07:50:06.705553 systemd[1]: Startup finished in 2.011s (kernel) + 18.509s (initrd) + 14.371s (userspace) = 34.891s. Feb 13 07:50:07.084130 systemd[1]: Created slice system-sshd.slice. 
Feb 13 07:50:07.084657 systemd[1]: Started sshd@0-147.75.90.7:22-139.178.68.195:50234.service. Feb 13 07:50:07.126606 sshd[1561]: Accepted publickey for core from 139.178.68.195 port 50234 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:50:07.127786 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:50:07.131898 systemd-logind[1459]: New session 3 of user core. Feb 13 07:50:07.132983 systemd[1]: Started session-3.scope. Feb 13 07:50:07.187965 systemd[1]: Started sshd@1-147.75.90.7:22-139.178.68.195:49334.service. Feb 13 07:50:07.219287 sshd[1566]: Accepted publickey for core from 139.178.68.195 port 49334 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:50:07.219926 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:50:07.221949 systemd-logind[1459]: New session 4 of user core. Feb 13 07:50:07.222363 systemd[1]: Started session-4.scope. Feb 13 07:50:07.270894 sshd[1566]: pam_unix(sshd:session): session closed for user core Feb 13 07:50:07.273055 systemd[1]: sshd@1-147.75.90.7:22-139.178.68.195:49334.service: Deactivated successfully. Feb 13 07:50:07.273532 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 07:50:07.274009 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Feb 13 07:50:07.274805 systemd[1]: Started sshd@2-147.75.90.7:22-139.178.68.195:49348.service. Feb 13 07:50:07.275453 systemd-logind[1459]: Removed session 4. Feb 13 07:50:07.313952 sshd[1572]: Accepted publickey for core from 139.178.68.195 port 49348 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:50:07.315778 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:50:07.323428 systemd-logind[1459]: New session 5 of user core. Feb 13 07:50:07.325462 systemd[1]: Started session-5.scope. 
Feb 13 07:50:07.325839 curl[1558]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 07:50:07.327602 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 13 07:50:07.388511 sshd[1572]: pam_unix(sshd:session): session closed for user core Feb 13 07:50:07.390067 systemd[1]: sshd@2-147.75.90.7:22-139.178.68.195:49348.service: Deactivated successfully. Feb 13 07:50:07.390343 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 07:50:07.390652 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Feb 13 07:50:07.391111 systemd[1]: Started sshd@3-147.75.90.7:22-139.178.68.195:49358.service. Feb 13 07:50:07.391493 systemd-logind[1459]: Removed session 5. Feb 13 07:50:07.423543 sshd[1579]: Accepted publickey for core from 139.178.68.195 port 49358 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:50:07.424510 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:50:07.427532 systemd-logind[1459]: New session 6 of user core. Feb 13 07:50:07.428342 systemd[1]: Started session-6.scope. Feb 13 07:50:07.492801 sshd[1579]: pam_unix(sshd:session): session closed for user core Feb 13 07:50:07.499174 systemd[1]: sshd@3-147.75.90.7:22-139.178.68.195:49358.service: Deactivated successfully. Feb 13 07:50:07.500719 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 07:50:07.502314 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Feb 13 07:50:07.504749 systemd[1]: Started sshd@4-147.75.90.7:22-139.178.68.195:49360.service. Feb 13 07:50:07.506878 systemd-logind[1459]: Removed session 6. 
Feb 13 07:50:07.540296 sshd[1585]: Accepted publickey for core from 139.178.68.195 port 49360 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:50:07.541077 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:50:07.543498 systemd-logind[1459]: New session 7 of user core. Feb 13 07:50:07.544118 systemd[1]: Started session-7.scope. Feb 13 07:50:07.622535 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 07:50:07.623142 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 07:50:14.137078 systemd[1]: Reloading. Feb 13 07:50:14.169910 /usr/lib/systemd/system-generators/torcx-generator[1617]: time="2024-02-13T07:50:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:50:14.169929 /usr/lib/systemd/system-generators/torcx-generator[1617]: time="2024-02-13T07:50:14Z" level=info msg="torcx already run" Feb 13 07:50:14.239333 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:50:14.239345 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:50:14.254890 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:50:14.308869 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 07:50:14.312606 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 07:50:14.313159 systemd[1]: Reached target network-online.target. 
Feb 13 07:50:14.313854 systemd[1]: Started kubelet.service. Feb 13 07:50:14.342018 kubelet[1677]: E0213 07:50:14.341992 1677 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 07:50:14.343361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 07:50:14.343426 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 07:50:14.835246 systemd[1]: Stopped kubelet.service. Feb 13 07:50:14.875551 systemd[1]: Reloading. Feb 13 07:50:14.938491 /usr/lib/systemd/system-generators/torcx-generator[1776]: time="2024-02-13T07:50:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:50:14.938516 /usr/lib/systemd/system-generators/torcx-generator[1776]: time="2024-02-13T07:50:14Z" level=info msg="torcx already run" Feb 13 07:50:14.994115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:50:14.994126 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:50:15.008552 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:50:15.063874 systemd[1]: Started kubelet.service. 
Feb 13 07:50:15.085736 kubelet[1834]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 07:50:15.085736 kubelet[1834]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 07:50:15.085736 kubelet[1834]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 07:50:15.085736 kubelet[1834]: I0213 07:50:15.085695 1834 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 07:50:15.378761 kubelet[1834]: I0213 07:50:15.378731 1834 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 13 07:50:15.378761 kubelet[1834]: I0213 07:50:15.378744 1834 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 07:50:15.378898 kubelet[1834]: I0213 07:50:15.378861 1834 server.go:895] "Client rotation is on, will bootstrap in background" Feb 13 07:50:15.379830 kubelet[1834]: I0213 07:50:15.379816 1834 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 07:50:15.399909 kubelet[1834]: I0213 07:50:15.399873 1834 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 07:50:15.399978 kubelet[1834]: I0213 07:50:15.399966 1834 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 07:50:15.400078 kubelet[1834]: I0213 07:50:15.400044 1834 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 07:50:15.400078 kubelet[1834]: I0213 07:50:15.400055 1834 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 07:50:15.400078 kubelet[1834]: I0213 07:50:15.400061 1834 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 07:50:15.400181 kubelet[1834]: I0213 
07:50:15.400105 1834 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:50:15.400181 kubelet[1834]: I0213 07:50:15.400143 1834 kubelet.go:393] "Attempting to sync node with API server" Feb 13 07:50:15.400181 kubelet[1834]: I0213 07:50:15.400151 1834 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 07:50:15.400181 kubelet[1834]: I0213 07:50:15.400163 1834 kubelet.go:309] "Adding apiserver pod source" Feb 13 07:50:15.400181 kubelet[1834]: I0213 07:50:15.400171 1834 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 07:50:15.400300 kubelet[1834]: E0213 07:50:15.400251 1834 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:15.400300 kubelet[1834]: E0213 07:50:15.400258 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:15.400389 kubelet[1834]: I0213 07:50:15.400381 1834 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 07:50:15.400498 kubelet[1834]: W0213 07:50:15.400493 1834 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 07:50:15.400744 kubelet[1834]: I0213 07:50:15.400737 1834 server.go:1232] "Started kubelet" Feb 13 07:50:15.400808 kubelet[1834]: I0213 07:50:15.400802 1834 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 07:50:15.400849 kubelet[1834]: I0213 07:50:15.400843 1834 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 13 07:50:15.401110 kubelet[1834]: I0213 07:50:15.401100 1834 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 07:50:15.401225 kubelet[1834]: E0213 07:50:15.401215 1834 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 07:50:15.401256 kubelet[1834]: E0213 07:50:15.401237 1834 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 07:50:15.402112 kubelet[1834]: I0213 07:50:15.402103 1834 server.go:462] "Adding debug handlers to kubelet server" Feb 13 07:50:15.411335 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 13 07:50:15.411402 kubelet[1834]: W0213 07:50:15.411386 1834 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 07:50:15.411402 kubelet[1834]: W0213 07:50:15.411397 1834 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 07:50:15.411467 kubelet[1834]: E0213 07:50:15.411409 1834 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 07:50:15.411467 kubelet[1834]: E0213 07:50:15.411409 1834 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 07:50:15.411467 kubelet[1834]: I0213 07:50:15.411412 1834 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 07:50:15.411467 kubelet[1834]: I0213 07:50:15.411449 1834 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 07:50:15.411569 kubelet[1834]: E0213 07:50:15.411406 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d29a0094", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 400726676, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 400726676, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 13 07:50:15.411569 kubelet[1834]: E0213 07:50:15.411516 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:15.411569 kubelet[1834]: I0213 07:50:15.411548 1834 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 07:50:15.411696 kubelet[1834]: I0213 07:50:15.411636 1834 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 07:50:15.411932 kubelet[1834]: E0213 07:50:15.411902 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d2a1acea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", 
UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 401229546, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 401229546, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 13 07:50:15.412007 kubelet[1834]: E0213 07:50:15.412002 1834 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.80.11\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 07:50:15.412078 kubelet[1834]: W0213 07:50:15.412069 1834 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 07:50:15.412111 kubelet[1834]: E0213 07:50:15.412082 1834 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 07:50:15.420163 kubelet[1834]: I0213 07:50:15.420148 1834 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 07:50:15.420163 kubelet[1834]: I0213 07:50:15.420156 1834 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 
07:50:15.420163 kubelet[1834]: I0213 07:50:15.420165 1834 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:50:15.420550 kubelet[1834]: E0213 07:50:15.420423 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be329f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419875999, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419875999, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 07:50:15.421328 kubelet[1834]: E0213 07:50:15.421293 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be4782", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419881346, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419881346, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 07:50:15.421788 kubelet[1834]: E0213 07:50:15.421734 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be4ffc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419883516, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419883516, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 13 07:50:15.436679 kubelet[1834]: I0213 07:50:15.436641 1834 policy_none.go:49] "None policy: Start" Feb 13 07:50:15.436959 kubelet[1834]: I0213 07:50:15.436950 1834 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 07:50:15.436992 kubelet[1834]: I0213 07:50:15.436963 1834 state_mem.go:35] "Initializing new in-memory state store" Feb 13 07:50:15.439274 systemd[1]: Created slice kubepods.slice. Feb 13 07:50:15.441472 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 13 07:50:15.442789 systemd[1]: Created slice kubepods-besteffort.slice. Feb 13 07:50:15.457099 kubelet[1834]: I0213 07:50:15.457054 1834 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 07:50:15.457229 kubelet[1834]: I0213 07:50:15.457192 1834 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 07:50:15.457448 kubelet[1834]: E0213 07:50:15.457438 1834 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.11\" not found" Feb 13 07:50:15.459663 kubelet[1834]: E0213 07:50:15.459582 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d602bdb0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 457922480, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 457922480, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in 
API group "" in the namespace "default"' (will not retry!) Feb 13 07:50:15.494345 kubelet[1834]: I0213 07:50:15.494327 1834 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 07:50:15.495089 kubelet[1834]: I0213 07:50:15.495078 1834 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 07:50:15.495137 kubelet[1834]: I0213 07:50:15.495096 1834 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 07:50:15.495137 kubelet[1834]: I0213 07:50:15.495109 1834 kubelet.go:2303] "Starting kubelet main sync loop" Feb 13 07:50:15.495189 kubelet[1834]: E0213 07:50:15.495144 1834 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 07:50:15.496861 kubelet[1834]: W0213 07:50:15.496821 1834 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 13 07:50:15.496861 kubelet[1834]: E0213 07:50:15.496838 1834 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 13 07:50:15.513772 kubelet[1834]: I0213 07:50:15.513689 1834 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 13 07:50:15.515605 kubelet[1834]: E0213 07:50:15.515516 1834 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.11" Feb 13 07:50:15.516552 kubelet[1834]: E0213 07:50:15.516365 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be329f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419875999, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 513614517, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events "10.67.80.11.17b35cb4d3be329f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 07:50:15.518652 kubelet[1834]: E0213 07:50:15.518480 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be4782", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419881346, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 513631536, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events "10.67.80.11.17b35cb4d3be4782" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 07:50:15.520809 kubelet[1834]: E0213 07:50:15.520633 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be4ffc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419883516, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 513637727, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events "10.67.80.11.17b35cb4d3be4ffc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 07:50:15.614823 kubelet[1834]: E0213 07:50:15.614727 1834 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.80.11\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 13 07:50:15.717670 kubelet[1834]: I0213 07:50:15.717405 1834 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 13 07:50:15.720055 kubelet[1834]: E0213 07:50:15.719982 1834 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.11" Feb 13 07:50:15.720322 kubelet[1834]: E0213 07:50:15.720057 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be329f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419875999, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 717297434, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events "10.67.80.11.17b35cb4d3be329f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 13 07:50:15.722683 kubelet[1834]: E0213 07:50:15.722453 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be4782", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419881346, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 717320384, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events "10.67.80.11.17b35cb4d3be4782" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 07:50:15.724839 kubelet[1834]: E0213 07:50:15.724650 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be4ffc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419883516, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 717331690, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events "10.67.80.11.17b35cb4d3be4ffc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 07:50:16.017394 kubelet[1834]: E0213 07:50:16.017177 1834 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.80.11\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 13 07:50:16.121282 kubelet[1834]: I0213 07:50:16.121222 1834 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 13 07:50:16.124004 kubelet[1834]: E0213 07:50:16.123918 1834 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.11" Feb 13 07:50:16.124186 kubelet[1834]: E0213 07:50:16.123917 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be329f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419875999, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 16, 121134748, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events "10.67.80.11.17b35cb4d3be329f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 13 07:50:16.126433 kubelet[1834]: E0213 07:50:16.126243 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be4782", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419881346, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 16, 121150777, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events "10.67.80.11.17b35cb4d3be4782" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 07:50:16.128576 kubelet[1834]: E0213 07:50:16.128370 1834 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b35cb4d3be4ffc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 50, 15, 419883516, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 50, 16, 121157708, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.67.80.11"}': 'events "10.67.80.11.17b35cb4d3be4ffc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 07:50:16.256596 kubelet[1834]: W0213 07:50:16.256491 1834 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 07:50:16.256596 kubelet[1834]: E0213 07:50:16.256580 1834 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 07:50:16.284174 kubelet[1834]: W0213 07:50:16.283975 1834 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 07:50:16.284174 kubelet[1834]: E0213 07:50:16.284041 1834 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 07:50:16.380274 kubelet[1834]: I0213 07:50:16.380153 1834 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 07:50:16.400602 kubelet[1834]: E0213 07:50:16.400522 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:16.792635 kubelet[1834]: E0213 07:50:16.792522 1834 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.80.11" not found Feb 13 07:50:16.825889 kubelet[1834]: E0213 07:50:16.825786 1834 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"10.67.80.11\" not found" node="10.67.80.11" Feb 13 07:50:16.925959 kubelet[1834]: I0213 07:50:16.925864 1834 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 13 07:50:16.932698 kubelet[1834]: I0213 07:50:16.932610 1834 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.11" Feb 13 07:50:16.941347 kubelet[1834]: E0213 07:50:16.941259 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.042514 kubelet[1834]: E0213 07:50:17.042398 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.088081 sudo[1588]: pam_unix(sudo:session): session closed for user root Feb 13 07:50:17.089958 sshd[1585]: pam_unix(sshd:session): session closed for user core Feb 13 07:50:17.091261 systemd[1]: sshd@4-147.75.90.7:22-139.178.68.195:49360.service: Deactivated successfully. Feb 13 07:50:17.091683 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 07:50:17.092116 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Feb 13 07:50:17.092573 systemd-logind[1459]: Removed session 7. 
Feb 13 07:50:17.143240 kubelet[1834]: E0213 07:50:17.143133 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.244009 kubelet[1834]: E0213 07:50:17.243890 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.345024 kubelet[1834]: E0213 07:50:17.344794 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.401716 kubelet[1834]: E0213 07:50:17.401615 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:17.446126 kubelet[1834]: E0213 07:50:17.446010 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.547133 kubelet[1834]: E0213 07:50:17.547026 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.648027 kubelet[1834]: E0213 07:50:17.647908 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.749239 kubelet[1834]: E0213 07:50:17.749129 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.850404 kubelet[1834]: E0213 07:50:17.850295 1834 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 13 07:50:17.951886 kubelet[1834]: I0213 07:50:17.951687 1834 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 07:50:17.952523 env[1469]: time="2024-02-13T07:50:17.952428671Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 07:50:17.953354 kubelet[1834]: I0213 07:50:17.953013 1834 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 07:50:18.402192 kubelet[1834]: I0213 07:50:18.402078 1834 apiserver.go:52] "Watching apiserver" Feb 13 07:50:18.402192 kubelet[1834]: E0213 07:50:18.402169 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:18.407759 kubelet[1834]: I0213 07:50:18.407664 1834 topology_manager.go:215] "Topology Admit Handler" podUID="79b4f201-1a80-417f-a771-7e4c634129c6" podNamespace="kube-system" podName="cilium-dp66s" Feb 13 07:50:18.408014 kubelet[1834]: I0213 07:50:18.407949 1834 topology_manager.go:215] "Topology Admit Handler" podUID="fa219967-a390-4f30-81b8-289a643f2164" podNamespace="kube-system" podName="kube-proxy-rrhzw" Feb 13 07:50:18.413002 kubelet[1834]: I0213 07:50:18.412919 1834 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 07:50:18.420327 systemd[1]: Created slice kubepods-besteffort-podfa219967_a390_4f30_81b8_289a643f2164.slice. 
Feb 13 07:50:18.432081 kubelet[1834]: I0213 07:50:18.432048 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-etc-cni-netd\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432081 kubelet[1834]: I0213 07:50:18.432067 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-xtables-lock\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432081 kubelet[1834]: I0213 07:50:18.432082 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-host-proc-sys-kernel\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432168 kubelet[1834]: I0213 07:50:18.432110 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79b4f201-1a80-417f-a771-7e4c634129c6-hubble-tls\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432168 kubelet[1834]: I0213 07:50:18.432141 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlqhw\" (UniqueName: \"kubernetes.io/projected/79b4f201-1a80-417f-a771-7e4c634129c6-kube-api-access-xlqhw\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432209 kubelet[1834]: I0213 07:50:18.432176 1834 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa219967-a390-4f30-81b8-289a643f2164-xtables-lock\") pod \"kube-proxy-rrhzw\" (UID: \"fa219967-a390-4f30-81b8-289a643f2164\") " pod="kube-system/kube-proxy-rrhzw" Feb 13 07:50:18.432209 kubelet[1834]: I0213 07:50:18.432195 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w8jq\" (UniqueName: \"kubernetes.io/projected/fa219967-a390-4f30-81b8-289a643f2164-kube-api-access-4w8jq\") pod \"kube-proxy-rrhzw\" (UID: \"fa219967-a390-4f30-81b8-289a643f2164\") " pod="kube-system/kube-proxy-rrhzw" Feb 13 07:50:18.432247 kubelet[1834]: I0213 07:50:18.432214 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-run\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432266 kubelet[1834]: I0213 07:50:18.432252 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-config-path\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432296 kubelet[1834]: I0213 07:50:18.432285 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-cgroup\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432327 kubelet[1834]: I0213 07:50:18.432308 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-lib-modules\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432327 kubelet[1834]: I0213 07:50:18.432325 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa219967-a390-4f30-81b8-289a643f2164-kube-proxy\") pod \"kube-proxy-rrhzw\" (UID: \"fa219967-a390-4f30-81b8-289a643f2164\") " pod="kube-system/kube-proxy-rrhzw" Feb 13 07:50:18.432367 kubelet[1834]: I0213 07:50:18.432355 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa219967-a390-4f30-81b8-289a643f2164-lib-modules\") pod \"kube-proxy-rrhzw\" (UID: \"fa219967-a390-4f30-81b8-289a643f2164\") " pod="kube-system/kube-proxy-rrhzw" Feb 13 07:50:18.432390 kubelet[1834]: I0213 07:50:18.432376 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-hostproc\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432410 kubelet[1834]: I0213 07:50:18.432392 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cni-path\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432430 kubelet[1834]: I0213 07:50:18.432419 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79b4f201-1a80-417f-a771-7e4c634129c6-clustermesh-secrets\") pod \"cilium-dp66s\" (UID: 
\"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432451 kubelet[1834]: I0213 07:50:18.432435 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-host-proc-sys-net\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.432451 kubelet[1834]: I0213 07:50:18.432448 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-bpf-maps\") pod \"cilium-dp66s\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") " pod="kube-system/cilium-dp66s" Feb 13 07:50:18.435562 systemd[1]: Created slice kubepods-burstable-pod79b4f201_1a80_417f_a771_7e4c634129c6.slice. Feb 13 07:50:18.736715 env[1469]: time="2024-02-13T07:50:18.736464582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rrhzw,Uid:fa219967-a390-4f30-81b8-289a643f2164,Namespace:kube-system,Attempt:0,}" Feb 13 07:50:18.761918 env[1469]: time="2024-02-13T07:50:18.761820923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dp66s,Uid:79b4f201-1a80-417f-a771-7e4c634129c6,Namespace:kube-system,Attempt:0,}" Feb 13 07:50:19.402671 kubelet[1834]: E0213 07:50:19.402578 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:19.436949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount179954646.mount: Deactivated successfully. 
Feb 13 07:50:19.438679 env[1469]: time="2024-02-13T07:50:19.438658122Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:19.439455 env[1469]: time="2024-02-13T07:50:19.439411580Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:19.439794 env[1469]: time="2024-02-13T07:50:19.439784581Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:19.440256 env[1469]: time="2024-02-13T07:50:19.440242611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:19.441422 env[1469]: time="2024-02-13T07:50:19.441410798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:19.442706 env[1469]: time="2024-02-13T07:50:19.442694383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:19.443041 env[1469]: time="2024-02-13T07:50:19.443029063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:19.443432 env[1469]: time="2024-02-13T07:50:19.443420502Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:19.451603 env[1469]: time="2024-02-13T07:50:19.451570036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:50:19.451603 env[1469]: time="2024-02-13T07:50:19.451594280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:50:19.451691 env[1469]: time="2024-02-13T07:50:19.451608044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:50:19.451691 env[1469]: time="2024-02-13T07:50:19.451673649Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79 pid=1902 runtime=io.containerd.runc.v2 Feb 13 07:50:19.452404 env[1469]: time="2024-02-13T07:50:19.452265642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:50:19.452404 env[1469]: time="2024-02-13T07:50:19.452287541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:50:19.452449 env[1469]: time="2024-02-13T07:50:19.452402953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:50:19.452481 env[1469]: time="2024-02-13T07:50:19.452464205Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f41569a31f174ef03de30e60c4fc1ac0c14b199d3e89a6c2366de321d2bb72f0 pid=1913 runtime=io.containerd.runc.v2 Feb 13 07:50:19.457988 systemd[1]: Started cri-containerd-6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79.scope. Feb 13 07:50:19.458847 systemd[1]: Started cri-containerd-f41569a31f174ef03de30e60c4fc1ac0c14b199d3e89a6c2366de321d2bb72f0.scope. Feb 13 07:50:19.468996 env[1469]: time="2024-02-13T07:50:19.468962618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rrhzw,Uid:fa219967-a390-4f30-81b8-289a643f2164,Namespace:kube-system,Attempt:0,} returns sandbox id \"f41569a31f174ef03de30e60c4fc1ac0c14b199d3e89a6c2366de321d2bb72f0\"" Feb 13 07:50:19.469112 env[1469]: time="2024-02-13T07:50:19.469097526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dp66s,Uid:79b4f201-1a80-417f-a771-7e4c634129c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\"" Feb 13 07:50:19.469937 env[1469]: time="2024-02-13T07:50:19.469924379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 13 07:50:20.284737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775028288.mount: Deactivated successfully. 
Feb 13 07:50:20.403621 kubelet[1834]: E0213 07:50:20.403577 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:20.628217 env[1469]: time="2024-02-13T07:50:20.628170225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:20.628860 env[1469]: time="2024-02-13T07:50:20.628818848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:20.629345 env[1469]: time="2024-02-13T07:50:20.629296316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:20.630256 env[1469]: time="2024-02-13T07:50:20.630217197Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:20.630403 env[1469]: time="2024-02-13T07:50:20.630364242Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 13 07:50:20.630847 env[1469]: time="2024-02-13T07:50:20.630788424Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 07:50:20.631474 env[1469]: time="2024-02-13T07:50:20.631461295Z" level=info msg="CreateContainer within sandbox \"f41569a31f174ef03de30e60c4fc1ac0c14b199d3e89a6c2366de321d2bb72f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 07:50:20.636935 env[1469]: 
time="2024-02-13T07:50:20.636888818Z" level=info msg="CreateContainer within sandbox \"f41569a31f174ef03de30e60c4fc1ac0c14b199d3e89a6c2366de321d2bb72f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1bcaf0541d009d6421e6f40eac6e05a8e368304b9f71279b2f7f39e723f0e92e\"" Feb 13 07:50:20.637263 env[1469]: time="2024-02-13T07:50:20.637241528Z" level=info msg="StartContainer for \"1bcaf0541d009d6421e6f40eac6e05a8e368304b9f71279b2f7f39e723f0e92e\"" Feb 13 07:50:20.646185 systemd[1]: Started cri-containerd-1bcaf0541d009d6421e6f40eac6e05a8e368304b9f71279b2f7f39e723f0e92e.scope. Feb 13 07:50:20.658647 env[1469]: time="2024-02-13T07:50:20.658606541Z" level=info msg="StartContainer for \"1bcaf0541d009d6421e6f40eac6e05a8e368304b9f71279b2f7f39e723f0e92e\" returns successfully" Feb 13 07:50:21.404764 kubelet[1834]: E0213 07:50:21.404703 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:21.521480 kubelet[1834]: I0213 07:50:21.521436 1834 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rrhzw" podStartSLOduration=4.3605420200000005 podCreationTimestamp="2024-02-13 07:50:16 +0000 UTC" firstStartedPulling="2024-02-13 07:50:19.4696891 +0000 UTC m=+4.403791063" lastFinishedPulling="2024-02-13 07:50:20.630562148 +0000 UTC m=+5.564664107" observedRunningTime="2024-02-13 07:50:21.521335866 +0000 UTC m=+6.455437824" watchObservedRunningTime="2024-02-13 07:50:21.521415064 +0000 UTC m=+6.455517018" Feb 13 07:50:22.405584 kubelet[1834]: E0213 07:50:22.405563 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:23.405924 kubelet[1834]: E0213 07:50:23.405908 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:24.261893 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1965845929.mount: Deactivated successfully. Feb 13 07:50:24.406332 kubelet[1834]: E0213 07:50:24.406284 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:25.407348 kubelet[1834]: E0213 07:50:25.407299 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:50:25.914072 env[1469]: time="2024-02-13T07:50:25.914031465Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:25.914577 env[1469]: time="2024-02-13T07:50:25.914519839Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:25.915855 env[1469]: time="2024-02-13T07:50:25.915800911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:50:25.916127 env[1469]: time="2024-02-13T07:50:25.916082791Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 07:50:25.917281 env[1469]: time="2024-02-13T07:50:25.917232612Z" level=info msg="CreateContainer within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:50:25.922750 env[1469]: time="2024-02-13T07:50:25.922701222Z" level=info msg="CreateContainer 
within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\""
Feb 13 07:50:25.923110 env[1469]: time="2024-02-13T07:50:25.923078087Z" level=info msg="StartContainer for \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\""
Feb 13 07:50:25.931872 systemd[1]: Started cri-containerd-f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6.scope.
Feb 13 07:50:25.942601 env[1469]: time="2024-02-13T07:50:25.942542672Z" level=info msg="StartContainer for \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\" returns successfully"
Feb 13 07:50:25.946931 systemd[1]: cri-containerd-f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6.scope: Deactivated successfully.
Feb 13 07:50:26.408595 kubelet[1834]: E0213 07:50:26.408484 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:26.743919 systemd[1]: Started sshd@5-147.75.90.7:22-184.168.31.172:40394.service.
Feb 13 07:50:26.924786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6-rootfs.mount: Deactivated successfully.
Feb 13 07:50:26.948593 sshd[2194]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=184.168.31.172 user=root
Feb 13 07:50:27.232844 env[1469]: time="2024-02-13T07:50:27.232726893Z" level=info msg="shim disconnected" id=f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6
Feb 13 07:50:27.232844 env[1469]: time="2024-02-13T07:50:27.232826586Z" level=warning msg="cleaning up after shim disconnected" id=f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6 namespace=k8s.io
Feb 13 07:50:27.232844 env[1469]: time="2024-02-13T07:50:27.232855215Z" level=info msg="cleaning up dead shim"
Feb 13 07:50:27.241064 env[1469]: time="2024-02-13T07:50:27.241044788Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:50:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2196 runtime=io.containerd.runc.v2\n"
Feb 13 07:50:27.409361 kubelet[1834]: E0213 07:50:27.409255 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:27.544535 env[1469]: time="2024-02-13T07:50:27.544321414Z" level=info msg="CreateContainer within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 07:50:27.560675 env[1469]: time="2024-02-13T07:50:27.560528797Z" level=info msg="CreateContainer within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\""
Feb 13 07:50:27.561539 env[1469]: time="2024-02-13T07:50:27.561437261Z" level=info msg="StartContainer for \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\""
Feb 13 07:50:27.585352 systemd[1]: Started cri-containerd-89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0.scope.
Feb 13 07:50:27.597080 env[1469]: time="2024-02-13T07:50:27.597034493Z" level=info msg="StartContainer for \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\" returns successfully"
Feb 13 07:50:27.603293 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 07:50:27.603431 systemd[1]: Stopped systemd-sysctl.service.
Feb 13 07:50:27.603516 systemd[1]: Stopping systemd-sysctl.service...
Feb 13 07:50:27.604553 systemd[1]: Starting systemd-sysctl.service...
Feb 13 07:50:27.604751 systemd[1]: cri-containerd-89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0.scope: Deactivated successfully.
Feb 13 07:50:27.608344 systemd[1]: Finished systemd-sysctl.service.
Feb 13 07:50:27.614874 env[1469]: time="2024-02-13T07:50:27.614827467Z" level=info msg="shim disconnected" id=89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0
Feb 13 07:50:27.614874 env[1469]: time="2024-02-13T07:50:27.614868392Z" level=warning msg="cleaning up after shim disconnected" id=89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0 namespace=k8s.io
Feb 13 07:50:27.614874 env[1469]: time="2024-02-13T07:50:27.614874223Z" level=info msg="cleaning up dead shim"
Feb 13 07:50:27.618488 env[1469]: time="2024-02-13T07:50:27.618451567Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:50:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2259 runtime=io.containerd.runc.v2\n"
Feb 13 07:50:27.924241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0-rootfs.mount: Deactivated successfully.
Feb 13 07:50:28.409510 kubelet[1834]: E0213 07:50:28.409393 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:28.523685 sshd[2194]: Failed password for root from 184.168.31.172 port 40394 ssh2
Feb 13 07:50:28.540162 env[1469]: time="2024-02-13T07:50:28.540029641Z" level=info msg="CreateContainer within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 07:50:28.551973 env[1469]: time="2024-02-13T07:50:28.551925784Z" level=info msg="CreateContainer within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\""
Feb 13 07:50:28.552138 env[1469]: time="2024-02-13T07:50:28.552120144Z" level=info msg="StartContainer for \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\""
Feb 13 07:50:28.560945 systemd[1]: Started cri-containerd-e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37.scope.
Feb 13 07:50:28.573627 env[1469]: time="2024-02-13T07:50:28.573601104Z" level=info msg="StartContainer for \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\" returns successfully"
Feb 13 07:50:28.575248 systemd[1]: cri-containerd-e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37.scope: Deactivated successfully.
Feb 13 07:50:28.599097 env[1469]: time="2024-02-13T07:50:28.599053361Z" level=info msg="shim disconnected" id=e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37
Feb 13 07:50:28.599214 env[1469]: time="2024-02-13T07:50:28.599098840Z" level=warning msg="cleaning up after shim disconnected" id=e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37 namespace=k8s.io
Feb 13 07:50:28.599214 env[1469]: time="2024-02-13T07:50:28.599112770Z" level=info msg="cleaning up dead shim"
Feb 13 07:50:28.603683 env[1469]: time="2024-02-13T07:50:28.603657531Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:50:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2314 runtime=io.containerd.runc.v2\n"
Feb 13 07:50:28.924319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37-rootfs.mount: Deactivated successfully.
Feb 13 07:50:29.410230 kubelet[1834]: E0213 07:50:29.410111 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:29.547830 env[1469]: time="2024-02-13T07:50:29.547693509Z" level=info msg="CreateContainer within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 07:50:29.562808 env[1469]: time="2024-02-13T07:50:29.562791636Z" level=info msg="CreateContainer within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\""
Feb 13 07:50:29.563084 env[1469]: time="2024-02-13T07:50:29.563071936Z" level=info msg="StartContainer for \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\""
Feb 13 07:50:29.571584 systemd[1]: Started cri-containerd-225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a.scope.
Feb 13 07:50:29.582279 env[1469]: time="2024-02-13T07:50:29.582253655Z" level=info msg="StartContainer for \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\" returns successfully"
Feb 13 07:50:29.582544 systemd[1]: cri-containerd-225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a.scope: Deactivated successfully.
Feb 13 07:50:29.608955 env[1469]: time="2024-02-13T07:50:29.608924749Z" level=info msg="shim disconnected" id=225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a
Feb 13 07:50:29.609067 env[1469]: time="2024-02-13T07:50:29.608956303Z" level=warning msg="cleaning up after shim disconnected" id=225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a namespace=k8s.io
Feb 13 07:50:29.609067 env[1469]: time="2024-02-13T07:50:29.608964704Z" level=info msg="cleaning up dead shim"
Feb 13 07:50:29.613294 env[1469]: time="2024-02-13T07:50:29.613270948Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:50:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2367 runtime=io.containerd.runc.v2\n"
Feb 13 07:50:29.923137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a-rootfs.mount: Deactivated successfully.
Feb 13 07:50:30.050318 sshd[2194]: Received disconnect from 184.168.31.172 port 40394:11: Bye Bye [preauth]
Feb 13 07:50:30.050318 sshd[2194]: Disconnected from authenticating user root 184.168.31.172 port 40394 [preauth]
Feb 13 07:50:30.052968 systemd[1]: sshd@5-147.75.90.7:22-184.168.31.172:40394.service: Deactivated successfully.
Feb 13 07:50:30.411357 kubelet[1834]: E0213 07:50:30.411250 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:30.542413 systemd[1]: Started sshd@6-147.75.90.7:22-60.164.242.224:33636.service.
Feb 13 07:50:30.546892 env[1469]: time="2024-02-13T07:50:30.546870172Z" level=info msg="CreateContainer within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 07:50:30.552827 env[1469]: time="2024-02-13T07:50:30.552781772Z" level=info msg="CreateContainer within sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\""
Feb 13 07:50:30.553096 env[1469]: time="2024-02-13T07:50:30.553078162Z" level=info msg="StartContainer for \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\""
Feb 13 07:50:30.561418 systemd[1]: Started cri-containerd-83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0.scope.
Feb 13 07:50:30.574359 env[1469]: time="2024-02-13T07:50:30.574302695Z" level=info msg="StartContainer for \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\" returns successfully"
Feb 13 07:50:30.620140 kubelet[1834]: I0213 07:50:30.620124 1834 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 13 07:50:30.629601 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 07:50:30.767572 kernel: Initializing XFRM netlink socket
Feb 13 07:50:30.780622 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 07:50:31.412535 kubelet[1834]: E0213 07:50:31.412417 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:31.513114 sshd[2382]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=60.164.242.224 user=root
Feb 13 07:50:31.578362 kubelet[1834]: I0213 07:50:31.578257 1834 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dp66s" podStartSLOduration=9.131660966 podCreationTimestamp="2024-02-13 07:50:16 +0000 UTC" firstStartedPulling="2024-02-13 07:50:19.469730227 +0000 UTC m=+4.403832186" lastFinishedPulling="2024-02-13 07:50:25.916233978 +0000 UTC m=+10.850335940" observedRunningTime="2024-02-13 07:50:31.578061887 +0000 UTC m=+16.512163921" watchObservedRunningTime="2024-02-13 07:50:31.57816472 +0000 UTC m=+16.512266772"
Feb 13 07:50:32.373284 systemd-networkd[1313]: cilium_host: Link UP
Feb 13 07:50:32.373428 systemd-networkd[1313]: cilium_net: Link UP
Feb 13 07:50:32.373431 systemd-networkd[1313]: cilium_net: Gained carrier
Feb 13 07:50:32.374274 systemd-networkd[1313]: cilium_host: Gained carrier
Feb 13 07:50:32.381573 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 13 07:50:32.381977 systemd-networkd[1313]: cilium_host: Gained IPv6LL
Feb 13 07:50:32.412912 kubelet[1834]: E0213 07:50:32.412892 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:32.426357 systemd-networkd[1313]: cilium_vxlan: Link UP
Feb 13 07:50:32.426362 systemd-networkd[1313]: cilium_vxlan: Gained carrier
Feb 13 07:50:32.600624 kernel: NET: Registered PF_ALG protocol family
Feb 13 07:50:33.119599 systemd-networkd[1313]: lxc_health: Link UP
Feb 13 07:50:33.142573 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 13 07:50:33.142644 systemd-networkd[1313]: lxc_health: Gained carrier
Feb 13 07:50:33.154693 systemd-networkd[1313]: cilium_net: Gained IPv6LL
Feb 13 07:50:33.332144 kubelet[1834]: I0213 07:50:33.332127 1834 topology_manager.go:215] "Topology Admit Handler" podUID="aa7ed66b-3374-49ab-a778-711275ce70ea" podNamespace="default" podName="nginx-deployment-6d5f899847-rsz8p"
Feb 13 07:50:33.335093 systemd[1]: Created slice kubepods-besteffort-podaa7ed66b_3374_49ab_a778_711275ce70ea.slice.
Feb 13 07:50:33.413200 kubelet[1834]: E0213 07:50:33.413158 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:33.436441 kubelet[1834]: I0213 07:50:33.436400 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf9np\" (UniqueName: \"kubernetes.io/projected/aa7ed66b-3374-49ab-a778-711275ce70ea-kube-api-access-rf9np\") pod \"nginx-deployment-6d5f899847-rsz8p\" (UID: \"aa7ed66b-3374-49ab-a778-711275ce70ea\") " pod="default/nginx-deployment-6d5f899847-rsz8p"
Feb 13 07:50:33.438714 sshd[2382]: Failed password for root from 60.164.242.224 port 33636 ssh2
Feb 13 07:50:33.636693 env[1469]: time="2024-02-13T07:50:33.636639995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rsz8p,Uid:aa7ed66b-3374-49ab-a778-711275ce70ea,Namespace:default,Attempt:0,}"
Feb 13 07:50:33.670900 systemd-networkd[1313]: lxcd4ae0d1d5b2a: Link UP
Feb 13 07:50:33.694582 kernel: eth0: renamed from tmp7bce6
Feb 13 07:50:33.737000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 13 07:50:33.737039 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd4ae0d1d5b2a: link becomes ready
Feb 13 07:50:33.737048 systemd-networkd[1313]: lxcd4ae0d1d5b2a: Gained carrier
Feb 13 07:50:33.985693 systemd-networkd[1313]: cilium_vxlan: Gained IPv6LL
Feb 13 07:50:34.305676 systemd-networkd[1313]: lxc_health: Gained IPv6LL
Feb 13 07:50:34.414083 kubelet[1834]: E0213 07:50:34.414060 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:34.771809 sshd[2382]: Received disconnect from 60.164.242.224 port 33636:11: Bye Bye [preauth]
Feb 13 07:50:34.771809 sshd[2382]: Disconnected from authenticating user root 60.164.242.224 port 33636 [preauth]
Feb 13 07:50:34.772416 systemd[1]: sshd@6-147.75.90.7:22-60.164.242.224:33636.service: Deactivated successfully.
Feb 13 07:50:35.400981 kubelet[1834]: E0213 07:50:35.400931 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:35.415161 kubelet[1834]: E0213 07:50:35.415119 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:35.649744 systemd-networkd[1313]: lxcd4ae0d1d5b2a: Gained IPv6LL
Feb 13 07:50:36.064481 env[1469]: time="2024-02-13T07:50:36.064449863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 07:50:36.064481 env[1469]: time="2024-02-13T07:50:36.064470616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 07:50:36.064481 env[1469]: time="2024-02-13T07:50:36.064477495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 07:50:36.064721 env[1469]: time="2024-02-13T07:50:36.064541018Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bce6abc51afb3991e18eebe4d60c29905b5f7f89c370d63e9ed4a405b61d2aa pid=3011 runtime=io.containerd.runc.v2
Feb 13 07:50:36.070238 systemd[1]: Started cri-containerd-7bce6abc51afb3991e18eebe4d60c29905b5f7f89c370d63e9ed4a405b61d2aa.scope.
Feb 13 07:50:36.091870 env[1469]: time="2024-02-13T07:50:36.091805423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rsz8p,Uid:aa7ed66b-3374-49ab-a778-711275ce70ea,Namespace:default,Attempt:0,} returns sandbox id \"7bce6abc51afb3991e18eebe4d60c29905b5f7f89c370d63e9ed4a405b61d2aa\""
Feb 13 07:50:36.092518 env[1469]: time="2024-02-13T07:50:36.092475931Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 07:50:36.415387 kubelet[1834]: E0213 07:50:36.415246 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:37.416275 kubelet[1834]: E0213 07:50:37.416164 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:38.417138 kubelet[1834]: E0213 07:50:38.417017 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:39.417961 kubelet[1834]: E0213 07:50:39.417849 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:40.418317 kubelet[1834]: E0213 07:50:40.418201 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:41.419257 kubelet[1834]: E0213 07:50:41.419150 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:42.420317 kubelet[1834]: E0213 07:50:42.420208 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:43.420825 kubelet[1834]: E0213 07:50:43.420715 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:44.421960 kubelet[1834]: E0213 07:50:44.421846 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:44.823108 update_engine[1461]: I0213 07:50:44.822880 1461 update_attempter.cc:509] Updating boot flags...
Feb 13 07:50:45.422519 kubelet[1834]: E0213 07:50:45.422411 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:46.423156 kubelet[1834]: E0213 07:50:46.423051 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:47.424401 kubelet[1834]: E0213 07:50:47.424292 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:48.425487 kubelet[1834]: E0213 07:50:48.425377 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:49.426572 kubelet[1834]: E0213 07:50:49.426441 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:50.427618 kubelet[1834]: E0213 07:50:50.427511 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:50.446964 systemd[1]: Started sshd@7-147.75.90.7:22-129.226.4.248:34404.service.
Feb 13 07:50:50.853088 systemd[1]: Started sshd@8-147.75.90.7:22-103.147.242.96:35442.service.
Feb 13 07:50:51.427820 kubelet[1834]: E0213 07:50:51.427693 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:51.448709 sshd[3063]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=129.226.4.248 user=root
Feb 13 07:50:52.053137 sshd[3068]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.147.242.96 user=root
Feb 13 07:50:52.428026 kubelet[1834]: E0213 07:50:52.427921 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:52.788156 sshd[3063]: Failed password for root from 129.226.4.248 port 34404 ssh2
Feb 13 07:50:53.164290 sshd[3063]: Received disconnect from 129.226.4.248 port 34404:11: Bye Bye [preauth]
Feb 13 07:50:53.164290 sshd[3063]: Disconnected from authenticating user root 129.226.4.248 port 34404 [preauth]
Feb 13 07:50:53.166771 systemd[1]: sshd@7-147.75.90.7:22-129.226.4.248:34404.service: Deactivated successfully.
Feb 13 07:50:53.428875 kubelet[1834]: E0213 07:50:53.428654 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:53.528175 sshd[3068]: Failed password for root from 103.147.242.96 port 35442 ssh2
Feb 13 07:50:53.809345 sshd[3068]: Received disconnect from 103.147.242.96 port 35442:11: Bye Bye [preauth]
Feb 13 07:50:53.809345 sshd[3068]: Disconnected from authenticating user root 103.147.242.96 port 35442 [preauth]
Feb 13 07:50:53.811726 systemd[1]: sshd@8-147.75.90.7:22-103.147.242.96:35442.service: Deactivated successfully.
Feb 13 07:50:54.429977 kubelet[1834]: E0213 07:50:54.429871 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:55.400875 kubelet[1834]: E0213 07:50:55.400761 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:55.431174 kubelet[1834]: E0213 07:50:55.431062 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:56.431979 kubelet[1834]: E0213 07:50:56.431867 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:57.432662 kubelet[1834]: E0213 07:50:57.432538 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:58.433381 kubelet[1834]: E0213 07:50:58.433268 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:58.973260 systemd[1]: Started sshd@9-147.75.90.7:22-128.199.168.119:45100.service.
Feb 13 07:50:59.434267 kubelet[1834]: E0213 07:50:59.434200 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:50:59.944149 sshd[3073]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=128.199.168.119 user=root
Feb 13 07:50:59.944386 sshd[3073]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 07:51:00.435104 kubelet[1834]: E0213 07:51:00.434995 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:01.435726 kubelet[1834]: E0213 07:51:01.435613 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:01.715438 sshd[3073]: Failed password for root from 128.199.168.119 port 45100 ssh2
Feb 13 07:51:02.436699 kubelet[1834]: E0213 07:51:02.436587 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:03.197235 sshd[3073]: Received disconnect from 128.199.168.119 port 45100:11: Bye Bye [preauth]
Feb 13 07:51:03.197235 sshd[3073]: Disconnected from authenticating user root 128.199.168.119 port 45100 [preauth]
Feb 13 07:51:03.199797 systemd[1]: sshd@9-147.75.90.7:22-128.199.168.119:45100.service: Deactivated successfully.
Feb 13 07:51:03.437778 kubelet[1834]: E0213 07:51:03.437675 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:04.438782 kubelet[1834]: E0213 07:51:04.438673 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:05.439973 kubelet[1834]: E0213 07:51:05.439862 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:06.440985 kubelet[1834]: E0213 07:51:06.440876 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:06.879802 systemd[1]: Started sshd@10-147.75.90.7:22-60.164.242.224:40402.service.
Feb 13 07:51:07.441801 kubelet[1834]: E0213 07:51:07.441679 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:07.844633 sshd[3077]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=60.164.242.224 user=root
Feb 13 07:51:08.442965 kubelet[1834]: E0213 07:51:08.442885 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:09.444266 kubelet[1834]: E0213 07:51:09.444143 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:10.047067 sshd[3077]: Failed password for root from 60.164.242.224 port 40402 ssh2
Feb 13 07:51:10.445129 kubelet[1834]: E0213 07:51:10.445019 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:11.103720 sshd[3077]: Received disconnect from 60.164.242.224 port 40402:11: Bye Bye [preauth]
Feb 13 07:51:11.103720 sshd[3077]: Disconnected from authenticating user root 60.164.242.224 port 40402 [preauth]
Feb 13 07:51:11.106198 systemd[1]: sshd@10-147.75.90.7:22-60.164.242.224:40402.service: Deactivated successfully.
Feb 13 07:51:11.445883 kubelet[1834]: E0213 07:51:11.445770 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:12.446740 kubelet[1834]: E0213 07:51:12.446631 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:13.447924 kubelet[1834]: E0213 07:51:13.447817 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:14.448162 kubelet[1834]: E0213 07:51:14.448060 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:15.401406 kubelet[1834]: E0213 07:51:15.401318 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:15.448971 kubelet[1834]: E0213 07:51:15.448859 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:16.449226 kubelet[1834]: E0213 07:51:16.449099 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:17.450402 kubelet[1834]: E0213 07:51:17.450290 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:18.451652 kubelet[1834]: E0213 07:51:18.451525 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:19.452480 kubelet[1834]: E0213 07:51:19.452424 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:20.453723 kubelet[1834]: E0213 07:51:20.453616 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:21.454350 kubelet[1834]: E0213 07:51:21.454240 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:22.455412 kubelet[1834]: E0213 07:51:22.455298 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:22.866528 systemd[1]: Started sshd@11-147.75.90.7:22-184.168.31.172:59036.service.
Feb 13 07:51:23.054610 sshd[3085]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=184.168.31.172 user=root
Feb 13 07:51:23.456544 kubelet[1834]: E0213 07:51:23.456426 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:24.456817 kubelet[1834]: E0213 07:51:24.456709 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:25.121622 sshd[3085]: Failed password for root from 184.168.31.172 port 59036 ssh2
Feb 13 07:51:25.457921 kubelet[1834]: E0213 07:51:25.457699 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:26.151624 sshd[3085]: Received disconnect from 184.168.31.172 port 59036:11: Bye Bye [preauth]
Feb 13 07:51:26.151624 sshd[3085]: Disconnected from authenticating user root 184.168.31.172 port 59036 [preauth]
Feb 13 07:51:26.154094 systemd[1]: sshd@11-147.75.90.7:22-184.168.31.172:59036.service: Deactivated successfully.
Feb 13 07:51:26.458440 kubelet[1834]: E0213 07:51:26.458166 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:27.458617 kubelet[1834]: E0213 07:51:27.458497 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:28.459736 kubelet[1834]: E0213 07:51:28.459622 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:29.460339 kubelet[1834]: E0213 07:51:29.460229 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:30.461155 kubelet[1834]: E0213 07:51:30.461050 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:31.461618 kubelet[1834]: E0213 07:51:31.461505 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:32.462639 kubelet[1834]: E0213 07:51:32.462521 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:33.462821 kubelet[1834]: E0213 07:51:33.462699 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:34.463294 kubelet[1834]: E0213 07:51:34.463185 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:35.401434 kubelet[1834]: E0213 07:51:35.401326 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:51:35.464596 kubelet[1834]: E0213 07:51:35.464459 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
07:51:36.465857 kubelet[1834]: E0213 07:51:36.465744 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:37.466466 kubelet[1834]: E0213 07:51:37.466354 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:38.467047 kubelet[1834]: E0213 07:51:38.466951 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:39.468006 kubelet[1834]: E0213 07:51:39.467897 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:40.468295 kubelet[1834]: E0213 07:51:40.468186 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:41.468765 kubelet[1834]: E0213 07:51:41.468655 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:42.469967 kubelet[1834]: E0213 07:51:42.469849 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:43.470434 kubelet[1834]: E0213 07:51:43.470327 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:44.471597 kubelet[1834]: E0213 07:51:44.471477 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:45.472528 kubelet[1834]: E0213 07:51:45.472468 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:46.473052 kubelet[1834]: E0213 07:51:46.472941 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:47.474259 kubelet[1834]: E0213 07:51:47.474150 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:48.474905 kubelet[1834]: E0213 07:51:48.474796 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:49.475709 kubelet[1834]: E0213 07:51:49.475585 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:50.476478 kubelet[1834]: E0213 07:51:50.476372 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:51.477417 kubelet[1834]: E0213 07:51:51.477312 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:52.478519 kubelet[1834]: E0213 07:51:52.478401 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:53.479270 kubelet[1834]: E0213 07:51:53.479149 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:54.480255 kubelet[1834]: E0213 07:51:54.480129 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:55.401086 kubelet[1834]: E0213 07:51:55.400977 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:55.481221 kubelet[1834]: E0213 07:51:55.481109 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:56.482305 kubelet[1834]: E0213 07:51:56.482197 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:57.482996 kubelet[1834]: E0213 07:51:57.482884 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:58.483994 kubelet[1834]: E0213 07:51:58.483877 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:59.484230 kubelet[1834]: E0213 07:51:59.484119 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:51:59.889870 update_engine[1461]: I0213 07:51:59.889750 1461 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 07:51:59.889870 update_engine[1461]: I0213 07:51:59.889832 1461 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 07:51:59.890866 update_engine[1461]: I0213 07:51:59.890525 1461 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 07:51:59.891462 update_engine[1461]: I0213 07:51:59.891382 1461 omaha_request_params.cc:62] Current group set to lts
Feb 13 07:51:59.891718 update_engine[1461]: I0213 07:51:59.891701 1461 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 07:51:59.891846 update_engine[1461]: I0213 07:51:59.891721 1461 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 07:51:59.891846 update_engine[1461]: I0213 07:51:59.891754 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 07:51:59.891846 update_engine[1461]: I0213 07:51:59.891817 1461 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 07:51:59.892119 update_engine[1461]: I0213 07:51:59.891954 1461 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 13 07:51:59.892119 update_engine[1461]: I0213 07:51:59.891971 1461 omaha_request_action.cc:271] Request:
Feb 13 07:51:59.892119 update_engine[1461]:
Feb 13 07:51:59.892119 update_engine[1461]:
Feb 13 07:51:59.892119 update_engine[1461]:
Feb 13 07:51:59.892119 update_engine[1461]:
Feb 13 07:51:59.892119 update_engine[1461]:
Feb 13 07:51:59.892119 update_engine[1461]:
Feb 13 07:51:59.892119 update_engine[1461]:
Feb 13 07:51:59.892119 update_engine[1461]:
Feb 13 07:51:59.892119 update_engine[1461]: I0213 07:51:59.891983 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 07:51:59.893222 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 07:51:59.895331 update_engine[1461]: I0213 07:51:59.895248 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 07:51:59.895543 update_engine[1461]: E0213 07:51:59.895480 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 07:51:59.895713 update_engine[1461]: I0213 07:51:59.895664 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 07:52:00.105469 systemd[1]: Started sshd@12-147.75.90.7:22-103.147.242.96:44955.service.
Feb 13 07:52:00.476674 systemd[1]: Started sshd@13-147.75.90.7:22-128.199.168.119:35332.service.
Feb 13 07:52:00.484228 kubelet[1834]: E0213 07:52:00.484184 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:01.308977 sshd[3093]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.147.242.96 user=root
Feb 13 07:52:01.485279 kubelet[1834]: E0213 07:52:01.485199 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:01.812220 sshd[3096]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=128.199.168.119 user=root
Feb 13 07:52:02.485724 kubelet[1834]: E0213 07:52:02.485643 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:03.486274 kubelet[1834]: E0213 07:52:03.486199 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:03.591877 sshd[3093]: Failed password for root from 103.147.242.96 port 44955 ssh2
Feb 13 07:52:04.095151 sshd[3096]: Failed password for root from 128.199.168.119 port 35332 ssh2
Feb 13 07:52:04.486825 kubelet[1834]: E0213 07:52:04.486746 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:04.608396 sshd[3093]: Received disconnect from 103.147.242.96 port 44955:11: Bye Bye [preauth]
Feb 13 07:52:04.608396 sshd[3093]: Disconnected from authenticating user root 103.147.242.96 port 44955 [preauth]
Feb 13 07:52:04.611066 systemd[1]: sshd@12-147.75.90.7:22-103.147.242.96:44955.service: Deactivated successfully.
Feb 13 07:52:05.140269 sshd[3096]: Received disconnect from 128.199.168.119 port 35332:11: Bye Bye [preauth]
Feb 13 07:52:05.140269 sshd[3096]: Disconnected from authenticating user root 128.199.168.119 port 35332 [preauth]
Feb 13 07:52:05.142755 systemd[1]: sshd@13-147.75.90.7:22-128.199.168.119:35332.service: Deactivated successfully.
Feb 13 07:52:05.487168 kubelet[1834]: E0213 07:52:05.486981 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:06.488245 kubelet[1834]: E0213 07:52:06.488164 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:07.489366 kubelet[1834]: E0213 07:52:07.489288 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:08.489597 kubelet[1834]: E0213 07:52:08.489494 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:09.490730 kubelet[1834]: E0213 07:52:09.490630 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:09.814961 update_engine[1461]: I0213 07:52:09.814720 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 07:52:09.815845 update_engine[1461]: I0213 07:52:09.815212 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 07:52:09.815845 update_engine[1461]: E0213 07:52:09.815424 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 07:52:09.815845 update_engine[1461]: I0213 07:52:09.815711 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 07:52:10.491458 kubelet[1834]: E0213 07:52:10.491353 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:11.492424 kubelet[1834]: E0213 07:52:11.492304 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:12.493532 kubelet[1834]: E0213 07:52:12.493423 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:13.493926 kubelet[1834]: E0213 07:52:13.493809 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:14.494314 kubelet[1834]: E0213 07:52:14.494205 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:15.400953 kubelet[1834]: E0213 07:52:15.400841 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:15.494954 kubelet[1834]: E0213 07:52:15.494848 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:16.495777 kubelet[1834]: E0213 07:52:16.495667 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:17.496666 kubelet[1834]: E0213 07:52:17.496582 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:18.497213 kubelet[1834]: E0213 07:52:18.497097 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:19.497956 kubelet[1834]: E0213 07:52:19.497851 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:19.608809 systemd[1]: Started sshd@14-147.75.90.7:22-129.226.4.248:39020.service.
Feb 13 07:52:19.824323 update_engine[1461]: I0213 07:52:19.824079 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 07:52:19.825102 update_engine[1461]: I0213 07:52:19.824594 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 07:52:19.825102 update_engine[1461]: E0213 07:52:19.824818 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 07:52:19.825102 update_engine[1461]: I0213 07:52:19.825089 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 07:52:20.499008 kubelet[1834]: E0213 07:52:20.498893 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:20.613489 sshd[3103]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=129.226.4.248 user=root
Feb 13 07:52:21.001099 systemd[1]: Started sshd@15-147.75.90.7:22-184.168.31.172:49294.service.
Feb 13 07:52:21.142420 systemd[1]: Started sshd@16-147.75.90.7:22-185.11.61.88:18132.service.
Feb 13 07:52:21.227195 sshd[3108]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=184.168.31.172 user=root
Feb 13 07:52:21.499303 kubelet[1834]: E0213 07:52:21.499240 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:22.104367 sshd[3111]: Invalid user user1 from 185.11.61.88 port 18132
Feb 13 07:52:22.110675 sshd[3111]: pam_faillock(sshd:auth): User unknown
Feb 13 07:52:22.111670 sshd[3111]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 07:52:22.111758 sshd[3111]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.11.61.88
Feb 13 07:52:22.112719 sshd[3111]: pam_faillock(sshd:auth): User unknown
Feb 13 07:52:22.499610 kubelet[1834]: E0213 07:52:22.499478 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:22.504786 sshd[3103]: Failed password for root from 129.226.4.248 port 39020 ssh2
Feb 13 07:52:23.254392 sshd[3108]: Failed password for root from 184.168.31.172 port 49294 ssh2
Feb 13 07:52:23.499829 kubelet[1834]: E0213 07:52:23.499727 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:23.872719 sshd[3103]: Received disconnect from 129.226.4.248 port 39020:11: Bye Bye [preauth]
Feb 13 07:52:23.872719 sshd[3103]: Disconnected from authenticating user root 129.226.4.248 port 39020 [preauth]
Feb 13 07:52:23.875229 systemd[1]: sshd@14-147.75.90.7:22-129.226.4.248:39020.service: Deactivated successfully.
Feb 13 07:52:23.944069 sshd[3111]: Failed password for invalid user user1 from 185.11.61.88 port 18132 ssh2
Feb 13 07:52:24.331155 sshd[3108]: Received disconnect from 184.168.31.172 port 49294:11: Bye Bye [preauth]
Feb 13 07:52:24.331155 sshd[3108]: Disconnected from authenticating user root 184.168.31.172 port 49294 [preauth]
Feb 13 07:52:24.333511 systemd[1]: sshd@15-147.75.90.7:22-184.168.31.172:49294.service: Deactivated successfully.
Feb 13 07:52:24.500869 kubelet[1834]: E0213 07:52:24.500724 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:25.501452 kubelet[1834]: E0213 07:52:25.501349 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:25.596730 sshd[3111]: Received disconnect from 185.11.61.88 port 18132:11: Client disconnecting normally [preauth]
Feb 13 07:52:25.596730 sshd[3111]: Disconnected from invalid user user1 185.11.61.88 port 18132 [preauth]
Feb 13 07:52:25.599302 systemd[1]: sshd@16-147.75.90.7:22-185.11.61.88:18132.service: Deactivated successfully.
Feb 13 07:52:26.501672 kubelet[1834]: E0213 07:52:26.501547 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:27.502038 kubelet[1834]: E0213 07:52:27.501970 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:28.502648 kubelet[1834]: E0213 07:52:28.502577 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:29.502792 kubelet[1834]: E0213 07:52:29.502729 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:29.824736 update_engine[1461]: I0213 07:52:29.824445 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 07:52:29.825616 update_engine[1461]: I0213 07:52:29.824959 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 07:52:29.825616 update_engine[1461]: E0213 07:52:29.825177 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 07:52:29.825616 update_engine[1461]: I0213 07:52:29.825417 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 07:52:29.825616 update_engine[1461]: I0213 07:52:29.825435 1461 omaha_request_action.cc:621] Omaha request response:
Feb 13 07:52:29.825616 update_engine[1461]: E0213 07:52:29.825603 1461 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825635 1461 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825645 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825654 1461 update_attempter.cc:306] Processing Done.
Feb 13 07:52:29.826095 update_engine[1461]: E0213 07:52:29.825680 1461 update_attempter.cc:619] Update failed.
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825690 1461 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825699 1461 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825708 1461 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825861 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825913 1461 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825923 1461 omaha_request_action.cc:271] Request:
Feb 13 07:52:29.826095 update_engine[1461]:
Feb 13 07:52:29.826095 update_engine[1461]:
Feb 13 07:52:29.826095 update_engine[1461]:
Feb 13 07:52:29.826095 update_engine[1461]:
Feb 13 07:52:29.826095 update_engine[1461]:
Feb 13 07:52:29.826095 update_engine[1461]:
Feb 13 07:52:29.826095 update_engine[1461]: I0213 07:52:29.825933 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 07:52:29.827179 update_engine[1461]: I0213 07:52:29.826257 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 07:52:29.827179 update_engine[1461]: E0213 07:52:29.826427 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 07:52:29.827179 update_engine[1461]: I0213 07:52:29.826581 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 07:52:29.827179 update_engine[1461]: I0213 07:52:29.826597 1461 omaha_request_action.cc:621] Omaha request response:
Feb 13 07:52:29.827179 update_engine[1461]: I0213 07:52:29.826608 1461 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 07:52:29.827179 update_engine[1461]: I0213 07:52:29.826617 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 07:52:29.827179 update_engine[1461]: I0213 07:52:29.826623 1461 update_attempter.cc:306] Processing Done.
Feb 13 07:52:29.827179 update_engine[1461]: I0213 07:52:29.826630 1461 update_attempter.cc:310] Error event sent.
Feb 13 07:52:29.827179 update_engine[1461]: I0213 07:52:29.826658 1461 update_check_scheduler.cc:74] Next update check in 43m3s
Feb 13 07:52:29.827323 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 07:52:29.827323 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 07:52:30.503266 kubelet[1834]: E0213 07:52:30.503174 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:31.503590 kubelet[1834]: E0213 07:52:31.503546 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:32.504355 kubelet[1834]: E0213 07:52:32.504244 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:33.505368 kubelet[1834]: E0213 07:52:33.505323 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:34.505937 kubelet[1834]: E0213 07:52:34.505855 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:35.401277 kubelet[1834]: E0213 07:52:35.401202 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:35.506446 kubelet[1834]: E0213 07:52:35.506345 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:36.506698 kubelet[1834]: E0213 07:52:36.506589 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:37.507107 kubelet[1834]: E0213 07:52:37.507007 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:38.507372 kubelet[1834]: E0213 07:52:38.507250 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:39.508455 kubelet[1834]: E0213 07:52:39.508354 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:40.509663 kubelet[1834]: E0213 07:52:40.509538 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:41.510851 kubelet[1834]: E0213 07:52:41.510743 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:42.511294 kubelet[1834]: E0213 07:52:42.511173 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:43.511711 kubelet[1834]: E0213 07:52:43.511603 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:44.512165 kubelet[1834]: E0213 07:52:44.512052 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:45.513010 kubelet[1834]: E0213 07:52:45.512887 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:46.513170 kubelet[1834]: E0213 07:52:46.513056 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:47.514393 kubelet[1834]: E0213 07:52:47.514286 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:48.515403 kubelet[1834]: E0213 07:52:48.515294 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:49.516148 kubelet[1834]: E0213 07:52:49.516026 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:50.517210 kubelet[1834]: E0213 07:52:50.517132 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:51.517869 kubelet[1834]: E0213 07:52:51.517760 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:52.518966 kubelet[1834]: E0213 07:52:52.518858 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:53.519926 kubelet[1834]: E0213 07:52:53.519853 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:54.520920 kubelet[1834]: E0213 07:52:54.520807 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:55.400684 kubelet[1834]: E0213 07:52:55.400555 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:55.521815 kubelet[1834]: E0213 07:52:55.521710 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:56.522660 kubelet[1834]: E0213 07:52:56.522445 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:57.523254 kubelet[1834]: E0213 07:52:57.523227 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:58.524065 kubelet[1834]: E0213 07:52:58.523986 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:52:59.525010 kubelet[1834]: E0213 07:52:59.524927 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:00.525300 kubelet[1834]: E0213 07:53:00.525221 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:01.526417 kubelet[1834]: E0213 07:53:01.526342 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:02.527187 kubelet[1834]: E0213 07:53:02.527069 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:03.527741 kubelet[1834]: E0213 07:53:03.527663 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:03.600706 systemd[1]: Started sshd@17-147.75.90.7:22-128.199.168.119:53816.service.
Feb 13 07:53:04.528996 kubelet[1834]: E0213 07:53:04.528889 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:04.642809 sshd[3119]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=128.199.168.119 user=root
Feb 13 07:53:05.530019 kubelet[1834]: E0213 07:53:05.529914 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:06.530417 kubelet[1834]: E0213 07:53:06.530339 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:06.574371 sshd[3119]: Failed password for root from 128.199.168.119 port 53816 ssh2
Feb 13 07:53:07.530861 kubelet[1834]: E0213 07:53:07.530783 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:07.909870 sshd[3119]: Received disconnect from 128.199.168.119 port 53816:11: Bye Bye [preauth]
Feb 13 07:53:07.909870 sshd[3119]: Disconnected from authenticating user root 128.199.168.119 port 53816 [preauth]
Feb 13 07:53:07.912412 systemd[1]: sshd@17-147.75.90.7:22-128.199.168.119:53816.service: Deactivated successfully.
Feb 13 07:53:08.531517 kubelet[1834]: E0213 07:53:08.531409 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:09.532431 kubelet[1834]: E0213 07:53:09.532361 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:10.532804 kubelet[1834]: E0213 07:53:10.532692 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:11.533744 kubelet[1834]: E0213 07:53:11.533632 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:12.534850 kubelet[1834]: E0213 07:53:12.534739 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:12.882520 systemd[1]: Started sshd@18-147.75.90.7:22-103.147.242.96:54488.service.
Feb 13 07:53:13.535309 kubelet[1834]: E0213 07:53:13.535281 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:14.432323 sshd[3123]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.147.242.96 user=root
Feb 13 07:53:14.536510 kubelet[1834]: E0213 07:53:14.536440 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:15.401375 kubelet[1834]: E0213 07:53:15.401268 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:15.537828 kubelet[1834]: E0213 07:53:15.537714 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:15.656317 systemd[1]: Started sshd@19-147.75.90.7:22-184.168.31.172:39966.service.
Feb 13 07:53:15.862210 sshd[3128]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=184.168.31.172 user=root
Feb 13 07:53:16.404021 sshd[3123]: Failed password for root from 103.147.242.96 port 54488 ssh2
Feb 13 07:53:16.538062 kubelet[1834]: E0213 07:53:16.537985 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:17.538858 kubelet[1834]: E0213 07:53:17.538748 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:17.799836 sshd[3123]: Received disconnect from 103.147.242.96 port 54488:11: Bye Bye [preauth]
Feb 13 07:53:17.799836 sshd[3123]: Disconnected from authenticating user root 103.147.242.96 port 54488 [preauth]
Feb 13 07:53:17.802277 systemd[1]: sshd@18-147.75.90.7:22-103.147.242.96:54488.service: Deactivated successfully.
Feb 13 07:53:17.968959 sshd[3128]: Failed password for root from 184.168.31.172 port 39966 ssh2
Feb 13 07:53:18.539127 kubelet[1834]: E0213 07:53:18.539049 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:18.961952 sshd[3128]: Received disconnect from 184.168.31.172 port 39966:11: Bye Bye [preauth]
Feb 13 07:53:18.961952 sshd[3128]: Disconnected from authenticating user root 184.168.31.172 port 39966 [preauth]
Feb 13 07:53:18.964429 systemd[1]: sshd@19-147.75.90.7:22-184.168.31.172:39966.service: Deactivated successfully.
Feb 13 07:53:19.539408 kubelet[1834]: E0213 07:53:19.539294 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:20.540474 kubelet[1834]: E0213 07:53:20.540395 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:21.540666 kubelet[1834]: E0213 07:53:21.540551 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:22.540837 kubelet[1834]: E0213 07:53:22.540729 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:23.541441 kubelet[1834]: E0213 07:53:23.541390 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:24.541691 kubelet[1834]: E0213 07:53:24.541606 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:25.542395 kubelet[1834]: E0213 07:53:25.542266 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:26.543330 kubelet[1834]: E0213 07:53:26.543208 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:27.543484 kubelet[1834]: E0213 07:53:27.543371 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:28.544004 kubelet[1834]: E0213 07:53:28.543894 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:29.544916 kubelet[1834]: E0213 07:53:29.544857 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:30.546133 kubelet[1834]: E0213 07:53:30.546025 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:31.546286 kubelet[1834]: E0213 07:53:31.546211 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:32.546933 kubelet[1834]: E0213 07:53:32.546858 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:33.548054 kubelet[1834]: E0213 07:53:33.547973 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:34.548975 kubelet[1834]: E0213 07:53:34.548901 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:34.968084 kubelet[1834]: I0213 07:53:34.968039 1834 topology_manager.go:215] "Topology Admit Handler" podUID="5f7fbbb4-c068-42c7-a6a3-7066ee86b267" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 13 07:53:34.971260 systemd[1]: Created slice kubepods-besteffort-pod5f7fbbb4_c068_42c7_a6a3_7066ee86b267.slice.
Feb 13 07:53:34.992718 kubelet[1834]: I0213 07:53:34.992676 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5f7fbbb4-c068-42c7-a6a3-7066ee86b267-data\") pod \"nfs-server-provisioner-0\" (UID: \"5f7fbbb4-c068-42c7-a6a3-7066ee86b267\") " pod="default/nfs-server-provisioner-0"
Feb 13 07:53:34.992718 kubelet[1834]: I0213 07:53:34.992717 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxlh9\" (UniqueName: \"kubernetes.io/projected/5f7fbbb4-c068-42c7-a6a3-7066ee86b267-kube-api-access-vxlh9\") pod \"nfs-server-provisioner-0\" (UID: \"5f7fbbb4-c068-42c7-a6a3-7066ee86b267\") " pod="default/nfs-server-provisioner-0"
Feb 13 07:53:35.274864 env[1469]: time="2024-02-13T07:53:35.274614243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5f7fbbb4-c068-42c7-a6a3-7066ee86b267,Namespace:default,Attempt:0,}"
Feb 13 07:53:35.297771 systemd-networkd[1313]: lxcfe3c6d968c0b: Link UP
Feb 13 07:53:35.316682 kernel: eth0: renamed from tmp47c8e
Feb 13 07:53:35.345018 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 13 07:53:35.345219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfe3c6d968c0b: link becomes ready
Feb 13 07:53:35.345282 systemd-networkd[1313]: lxcfe3c6d968c0b: Gained carrier
Feb 13 07:53:35.400837 kubelet[1834]: E0213 07:53:35.400734 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:35.549852 kubelet[1834]: E0213 07:53:35.549804 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:35.594390 env[1469]: time="2024-02-13T07:53:35.594312410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 07:53:35.594390 env[1469]: time="2024-02-13T07:53:35.594334736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 07:53:35.594390 env[1469]: time="2024-02-13T07:53:35.594341960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 07:53:35.594516 env[1469]: time="2024-02-13T07:53:35.594401405Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47c8ef506b9eb5f14f34354c12b3a217226b28f53b3c06e280fda34c303a7dc5 pid=3198 runtime=io.containerd.runc.v2
Feb 13 07:53:35.600267 systemd[1]: Started cri-containerd-47c8ef506b9eb5f14f34354c12b3a217226b28f53b3c06e280fda34c303a7dc5.scope.
Feb 13 07:53:35.622118 env[1469]: time="2024-02-13T07:53:35.622091328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5f7fbbb4-c068-42c7-a6a3-7066ee86b267,Namespace:default,Attempt:0,} returns sandbox id \"47c8ef506b9eb5f14f34354c12b3a217226b28f53b3c06e280fda34c303a7dc5\""
Feb 13 07:53:36.550951 kubelet[1834]: E0213 07:53:36.550837 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:36.897879 systemd-networkd[1313]: lxcfe3c6d968c0b: Gained IPv6LL
Feb 13 07:53:37.552010 kubelet[1834]: E0213 07:53:37.551936 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:38.552706 kubelet[1834]: E0213 07:53:38.552597 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:39.552938 kubelet[1834]: E0213 07:53:39.552857 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:53:40.554064 kubelet[1834]: E0213 07:53:40.553986 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:41.554404 kubelet[1834]: E0213 07:53:41.554291 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:42.554522 kubelet[1834]: E0213 07:53:42.554443 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:43.555373 kubelet[1834]: E0213 07:53:43.555294 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:44.556154 kubelet[1834]: E0213 07:53:44.556053 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:45.556933 kubelet[1834]: E0213 07:53:45.556820 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:46.557985 kubelet[1834]: E0213 07:53:46.557869 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:47.558457 kubelet[1834]: E0213 07:53:47.558351 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:48.559507 kubelet[1834]: E0213 07:53:48.559398 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:49.559921 kubelet[1834]: E0213 07:53:49.559816 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:50.560809 kubelet[1834]: E0213 07:53:50.560693 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
07:53:51.561004 kubelet[1834]: E0213 07:53:51.560895 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:52.449099 systemd[1]: Started sshd@20-147.75.90.7:22-129.226.4.248:50460.service. Feb 13 07:53:52.561370 kubelet[1834]: E0213 07:53:52.561262 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:53.492729 sshd[3234]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=129.226.4.248 user=root Feb 13 07:53:53.562138 kubelet[1834]: E0213 07:53:53.562018 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:54.562351 kubelet[1834]: E0213 07:53:54.562246 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:55.401045 kubelet[1834]: E0213 07:53:55.400932 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:55.484670 sshd[3234]: Failed password for root from 129.226.4.248 port 50460 ssh2 Feb 13 07:53:55.563430 kubelet[1834]: E0213 07:53:55.563316 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:56.564572 kubelet[1834]: E0213 07:53:56.564439 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:56.759516 sshd[3234]: Received disconnect from 129.226.4.248 port 50460:11: Bye Bye [preauth] Feb 13 07:53:56.759516 sshd[3234]: Disconnected from authenticating user root 129.226.4.248 port 50460 [preauth] Feb 13 07:53:56.762058 systemd[1]: sshd@20-147.75.90.7:22-129.226.4.248:50460.service: Deactivated successfully. 
Feb 13 07:53:57.565440 kubelet[1834]: E0213 07:53:57.565334 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:58.566137 kubelet[1834]: E0213 07:53:58.566018 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:53:59.566937 kubelet[1834]: E0213 07:53:59.566828 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:00.567680 kubelet[1834]: E0213 07:54:00.567543 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:01.567928 kubelet[1834]: E0213 07:54:01.567805 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:02.568981 kubelet[1834]: E0213 07:54:02.568863 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:03.569625 kubelet[1834]: E0213 07:54:03.569586 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:04.569923 kubelet[1834]: E0213 07:54:04.569814 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:05.570727 kubelet[1834]: E0213 07:54:05.570656 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:06.571604 kubelet[1834]: E0213 07:54:06.571488 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:07.572717 kubelet[1834]: E0213 07:54:07.572601 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
07:54:08.112524 systemd[1]: Started sshd@21-147.75.90.7:22-128.199.168.119:44034.service. Feb 13 07:54:08.573603 kubelet[1834]: E0213 07:54:08.573536 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:09.157853 sshd[3238]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=128.199.168.119 user=root Feb 13 07:54:09.574532 kubelet[1834]: E0213 07:54:09.574316 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:10.574688 kubelet[1834]: E0213 07:54:10.574584 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:11.345533 sshd[3238]: Failed password for root from 128.199.168.119 port 44034 ssh2 Feb 13 07:54:11.575675 kubelet[1834]: E0213 07:54:11.575540 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:11.754231 systemd[1]: Started sshd@22-147.75.90.7:22-184.168.31.172:58554.service. Feb 13 07:54:11.955704 sshd[3241]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=184.168.31.172 user=root Feb 13 07:54:12.425445 sshd[3238]: Received disconnect from 128.199.168.119 port 44034:11: Bye Bye [preauth] Feb 13 07:54:12.425445 sshd[3238]: Disconnected from authenticating user root 128.199.168.119 port 44034 [preauth] Feb 13 07:54:12.427926 systemd[1]: sshd@21-147.75.90.7:22-128.199.168.119:44034.service: Deactivated successfully. 
Feb 13 07:54:12.576349 kubelet[1834]: E0213 07:54:12.576241 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:13.577235 kubelet[1834]: E0213 07:54:13.577126 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:13.751708 sshd[3241]: Failed password for root from 184.168.31.172 port 58554 ssh2 Feb 13 07:54:14.578345 kubelet[1834]: E0213 07:54:14.578236 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:15.054231 sshd[3241]: Received disconnect from 184.168.31.172 port 58554:11: Bye Bye [preauth] Feb 13 07:54:15.054231 sshd[3241]: Disconnected from authenticating user root 184.168.31.172 port 58554 [preauth] Feb 13 07:54:15.056717 systemd[1]: sshd@22-147.75.90.7:22-184.168.31.172:58554.service: Deactivated successfully. Feb 13 07:54:15.400951 kubelet[1834]: E0213 07:54:15.400842 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:15.579128 kubelet[1834]: E0213 07:54:15.579014 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:16.579536 kubelet[1834]: E0213 07:54:16.579425 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:17.579792 kubelet[1834]: E0213 07:54:17.579682 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:18.580921 kubelet[1834]: E0213 07:54:18.580813 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:19.581909 kubelet[1834]: E0213 07:54:19.581797 1834 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:20.582807 kubelet[1834]: E0213 07:54:20.582699 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:21.583916 kubelet[1834]: E0213 07:54:21.583804 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:22.584597 kubelet[1834]: E0213 07:54:22.584461 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:23.584845 kubelet[1834]: E0213 07:54:23.584734 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:23.826904 systemd[1]: Started sshd@23-147.75.90.7:22-103.147.242.96:35780.service. Feb 13 07:54:24.585899 kubelet[1834]: E0213 07:54:24.585822 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:25.034848 sshd[3251]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.147.242.96 user=root Feb 13 07:54:25.586295 kubelet[1834]: E0213 07:54:25.586227 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:26.587408 kubelet[1834]: E0213 07:54:26.587217 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:27.418227 sshd[3251]: Failed password for root from 103.147.242.96 port 35780 ssh2 Feb 13 07:54:27.588277 kubelet[1834]: E0213 07:54:27.588169 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:28.334047 sshd[3251]: Received disconnect from 103.147.242.96 port 35780:11: Bye Bye [preauth] Feb 13 07:54:28.334047 sshd[3251]: 
Disconnected from authenticating user root 103.147.242.96 port 35780 [preauth] Feb 13 07:54:28.336094 systemd[1]: sshd@23-147.75.90.7:22-103.147.242.96:35780.service: Deactivated successfully. Feb 13 07:54:28.588828 kubelet[1834]: E0213 07:54:28.588604 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:29.589698 kubelet[1834]: E0213 07:54:29.589623 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:30.590835 kubelet[1834]: E0213 07:54:30.590712 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:31.591125 kubelet[1834]: E0213 07:54:31.591047 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:32.591434 kubelet[1834]: E0213 07:54:32.591359 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:33.592081 kubelet[1834]: E0213 07:54:33.592009 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:34.593338 kubelet[1834]: E0213 07:54:34.593223 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:35.401012 kubelet[1834]: E0213 07:54:35.400891 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:35.593911 kubelet[1834]: E0213 07:54:35.593804 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:36.594784 kubelet[1834]: E0213 07:54:36.594702 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Feb 13 07:54:37.595469 kubelet[1834]: E0213 07:54:37.595362 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:38.596340 kubelet[1834]: E0213 07:54:38.596258 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:39.597120 kubelet[1834]: E0213 07:54:39.597033 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:40.598065 kubelet[1834]: E0213 07:54:40.597993 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:41.599239 kubelet[1834]: E0213 07:54:41.599170 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:42.600513 kubelet[1834]: E0213 07:54:42.600446 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:43.601623 kubelet[1834]: E0213 07:54:43.601504 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:44.602922 kubelet[1834]: E0213 07:54:44.602849 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:45.604080 kubelet[1834]: E0213 07:54:45.604006 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:46.604310 kubelet[1834]: E0213 07:54:46.604241 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:47.604930 kubelet[1834]: E0213 07:54:47.604853 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
07:54:48.605713 kubelet[1834]: E0213 07:54:48.605638 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:49.606544 kubelet[1834]: E0213 07:54:49.606470 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:50.607014 kubelet[1834]: E0213 07:54:50.606892 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:51.607981 kubelet[1834]: E0213 07:54:51.607864 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:52.608651 kubelet[1834]: E0213 07:54:52.608574 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:53.609595 kubelet[1834]: E0213 07:54:53.609454 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:54.610306 kubelet[1834]: E0213 07:54:54.610233 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:55.401173 kubelet[1834]: E0213 07:54:55.401051 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:55.610900 kubelet[1834]: E0213 07:54:55.610829 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:56.612280 kubelet[1834]: E0213 07:54:56.612071 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:57.612745 kubelet[1834]: E0213 07:54:57.612673 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
07:54:58.613752 kubelet[1834]: E0213 07:54:58.613633 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:54:59.614911 kubelet[1834]: E0213 07:54:59.614790 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:00.615325 kubelet[1834]: E0213 07:55:00.615204 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:01.615825 kubelet[1834]: E0213 07:55:01.615705 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:02.616675 kubelet[1834]: E0213 07:55:02.616534 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:03.617764 kubelet[1834]: E0213 07:55:03.617651 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:04.619022 kubelet[1834]: E0213 07:55:04.618905 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:05.620026 kubelet[1834]: E0213 07:55:05.619912 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:06.620829 kubelet[1834]: E0213 07:55:06.620715 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:07.621167 kubelet[1834]: E0213 07:55:07.621096 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:08.118382 systemd[1]: Started sshd@24-147.75.90.7:22-184.168.31.172:48564.service. 
Feb 13 07:55:08.323408 sshd[3259]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=184.168.31.172 user=root Feb 13 07:55:08.621768 kubelet[1834]: E0213 07:55:08.621651 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:09.622844 kubelet[1834]: E0213 07:55:09.622772 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:09.944160 sshd[3259]: Failed password for root from 184.168.31.172 port 48564 ssh2 Feb 13 07:55:10.624057 kubelet[1834]: E0213 07:55:10.623940 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:11.424657 sshd[3259]: Received disconnect from 184.168.31.172 port 48564:11: Bye Bye [preauth] Feb 13 07:55:11.424657 sshd[3259]: Disconnected from authenticating user root 184.168.31.172 port 48564 [preauth] Feb 13 07:55:11.427214 systemd[1]: sshd@24-147.75.90.7:22-184.168.31.172:48564.service: Deactivated successfully. Feb 13 07:55:11.624526 kubelet[1834]: E0213 07:55:11.624425 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:12.359222 systemd[1]: Started sshd@25-147.75.90.7:22-128.199.168.119:34262.service. 
Feb 13 07:55:12.625686 kubelet[1834]: E0213 07:55:12.625556 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:13.368773 sshd[3263]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=128.199.168.119 user=root Feb 13 07:55:13.626052 kubelet[1834]: E0213 07:55:13.625825 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:14.627011 kubelet[1834]: E0213 07:55:14.626883 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:15.401007 kubelet[1834]: E0213 07:55:15.400887 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:15.627887 kubelet[1834]: E0213 07:55:15.627799 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:15.676068 sshd[3263]: Failed password for root from 128.199.168.119 port 34262 ssh2 Feb 13 07:55:16.628083 kubelet[1834]: E0213 07:55:16.628003 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:16.629208 sshd[3263]: Received disconnect from 128.199.168.119 port 34262:11: Bye Bye [preauth] Feb 13 07:55:16.629208 sshd[3263]: Disconnected from authenticating user root 128.199.168.119 port 34262 [preauth] Feb 13 07:55:16.631703 systemd[1]: sshd@25-147.75.90.7:22-128.199.168.119:34262.service: Deactivated successfully. 
Feb 13 07:55:17.628511 kubelet[1834]: E0213 07:55:17.628430 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:18.629582 kubelet[1834]: E0213 07:55:18.629444 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:19.630578 kubelet[1834]: E0213 07:55:19.630448 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:20.631782 kubelet[1834]: E0213 07:55:20.631663 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:21.632629 kubelet[1834]: E0213 07:55:21.632545 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:22.633709 kubelet[1834]: E0213 07:55:22.633631 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:23.379653 systemd[1]: Started sshd@26-147.75.90.7:22-129.226.4.248:55924.service. 
Feb 13 07:55:23.634000 kubelet[1834]: E0213 07:55:23.633791 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:24.435381 sshd[3274]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=129.226.4.248 user=root Feb 13 07:55:24.634439 kubelet[1834]: E0213 07:55:24.634321 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:25.635270 kubelet[1834]: E0213 07:55:25.635163 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:26.587690 sshd[3274]: Failed password for root from 129.226.4.248 port 55924 ssh2 Feb 13 07:55:26.635896 kubelet[1834]: E0213 07:55:26.635789 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:27.636729 kubelet[1834]: E0213 07:55:27.636614 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:55:27.705182 sshd[3274]: Received disconnect from 129.226.4.248 port 55924:11: Bye Bye [preauth] Feb 13 07:55:27.705182 sshd[3274]: Disconnected from authenticating user root 129.226.4.248 port 55924 [preauth] Feb 13 07:55:27.707772 systemd[1]: sshd@26-147.75.90.7:22-129.226.4.248:55924.service: Deactivated successfully. 
Feb 13 07:55:28.637515 kubelet[1834]: E0213 07:55:28.637405 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:29.638503 kubelet[1834]: E0213 07:55:29.638388 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:30.639594 kubelet[1834]: E0213 07:55:30.639461 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:31.640438 kubelet[1834]: E0213 07:55:31.640318 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:32.641189 kubelet[1834]: E0213 07:55:32.641067 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:33.642184 kubelet[1834]: E0213 07:55:33.642071 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:34.642487 kubelet[1834]: E0213 07:55:34.642374 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:35.401174 kubelet[1834]: E0213 07:55:35.401067 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:35.643406 kubelet[1834]: E0213 07:55:35.643295 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:36.644415 kubelet[1834]: E0213 07:55:36.644295 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:37.644604 kubelet[1834]: E0213 07:55:37.644501 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:38.645536 kubelet[1834]: E0213 07:55:38.645417 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:39.646299 kubelet[1834]: E0213 07:55:39.646181 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:39.725462 systemd[1]: Started sshd@27-147.75.90.7:22-103.147.242.96:45316.service.
Feb 13 07:55:40.646984 kubelet[1834]: E0213 07:55:40.646865 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:40.934207 sshd[3279]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.147.242.96 user=root
Feb 13 07:55:41.647391 kubelet[1834]: E0213 07:55:41.647280 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:42.648620 kubelet[1834]: E0213 07:55:42.648511 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:43.282278 sshd[3279]: Failed password for root from 103.147.242.96 port 45316 ssh2
Feb 13 07:55:43.649667 kubelet[1834]: E0213 07:55:43.649536 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:44.234890 sshd[3279]: Received disconnect from 103.147.242.96 port 45316:11: Bye Bye [preauth]
Feb 13 07:55:44.234890 sshd[3279]: Disconnected from authenticating user root 103.147.242.96 port 45316 [preauth]
Feb 13 07:55:44.237378 systemd[1]: sshd@27-147.75.90.7:22-103.147.242.96:45316.service: Deactivated successfully.
Feb 13 07:55:44.650809 kubelet[1834]: E0213 07:55:44.650735 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:45.651947 kubelet[1834]: E0213 07:55:45.651813 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:46.652217 kubelet[1834]: E0213 07:55:46.652140 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:47.652430 kubelet[1834]: E0213 07:55:47.652319 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:48.652881 kubelet[1834]: E0213 07:55:48.652769 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:49.653121 kubelet[1834]: E0213 07:55:49.653005 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:50.653274 kubelet[1834]: E0213 07:55:50.653165 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:51.653599 kubelet[1834]: E0213 07:55:51.653489 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:52.654428 kubelet[1834]: E0213 07:55:52.654311 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:53.655219 kubelet[1834]: E0213 07:55:53.655103 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:54.656016 kubelet[1834]: E0213 07:55:54.655890 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:55.373545 systemd[1]: Started sshd@28-147.75.90.7:22-188.18.49.50:40644.service.
Feb 13 07:55:55.400864 kubelet[1834]: E0213 07:55:55.400834 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:55.656784 kubelet[1834]: E0213 07:55:55.656592 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:56.656960 kubelet[1834]: E0213 07:55:56.656799 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:56.712715 sshd[3286]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.18.49.50 user=root
Feb 13 07:55:57.657833 kubelet[1834]: E0213 07:55:57.657713 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:57.921966 sshd[3286]: Failed password for root from 188.18.49.50 port 40644 ssh2
Feb 13 07:55:58.503278 sshd[3286]: Received disconnect from 188.18.49.50 port 40644:11: Bye Bye [preauth]
Feb 13 07:55:58.503278 sshd[3286]: Disconnected from authenticating user root 188.18.49.50 port 40644 [preauth]
Feb 13 07:55:58.505858 systemd[1]: sshd@28-147.75.90.7:22-188.18.49.50:40644.service: Deactivated successfully.
Feb 13 07:55:58.658077 kubelet[1834]: E0213 07:55:58.657959 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:55:59.659145 kubelet[1834]: E0213 07:55:59.659041 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:00.660302 kubelet[1834]: E0213 07:56:00.660187 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:01.660482 kubelet[1834]: E0213 07:56:01.660417 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:02.661321 kubelet[1834]: E0213 07:56:02.661253 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:03.662249 kubelet[1834]: E0213 07:56:03.662123 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:04.662849 kubelet[1834]: E0213 07:56:04.662737 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:05.664009 kubelet[1834]: E0213 07:56:05.663929 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:05.973038 systemd[1]: Started sshd@29-147.75.90.7:22-184.168.31.172:39624.service.
Feb 13 07:56:06.177689 sshd[3290]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=184.168.31.172 user=root
Feb 13 07:56:06.664289 kubelet[1834]: E0213 07:56:06.664177 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:07.664853 kubelet[1834]: E0213 07:56:07.664742 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:08.094233 sshd[3290]: Failed password for root from 184.168.31.172 port 39624 ssh2
Feb 13 07:56:08.665095 kubelet[1834]: E0213 07:56:08.664980 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:09.276603 sshd[3290]: Received disconnect from 184.168.31.172 port 39624:11: Bye Bye [preauth]
Feb 13 07:56:09.276603 sshd[3290]: Disconnected from authenticating user root 184.168.31.172 port 39624 [preauth]
Feb 13 07:56:09.279179 systemd[1]: sshd@29-147.75.90.7:22-184.168.31.172:39624.service: Deactivated successfully.
Feb 13 07:56:09.665860 kubelet[1834]: E0213 07:56:09.665763 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:10.666088 kubelet[1834]: E0213 07:56:10.666023 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:11.666417 kubelet[1834]: E0213 07:56:11.666306 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:12.666828 kubelet[1834]: E0213 07:56:12.666713 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:13.667741 kubelet[1834]: E0213 07:56:13.667627 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:14.668230 kubelet[1834]: E0213 07:56:14.668117 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:15.155554 systemd[1]: Started sshd@30-147.75.90.7:22-128.199.168.119:52712.service.
Feb 13 07:56:15.400740 kubelet[1834]: E0213 07:56:15.400618 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:15.668682 kubelet[1834]: E0213 07:56:15.668540 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:16.125311 sshd[3294]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=128.199.168.119 user=root
Feb 13 07:56:16.669745 kubelet[1834]: E0213 07:56:16.669625 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:17.670694 kubelet[1834]: E0213 07:56:17.670546 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:18.413470 sshd[3294]: Failed password for root from 128.199.168.119 port 52712 ssh2
Feb 13 07:56:18.672125 kubelet[1834]: E0213 07:56:18.671899 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:19.377854 sshd[3294]: Received disconnect from 128.199.168.119 port 52712:11: Bye Bye [preauth]
Feb 13 07:56:19.377854 sshd[3294]: Disconnected from authenticating user root 128.199.168.119 port 52712 [preauth]
Feb 13 07:56:19.380545 systemd[1]: sshd@30-147.75.90.7:22-128.199.168.119:52712.service: Deactivated successfully.
Feb 13 07:56:19.672833 kubelet[1834]: E0213 07:56:19.672612 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:20.672857 kubelet[1834]: E0213 07:56:20.672771 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:21.673410 kubelet[1834]: E0213 07:56:21.673334 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:22.673624 kubelet[1834]: E0213 07:56:22.673496 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:23.674676 kubelet[1834]: E0213 07:56:23.674574 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:24.675602 kubelet[1834]: E0213 07:56:24.675486 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:25.676171 kubelet[1834]: E0213 07:56:25.676106 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:26.676739 kubelet[1834]: E0213 07:56:26.676683 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:27.677927 kubelet[1834]: E0213 07:56:27.677802 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:28.678926 kubelet[1834]: E0213 07:56:28.678812 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:29.679848 kubelet[1834]: E0213 07:56:29.679785 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:30.680473 kubelet[1834]: E0213 07:56:30.680358 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:31.681750 kubelet[1834]: E0213 07:56:31.681635 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:32.682017 kubelet[1834]: E0213 07:56:32.681895 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:33.682188 kubelet[1834]: E0213 07:56:33.682076 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:34.683008 kubelet[1834]: E0213 07:56:34.682894 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:35.401399 kubelet[1834]: E0213 07:56:35.401289 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:35.683415 kubelet[1834]: E0213 07:56:35.683183 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:36.683668 kubelet[1834]: E0213 07:56:36.683535 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:56:36.772775 env[1469]: time="2024-02-13T07:56:36.772728677Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 07:56:36.775631 env[1469]: time="2024-02-13T07:56:36.775616924Z" level=info msg="StopContainer for \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\" with timeout 2 (s)"
Feb 13 07:56:36.775854 env[1469]: time="2024-02-13T07:56:36.775826799Z" level=info msg="Stop container \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\" with signal terminated"
Feb 13 07:56:36.778957 systemd-networkd[1313]: lxc_health: Link DOWN
Feb 13 07:56:36.778961 systemd-networkd[1313]: lxc_health: Lost carrier
Feb 13 07:56:36.843909 systemd[1]: cri-containerd-83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0.scope: Deactivated successfully.
Feb 13 07:56:36.844108 systemd[1]: cri-containerd-83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0.scope: Consumed 6.557s CPU time.
Feb 13 07:56:36.852385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0-rootfs.mount: Deactivated successfully.
Feb 13 07:56:36.854420 env[1469]: time="2024-02-13T07:56:36.854390626Z" level=info msg="shim disconnected" id=83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0
Feb 13 07:56:36.854496 env[1469]: time="2024-02-13T07:56:36.854422003Z" level=warning msg="cleaning up after shim disconnected" id=83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0 namespace=k8s.io
Feb 13 07:56:36.854496 env[1469]: time="2024-02-13T07:56:36.854428863Z" level=info msg="cleaning up dead shim"
Feb 13 07:56:36.858459 env[1469]: time="2024-02-13T07:56:36.858440190Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:56:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3342 runtime=io.containerd.runc.v2\n"
Feb 13 07:56:36.859345 env[1469]: time="2024-02-13T07:56:36.859305026Z" level=info msg="StopContainer for \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\" returns successfully"
Feb 13 07:56:36.859704 env[1469]: time="2024-02-13T07:56:36.859655924Z" level=info msg="StopPodSandbox for \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\""
Feb 13 07:56:36.859704 env[1469]: time="2024-02-13T07:56:36.859696808Z" level=info msg="Container to stop \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 07:56:36.859868 env[1469]: time="2024-02-13T07:56:36.859707094Z" level=info msg="Container to stop \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 07:56:36.859868 env[1469]: time="2024-02-13T07:56:36.859714870Z" level=info msg="Container to stop \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 07:56:36.859868 env[1469]: time="2024-02-13T07:56:36.859722216Z" level=info msg="Container to stop \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 07:56:36.859868 env[1469]: time="2024-02-13T07:56:36.859728674Z" level=info msg="Container to stop \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 07:56:36.860833 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79-shm.mount: Deactivated successfully.
Feb 13 07:56:36.863179 systemd[1]: cri-containerd-6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79.scope: Deactivated successfully.
Feb 13 07:56:36.877590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79-rootfs.mount: Deactivated successfully.
Feb 13 07:56:36.910649 env[1469]: time="2024-02-13T07:56:36.910488890Z" level=info msg="shim disconnected" id=6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79
Feb 13 07:56:36.910910 env[1469]: time="2024-02-13T07:56:36.910639488Z" level=warning msg="cleaning up after shim disconnected" id=6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79 namespace=k8s.io
Feb 13 07:56:36.910910 env[1469]: time="2024-02-13T07:56:36.910674694Z" level=info msg="cleaning up dead shim"
Feb 13 07:56:36.923103 env[1469]: time="2024-02-13T07:56:36.923054720Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:56:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3373 runtime=io.containerd.runc.v2\n"
Feb 13 07:56:36.923251 env[1469]: time="2024-02-13T07:56:36.923212465Z" level=info msg="TearDown network for sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" successfully"
Feb 13 07:56:36.923251 env[1469]: time="2024-02-13T07:56:36.923225649Z" level=info msg="StopPodSandbox for \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" returns successfully"
Feb 13 07:56:37.014625 kubelet[1834]: I0213 07:56:37.014383 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-etc-cni-netd\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.014625 kubelet[1834]: I0213 07:56:37.014377 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.014625 kubelet[1834]: I0213 07:56:37.014510 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79b4f201-1a80-417f-a771-7e4c634129c6-hubble-tls\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.014625 kubelet[1834]: I0213 07:56:37.014616 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlqhw\" (UniqueName: \"kubernetes.io/projected/79b4f201-1a80-417f-a771-7e4c634129c6-kube-api-access-xlqhw\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.015431 kubelet[1834]: I0213 07:56:37.014718 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-lib-modules\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.015431 kubelet[1834]: I0213 07:56:37.014822 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-host-proc-sys-kernel\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.015431 kubelet[1834]: I0213 07:56:37.014871 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.015431 kubelet[1834]: I0213 07:56:37.014942 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79b4f201-1a80-417f-a771-7e4c634129c6-clustermesh-secrets\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.015431 kubelet[1834]: I0213 07:56:37.015001 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.016176 kubelet[1834]: I0213 07:56:37.015048 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-bpf-maps\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.016176 kubelet[1834]: I0213 07:56:37.015139 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.016176 kubelet[1834]: I0213 07:56:37.015245 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-xtables-lock\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.016176 kubelet[1834]: I0213 07:56:37.015347 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-run\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.016176 kubelet[1834]: I0213 07:56:37.015337 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.016176 kubelet[1834]: I0213 07:56:37.015442 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-hostproc\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.016999 kubelet[1834]: I0213 07:56:37.015443 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.016999 kubelet[1834]: I0213 07:56:37.015544 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-cgroup\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.016999 kubelet[1834]: I0213 07:56:37.015526 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-hostproc" (OuterVolumeSpecName: "hostproc") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.016999 kubelet[1834]: I0213 07:56:37.015679 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-config-path\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.016999 kubelet[1834]: I0213 07:56:37.015673 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.017652 kubelet[1834]: I0213 07:56:37.015742 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cni-path\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.017652 kubelet[1834]: I0213 07:56:37.015799 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-host-proc-sys-net\") pod \"79b4f201-1a80-417f-a771-7e4c634129c6\" (UID: \"79b4f201-1a80-417f-a771-7e4c634129c6\") "
Feb 13 07:56:37.017652 kubelet[1834]: I0213 07:56:37.015838 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cni-path" (OuterVolumeSpecName: "cni-path") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.017652 kubelet[1834]: I0213 07:56:37.015843 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 07:56:37.017652 kubelet[1834]: I0213 07:56:37.015902 1834 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-etc-cni-netd\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.017652 kubelet[1834]: I0213 07:56:37.015944 1834 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-lib-modules\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.018367 kubelet[1834]: I0213 07:56:37.015978 1834 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-host-proc-sys-kernel\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.018367 kubelet[1834]: I0213 07:56:37.016010 1834 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-bpf-maps\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.018367 kubelet[1834]: I0213 07:56:37.016040 1834 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-xtables-lock\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.018367 kubelet[1834]: I0213 07:56:37.016069 1834 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-run\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.018367 kubelet[1834]: I0213 07:56:37.016097 1834 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-hostproc\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.018367 kubelet[1834]: I0213 07:56:37.016125 1834 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-cgroup\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.019855 kubelet[1834]: I0213 07:56:37.019842 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b4f201-1a80-417f-a771-7e4c634129c6-kube-api-access-xlqhw" (OuterVolumeSpecName: "kube-api-access-xlqhw") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "kube-api-access-xlqhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 07:56:37.019916 kubelet[1834]: I0213 07:56:37.019891 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b4f201-1a80-417f-a771-7e4c634129c6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 07:56:37.020154 kubelet[1834]: I0213 07:56:37.020144 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79b4f201-1a80-417f-a771-7e4c634129c6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 07:56:37.020238 kubelet[1834]: I0213 07:56:37.020230 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "79b4f201-1a80-417f-a771-7e4c634129c6" (UID: "79b4f201-1a80-417f-a771-7e4c634129c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 07:56:37.020670 systemd[1]: var-lib-kubelet-pods-79b4f201\x2d1a80\x2d417f\x2da771\x2d7e4c634129c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxlqhw.mount: Deactivated successfully.
Feb 13 07:56:37.020724 systemd[1]: var-lib-kubelet-pods-79b4f201\x2d1a80\x2d417f\x2da771\x2d7e4c634129c6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 07:56:37.116430 kubelet[1834]: I0213 07:56:37.116322 1834 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79b4f201-1a80-417f-a771-7e4c634129c6-hubble-tls\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.116430 kubelet[1834]: I0213 07:56:37.116404 1834 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xlqhw\" (UniqueName: \"kubernetes.io/projected/79b4f201-1a80-417f-a771-7e4c634129c6-kube-api-access-xlqhw\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.116430 kubelet[1834]: I0213 07:56:37.116440 1834 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79b4f201-1a80-417f-a771-7e4c634129c6-clustermesh-secrets\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.116975 kubelet[1834]: I0213 07:56:37.116473 1834 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79b4f201-1a80-417f-a771-7e4c634129c6-cilium-config-path\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.116975 kubelet[1834]: I0213 07:56:37.116505 1834 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-cni-path\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.116975 kubelet[1834]: I0213 07:56:37.116534 1834 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79b4f201-1a80-417f-a771-7e4c634129c6-host-proc-sys-net\") on node \"10.67.80.11\" DevicePath \"\""
Feb 13 07:56:37.501272 systemd[1]: Removed slice kubepods-burstable-pod79b4f201_1a80_417f_a771_7e4c634129c6.slice.
Feb 13 07:56:37.501325 systemd[1]: kubepods-burstable-pod79b4f201_1a80_417f_a771_7e4c634129c6.slice: Consumed 6.605s CPU time.
Feb 13 07:56:37.563687 kubelet[1834]: I0213 07:56:37.563602 1834 scope.go:117] "RemoveContainer" containerID="83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0"
Feb 13 07:56:37.566535 env[1469]: time="2024-02-13T07:56:37.566435944Z" level=info msg="RemoveContainer for \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\""
Feb 13 07:56:37.570220 env[1469]: time="2024-02-13T07:56:37.570104623Z" level=info msg="RemoveContainer for \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\" returns successfully"
Feb 13 07:56:37.570709 kubelet[1834]: I0213 07:56:37.570621 1834 scope.go:117] "RemoveContainer" containerID="225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a"
Feb 13 07:56:37.573302 env[1469]: time="2024-02-13T07:56:37.573193515Z" level=info msg="RemoveContainer for \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\""
Feb 13 07:56:37.576790 env[1469]: time="2024-02-13T07:56:37.576678689Z" level=info msg="RemoveContainer for \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\" returns successfully"
Feb 13 07:56:37.577208 kubelet[1834]: I0213 07:56:37.577120 1834 scope.go:117] "RemoveContainer" containerID="e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37"
Feb 13 07:56:37.581346 env[1469]: time="2024-02-13T07:56:37.581095972Z" level=info msg="RemoveContainer for \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\""
Feb 13 07:56:37.582790 env[1469]: time="2024-02-13T07:56:37.582777843Z" level=info msg="RemoveContainer for \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\" returns successfully"
Feb 13 07:56:37.582922 kubelet[1834]: I0213 07:56:37.582877 1834 scope.go:117] "RemoveContainer" containerID="89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0"
Feb 13 07:56:37.583367 env[1469]: time="2024-02-13T07:56:37.583354338Z" level=info msg="RemoveContainer for \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\""
Feb 13 07:56:37.584426 env[1469]: time="2024-02-13T07:56:37.584406147Z" level=info msg="RemoveContainer for \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\" returns successfully"
Feb 13 07:56:37.584528 kubelet[1834]: I0213 07:56:37.584517 1834 scope.go:117] "RemoveContainer" containerID="f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6"
Feb 13 07:56:37.585147 env[1469]: time="2024-02-13T07:56:37.585131335Z" level=info msg="RemoveContainer for \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\""
Feb 13 07:56:37.586029 env[1469]: time="2024-02-13T07:56:37.586018848Z" level=info msg="RemoveContainer for \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\" returns successfully"
Feb 13 07:56:37.586104 kubelet[1834]: I0213 07:56:37.586097 1834 scope.go:117] "RemoveContainer" containerID="83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0"
Feb 13 07:56:37.586308 env[1469]: time="2024-02-13T07:56:37.586254722Z" level=error msg="ContainerStatus for \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\": not found"
Feb 13 07:56:37.586410 kubelet[1834]: E0213 07:56:37.586405 1834 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container
\"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\": not found" containerID="83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0" Feb 13 07:56:37.586453 kubelet[1834]: I0213 07:56:37.586448 1834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0"} err="failed to get container status \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\": rpc error: code = NotFound desc = an error occurred when try to find container \"83e7e937c005119f70e35a1d34877bbcf55a7423ead01c806620de43370aaed0\": not found" Feb 13 07:56:37.586478 kubelet[1834]: I0213 07:56:37.586455 1834 scope.go:117] "RemoveContainer" containerID="225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a" Feb 13 07:56:37.586565 env[1469]: time="2024-02-13T07:56:37.586538182Z" level=error msg="ContainerStatus for \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\": not found" Feb 13 07:56:37.586727 kubelet[1834]: E0213 07:56:37.586677 1834 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\": not found" containerID="225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a" Feb 13 07:56:37.586727 kubelet[1834]: I0213 07:56:37.586724 1834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a"} err="failed to get container status \"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"225a56515b6a6f00cdec141c6ab039996d0a05d3e00fb385f9437da44f203c4a\": not found" Feb 13 07:56:37.586727 kubelet[1834]: I0213 07:56:37.586745 1834 scope.go:117] "RemoveContainer" containerID="e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37" Feb 13 07:56:37.586887 env[1469]: time="2024-02-13T07:56:37.586865157Z" level=error msg="ContainerStatus for \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\": not found" Feb 13 07:56:37.586955 kubelet[1834]: E0213 07:56:37.586949 1834 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\": not found" containerID="e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37" Feb 13 07:56:37.586996 kubelet[1834]: I0213 07:56:37.586963 1834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37"} err="failed to get container status \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\": rpc error: code = NotFound desc = an error occurred when try to find container \"e534ba72a970e2e6e5b5894b479782dd93b569064bf54117d2c9b72c5ac39c37\": not found" Feb 13 07:56:37.586996 kubelet[1834]: I0213 07:56:37.586968 1834 scope.go:117] "RemoveContainer" containerID="89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0" Feb 13 07:56:37.587091 env[1469]: time="2024-02-13T07:56:37.587068769Z" level=error msg="ContainerStatus for \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\": not found" Feb 13 07:56:37.587143 kubelet[1834]: E0213 07:56:37.587137 1834 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\": not found" containerID="89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0" Feb 13 07:56:37.587167 kubelet[1834]: I0213 07:56:37.587151 1834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0"} err="failed to get container status \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"89b012023f45fa0022b1765ae52247c62e526a82aafcfa83de14a1d600b1f8e0\": not found" Feb 13 07:56:37.587167 kubelet[1834]: I0213 07:56:37.587157 1834 scope.go:117] "RemoveContainer" containerID="f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6" Feb 13 07:56:37.587260 env[1469]: time="2024-02-13T07:56:37.587224169Z" level=error msg="ContainerStatus for \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\": not found" Feb 13 07:56:37.587313 kubelet[1834]: E0213 07:56:37.587309 1834 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\": not found" containerID="f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6" Feb 13 07:56:37.587356 kubelet[1834]: I0213 07:56:37.587335 1834 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6"} err="failed to get container status \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"f815d74957a3c99ca41d41ab1878aca0b75bfb3d777046faaad31b115a8da3c6\": not found" Feb 13 07:56:37.684505 kubelet[1834]: E0213 07:56:37.684392 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:37.769105 systemd[1]: var-lib-kubelet-pods-79b4f201\x2d1a80\x2d417f\x2da771\x2d7e4c634129c6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 07:56:38.685232 kubelet[1834]: E0213 07:56:38.685107 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:39.071537 kubelet[1834]: I0213 07:56:39.071354 1834 topology_manager.go:215] "Topology Admit Handler" podUID="aa64418d-4e87-477f-ace4-bafdf45906d6" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-wmwg8" Feb 13 07:56:39.071537 kubelet[1834]: E0213 07:56:39.071504 1834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79b4f201-1a80-417f-a771-7e4c634129c6" containerName="clean-cilium-state" Feb 13 07:56:39.071537 kubelet[1834]: E0213 07:56:39.071534 1834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79b4f201-1a80-417f-a771-7e4c634129c6" containerName="cilium-agent" Feb 13 07:56:39.071537 kubelet[1834]: E0213 07:56:39.071553 1834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79b4f201-1a80-417f-a771-7e4c634129c6" containerName="mount-cgroup" Feb 13 07:56:39.072227 kubelet[1834]: E0213 07:56:39.071598 1834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79b4f201-1a80-417f-a771-7e4c634129c6" containerName="apply-sysctl-overwrites" Feb 13 07:56:39.072227 
kubelet[1834]: E0213 07:56:39.071621 1834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79b4f201-1a80-417f-a771-7e4c634129c6" containerName="mount-bpf-fs" Feb 13 07:56:39.072227 kubelet[1834]: I0213 07:56:39.071666 1834 memory_manager.go:346] "RemoveStaleState removing state" podUID="79b4f201-1a80-417f-a771-7e4c634129c6" containerName="cilium-agent" Feb 13 07:56:39.078856 kubelet[1834]: I0213 07:56:39.078815 1834 topology_manager.go:215] "Topology Admit Handler" podUID="6137c99d-82ea-4fd4-a0b0-93789ace54df" podNamespace="kube-system" podName="cilium-7hck4" Feb 13 07:56:39.086287 systemd[1]: Created slice kubepods-besteffort-podaa64418d_4e87_477f_ace4_bafdf45906d6.slice. Feb 13 07:56:39.098031 systemd[1]: Created slice kubepods-burstable-pod6137c99d_82ea_4fd4_a0b0_93789ace54df.slice. Feb 13 07:56:39.131042 kubelet[1834]: I0213 07:56:39.130975 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa64418d-4e87-477f-ace4-bafdf45906d6-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-wmwg8\" (UID: \"aa64418d-4e87-477f-ace4-bafdf45906d6\") " pod="kube-system/cilium-operator-6bc8ccdb58-wmwg8" Feb 13 07:56:39.131366 kubelet[1834]: I0213 07:56:39.131083 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-bpf-maps\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.131366 kubelet[1834]: I0213 07:56:39.131242 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-cgroup\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.131685 
kubelet[1834]: I0213 07:56:39.131402 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6137c99d-82ea-4fd4-a0b0-93789ace54df-hubble-tls\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.131685 kubelet[1834]: I0213 07:56:39.131552 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-xtables-lock\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.131685 kubelet[1834]: I0213 07:56:39.131678 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cshj\" (UniqueName: \"kubernetes.io/projected/6137c99d-82ea-4fd4-a0b0-93789ace54df-kube-api-access-6cshj\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.132100 kubelet[1834]: I0213 07:56:39.131758 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8bm\" (UniqueName: \"kubernetes.io/projected/aa64418d-4e87-477f-ace4-bafdf45906d6-kube-api-access-vd8bm\") pod \"cilium-operator-6bc8ccdb58-wmwg8\" (UID: \"aa64418d-4e87-477f-ace4-bafdf45906d6\") " pod="kube-system/cilium-operator-6bc8ccdb58-wmwg8" Feb 13 07:56:39.132100 kubelet[1834]: I0213 07:56:39.131911 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-run\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.132100 kubelet[1834]: I0213 07:56:39.132008 1834 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cni-path\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.132100 kubelet[1834]: I0213 07:56:39.132076 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-config-path\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.132643 kubelet[1834]: I0213 07:56:39.132139 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-ipsec-secrets\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.132643 kubelet[1834]: I0213 07:56:39.132284 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-host-proc-sys-net\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.132643 kubelet[1834]: I0213 07:56:39.132414 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-hostproc\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.132643 kubelet[1834]: I0213 07:56:39.132502 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-etc-cni-netd\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.132643 kubelet[1834]: I0213 07:56:39.132576 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-lib-modules\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.133236 kubelet[1834]: I0213 07:56:39.132701 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-host-proc-sys-kernel\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.133236 kubelet[1834]: I0213 07:56:39.132809 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6137c99d-82ea-4fd4-a0b0-93789ace54df-clustermesh-secrets\") pod \"cilium-7hck4\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " pod="kube-system/cilium-7hck4" Feb 13 07:56:39.226932 kubelet[1834]: E0213 07:56:39.226872 1834 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-6cshj lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-7hck4" podUID="6137c99d-82ea-4fd4-a0b0-93789ace54df" Feb 13 07:56:39.393065 env[1469]: time="2024-02-13T07:56:39.392937395Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-wmwg8,Uid:aa64418d-4e87-477f-ace4-bafdf45906d6,Namespace:kube-system,Attempt:0,}" Feb 13 07:56:39.408698 env[1469]: time="2024-02-13T07:56:39.408611437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:56:39.408698 env[1469]: time="2024-02-13T07:56:39.408632586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:56:39.408698 env[1469]: time="2024-02-13T07:56:39.408639457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:56:39.408791 env[1469]: time="2024-02-13T07:56:39.408727271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2145ea94fae3f6df1ec4eaf23c4e057e87439e86434cb75ca56c648b54e73caf pid=3400 runtime=io.containerd.runc.v2 Feb 13 07:56:39.414423 systemd[1]: Started cri-containerd-2145ea94fae3f6df1ec4eaf23c4e057e87439e86434cb75ca56c648b54e73caf.scope. 
Feb 13 07:56:39.439167 env[1469]: time="2024-02-13T07:56:39.439141001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-wmwg8,Uid:aa64418d-4e87-477f-ace4-bafdf45906d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2145ea94fae3f6df1ec4eaf23c4e057e87439e86434cb75ca56c648b54e73caf\"" Feb 13 07:56:39.501323 kubelet[1834]: I0213 07:56:39.501220 1834 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="79b4f201-1a80-417f-a771-7e4c634129c6" path="/var/lib/kubelet/pods/79b4f201-1a80-417f-a771-7e4c634129c6/volumes" Feb 13 07:56:39.638846 kubelet[1834]: I0213 07:56:39.638732 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-host-proc-sys-net\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.638846 kubelet[1834]: I0213 07:56:39.638827 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-lib-modules\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.639380 kubelet[1834]: I0213 07:56:39.638885 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cni-path\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.639380 kubelet[1834]: I0213 07:56:39.638920 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.639380 kubelet[1834]: I0213 07:56:39.638964 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.639380 kubelet[1834]: I0213 07:56:39.638956 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-ipsec-secrets\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.639380 kubelet[1834]: I0213 07:56:39.639018 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cni-path" (OuterVolumeSpecName: "cni-path") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.640204 kubelet[1834]: I0213 07:56:39.639100 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-cgroup\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.640204 kubelet[1834]: I0213 07:56:39.639177 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6137c99d-82ea-4fd4-a0b0-93789ace54df-hubble-tls\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.640204 kubelet[1834]: I0213 07:56:39.639157 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.640204 kubelet[1834]: I0213 07:56:39.639234 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-hostproc\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.640204 kubelet[1834]: I0213 07:56:39.639290 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-host-proc-sys-kernel\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.640204 kubelet[1834]: I0213 07:56:39.639365 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6137c99d-82ea-4fd4-a0b0-93789ace54df-clustermesh-secrets\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.640963 kubelet[1834]: I0213 07:56:39.639346 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-hostproc" (OuterVolumeSpecName: "hostproc") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.640963 kubelet[1834]: I0213 07:56:39.639419 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-bpf-maps\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.640963 kubelet[1834]: I0213 07:56:39.639421 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.640963 kubelet[1834]: I0213 07:56:39.639493 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-etc-cni-netd\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.640963 kubelet[1834]: I0213 07:56:39.639551 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-xtables-lock\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.641500 kubelet[1834]: I0213 07:56:39.639538 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.641500 kubelet[1834]: I0213 07:56:39.639650 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cshj\" (UniqueName: \"kubernetes.io/projected/6137c99d-82ea-4fd4-a0b0-93789ace54df-kube-api-access-6cshj\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.641500 kubelet[1834]: I0213 07:56:39.639668 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.641500 kubelet[1834]: I0213 07:56:39.639688 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.641500 kubelet[1834]: I0213 07:56:39.639747 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-run\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.642074 kubelet[1834]: I0213 07:56:39.639800 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:56:39.642074 kubelet[1834]: I0213 07:56:39.639880 1834 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-config-path\") pod \"6137c99d-82ea-4fd4-a0b0-93789ace54df\" (UID: \"6137c99d-82ea-4fd4-a0b0-93789ace54df\") " Feb 13 07:56:39.642074 kubelet[1834]: I0213 07:56:39.639999 1834 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-run\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.642074 kubelet[1834]: I0213 07:56:39.640066 1834 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cni-path\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.642074 kubelet[1834]: I0213 07:56:39.640126 1834 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-host-proc-sys-net\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.642074 kubelet[1834]: I0213 07:56:39.640182 1834 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-lib-modules\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.642074 kubelet[1834]: I0213 07:56:39.640231 1834 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-hostproc\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.642802 kubelet[1834]: I0213 07:56:39.640290 1834 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-host-proc-sys-kernel\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.642802 kubelet[1834]: I0213 07:56:39.640345 1834 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-bpf-maps\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.642802 kubelet[1834]: I0213 07:56:39.640400 1834 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-cgroup\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.642802 kubelet[1834]: I0213 07:56:39.640439 1834 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-xtables-lock\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.642802 kubelet[1834]: I0213 07:56:39.640468 1834 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6137c99d-82ea-4fd4-a0b0-93789ace54df-etc-cni-netd\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.644198 kubelet[1834]: I0213 07:56:39.644130 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 07:56:39.644529 kubelet[1834]: I0213 07:56:39.644503 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:56:39.644601 kubelet[1834]: I0213 07:56:39.644542 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6137c99d-82ea-4fd4-a0b0-93789ace54df-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:56:39.644657 kubelet[1834]: I0213 07:56:39.644601 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6137c99d-82ea-4fd4-a0b0-93789ace54df-kube-api-access-6cshj" (OuterVolumeSpecName: "kube-api-access-6cshj") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "kube-api-access-6cshj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:56:39.644690 kubelet[1834]: I0213 07:56:39.644653 1834 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6137c99d-82ea-4fd4-a0b0-93789ace54df-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6137c99d-82ea-4fd4-a0b0-93789ace54df" (UID: "6137c99d-82ea-4fd4-a0b0-93789ace54df"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:56:39.686145 kubelet[1834]: E0213 07:56:39.686115 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:39.741591 kubelet[1834]: I0213 07:56:39.741489 1834 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-ipsec-secrets\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.741591 kubelet[1834]: I0213 07:56:39.741594 1834 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6137c99d-82ea-4fd4-a0b0-93789ace54df-clustermesh-secrets\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.742068 kubelet[1834]: I0213 07:56:39.741654 1834 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6137c99d-82ea-4fd4-a0b0-93789ace54df-hubble-tls\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.742068 kubelet[1834]: I0213 07:56:39.741706 1834 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6cshj\" (UniqueName: \"kubernetes.io/projected/6137c99d-82ea-4fd4-a0b0-93789ace54df-kube-api-access-6cshj\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:39.742068 kubelet[1834]: I0213 07:56:39.741757 1834 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6137c99d-82ea-4fd4-a0b0-93789ace54df-cilium-config-path\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 07:56:40.242748 systemd[1]: var-lib-kubelet-pods-6137c99d\x2d82ea\x2d4fd4\x2da0b0\x2d93789ace54df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6cshj.mount: Deactivated successfully. 
Feb 13 07:56:40.242803 systemd[1]: var-lib-kubelet-pods-6137c99d\x2d82ea\x2d4fd4\x2da0b0\x2d93789ace54df-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 07:56:40.242837 systemd[1]: var-lib-kubelet-pods-6137c99d\x2d82ea\x2d4fd4\x2da0b0\x2d93789ace54df-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 13 07:56:40.242867 systemd[1]: var-lib-kubelet-pods-6137c99d\x2d82ea\x2d4fd4\x2da0b0\x2d93789ace54df-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 07:56:40.565279 kubelet[1834]: E0213 07:56:40.565104 1834 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 07:56:40.581072 systemd[1]: Removed slice kubepods-burstable-pod6137c99d_82ea_4fd4_a0b0_93789ace54df.slice. Feb 13 07:56:40.605118 kubelet[1834]: I0213 07:56:40.605069 1834 topology_manager.go:215] "Topology Admit Handler" podUID="7eb23ece-1007-4632-9acb-4ecfc8fecab7" podNamespace="kube-system" podName="cilium-dxs2t" Feb 13 07:56:40.609269 systemd[1]: Created slice kubepods-burstable-pod7eb23ece_1007_4632_9acb_4ecfc8fecab7.slice. 
Feb 13 07:56:40.648942 kubelet[1834]: I0213 07:56:40.648866 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-cilium-cgroup\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.649291 kubelet[1834]: I0213 07:56:40.649083 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7eb23ece-1007-4632-9acb-4ecfc8fecab7-hubble-tls\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.649291 kubelet[1834]: I0213 07:56:40.649218 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-hostproc\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.649689 kubelet[1834]: I0213 07:56:40.649335 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-xtables-lock\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.649689 kubelet[1834]: I0213 07:56:40.649444 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7eb23ece-1007-4632-9acb-4ecfc8fecab7-cilium-config-path\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.649689 kubelet[1834]: I0213 07:56:40.649552 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7eb23ece-1007-4632-9acb-4ecfc8fecab7-cilium-ipsec-secrets\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.649689 kubelet[1834]: I0213 07:56:40.649687 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-host-proc-sys-kernel\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.650311 kubelet[1834]: I0213 07:56:40.649794 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rcvm\" (UniqueName: \"kubernetes.io/projected/7eb23ece-1007-4632-9acb-4ecfc8fecab7-kube-api-access-8rcvm\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.650311 kubelet[1834]: I0213 07:56:40.649902 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-cilium-run\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.650311 kubelet[1834]: I0213 07:56:40.650015 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-bpf-maps\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.650311 kubelet[1834]: I0213 07:56:40.650107 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-lib-modules\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.650311 kubelet[1834]: I0213 07:56:40.650204 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-host-proc-sys-net\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.650311 kubelet[1834]: I0213 07:56:40.650299 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-cni-path\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.650963 kubelet[1834]: I0213 07:56:40.650386 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7eb23ece-1007-4632-9acb-4ecfc8fecab7-etc-cni-netd\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.650963 kubelet[1834]: I0213 07:56:40.650488 1834 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7eb23ece-1007-4632-9acb-4ecfc8fecab7-clustermesh-secrets\") pod \"cilium-dxs2t\" (UID: \"7eb23ece-1007-4632-9acb-4ecfc8fecab7\") " pod="kube-system/cilium-dxs2t" Feb 13 07:56:40.687233 kubelet[1834]: E0213 07:56:40.687126 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:40.923767 env[1469]: time="2024-02-13T07:56:40.923672259Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-dxs2t,Uid:7eb23ece-1007-4632-9acb-4ecfc8fecab7,Namespace:kube-system,Attempt:0,}" Feb 13 07:56:40.940240 env[1469]: time="2024-02-13T07:56:40.940184445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:56:40.940311 env[1469]: time="2024-02-13T07:56:40.940237377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:56:40.940311 env[1469]: time="2024-02-13T07:56:40.940268220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:56:40.940408 env[1469]: time="2024-02-13T07:56:40.940360879Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350 pid=3449 runtime=io.containerd.runc.v2 Feb 13 07:56:40.946300 systemd[1]: Started cri-containerd-4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350.scope. 
Feb 13 07:56:40.957724 env[1469]: time="2024-02-13T07:56:40.957699149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxs2t,Uid:7eb23ece-1007-4632-9acb-4ecfc8fecab7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\"" Feb 13 07:56:40.958899 env[1469]: time="2024-02-13T07:56:40.958886237Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:56:40.963718 env[1469]: time="2024-02-13T07:56:40.963671744Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"233b3aaf33b31f06dc4cb96cc7081ff6bccb0317fc8a3844bcf338a917615a35\"" Feb 13 07:56:40.963951 env[1469]: time="2024-02-13T07:56:40.963901315Z" level=info msg="StartContainer for \"233b3aaf33b31f06dc4cb96cc7081ff6bccb0317fc8a3844bcf338a917615a35\"" Feb 13 07:56:40.971580 systemd[1]: Started cri-containerd-233b3aaf33b31f06dc4cb96cc7081ff6bccb0317fc8a3844bcf338a917615a35.scope. Feb 13 07:56:40.985475 env[1469]: time="2024-02-13T07:56:40.985449628Z" level=info msg="StartContainer for \"233b3aaf33b31f06dc4cb96cc7081ff6bccb0317fc8a3844bcf338a917615a35\" returns successfully" Feb 13 07:56:40.991442 systemd[1]: cri-containerd-233b3aaf33b31f06dc4cb96cc7081ff6bccb0317fc8a3844bcf338a917615a35.scope: Deactivated successfully. 
Feb 13 07:56:41.010992 env[1469]: time="2024-02-13T07:56:41.010954854Z" level=info msg="shim disconnected" id=233b3aaf33b31f06dc4cb96cc7081ff6bccb0317fc8a3844bcf338a917615a35 Feb 13 07:56:41.010992 env[1469]: time="2024-02-13T07:56:41.010991967Z" level=warning msg="cleaning up after shim disconnected" id=233b3aaf33b31f06dc4cb96cc7081ff6bccb0317fc8a3844bcf338a917615a35 namespace=k8s.io Feb 13 07:56:41.011172 env[1469]: time="2024-02-13T07:56:41.011001144Z" level=info msg="cleaning up dead shim" Feb 13 07:56:41.016846 env[1469]: time="2024-02-13T07:56:41.016785872Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:56:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3532 runtime=io.containerd.runc.v2\n" Feb 13 07:56:41.502082 kubelet[1834]: I0213 07:56:41.502023 1834 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6137c99d-82ea-4fd4-a0b0-93789ace54df" path="/var/lib/kubelet/pods/6137c99d-82ea-4fd4-a0b0-93789ace54df/volumes" Feb 13 07:56:41.586772 env[1469]: time="2024-02-13T07:56:41.586674811Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 07:56:41.600968 env[1469]: time="2024-02-13T07:56:41.600839630Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a0e32b9ae2bee110e89af62f77d50d0e799486d9c970a9c317c0a6f9093d4164\"" Feb 13 07:56:41.601341 env[1469]: time="2024-02-13T07:56:41.601306654Z" level=info msg="StartContainer for \"a0e32b9ae2bee110e89af62f77d50d0e799486d9c970a9c317c0a6f9093d4164\"" Feb 13 07:56:41.609521 systemd[1]: Started cri-containerd-a0e32b9ae2bee110e89af62f77d50d0e799486d9c970a9c317c0a6f9093d4164.scope. 
Feb 13 07:56:41.620364 env[1469]: time="2024-02-13T07:56:41.620340950Z" level=info msg="StartContainer for \"a0e32b9ae2bee110e89af62f77d50d0e799486d9c970a9c317c0a6f9093d4164\" returns successfully" Feb 13 07:56:41.624193 systemd[1]: cri-containerd-a0e32b9ae2bee110e89af62f77d50d0e799486d9c970a9c317c0a6f9093d4164.scope: Deactivated successfully. Feb 13 07:56:41.633518 env[1469]: time="2024-02-13T07:56:41.633491871Z" level=info msg="shim disconnected" id=a0e32b9ae2bee110e89af62f77d50d0e799486d9c970a9c317c0a6f9093d4164 Feb 13 07:56:41.633518 env[1469]: time="2024-02-13T07:56:41.633519171Z" level=warning msg="cleaning up after shim disconnected" id=a0e32b9ae2bee110e89af62f77d50d0e799486d9c970a9c317c0a6f9093d4164 namespace=k8s.io Feb 13 07:56:41.633638 env[1469]: time="2024-02-13T07:56:41.633524655Z" level=info msg="cleaning up dead shim" Feb 13 07:56:41.637065 env[1469]: time="2024-02-13T07:56:41.637016483Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:56:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3592 runtime=io.containerd.runc.v2\n" Feb 13 07:56:41.687920 kubelet[1834]: E0213 07:56:41.687808 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:42.243460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0e32b9ae2bee110e89af62f77d50d0e799486d9c970a9c317c0a6f9093d4164-rootfs.mount: Deactivated successfully. 
Feb 13 07:56:42.593024 env[1469]: time="2024-02-13T07:56:42.592794311Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 07:56:42.622424 env[1469]: time="2024-02-13T07:56:42.622360212Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"37425eb4f882a863086fb7394ff51fa196ad53511a0c1dc7e15e15ff97e3e568\"" Feb 13 07:56:42.622734 env[1469]: time="2024-02-13T07:56:42.622691112Z" level=info msg="StartContainer for \"37425eb4f882a863086fb7394ff51fa196ad53511a0c1dc7e15e15ff97e3e568\"" Feb 13 07:56:42.632185 systemd[1]: Started cri-containerd-37425eb4f882a863086fb7394ff51fa196ad53511a0c1dc7e15e15ff97e3e568.scope. Feb 13 07:56:42.645379 env[1469]: time="2024-02-13T07:56:42.645348177Z" level=info msg="StartContainer for \"37425eb4f882a863086fb7394ff51fa196ad53511a0c1dc7e15e15ff97e3e568\" returns successfully" Feb 13 07:56:42.647049 systemd[1]: cri-containerd-37425eb4f882a863086fb7394ff51fa196ad53511a0c1dc7e15e15ff97e3e568.scope: Deactivated successfully. 
Feb 13 07:56:42.658407 env[1469]: time="2024-02-13T07:56:42.658378212Z" level=info msg="shim disconnected" id=37425eb4f882a863086fb7394ff51fa196ad53511a0c1dc7e15e15ff97e3e568 Feb 13 07:56:42.658407 env[1469]: time="2024-02-13T07:56:42.658406257Z" level=warning msg="cleaning up after shim disconnected" id=37425eb4f882a863086fb7394ff51fa196ad53511a0c1dc7e15e15ff97e3e568 namespace=k8s.io Feb 13 07:56:42.658531 env[1469]: time="2024-02-13T07:56:42.658412261Z" level=info msg="cleaning up dead shim" Feb 13 07:56:42.662015 env[1469]: time="2024-02-13T07:56:42.661965391Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:56:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3649 runtime=io.containerd.runc.v2\n" Feb 13 07:56:42.689146 kubelet[1834]: E0213 07:56:42.689054 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:43.245147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37425eb4f882a863086fb7394ff51fa196ad53511a0c1dc7e15e15ff97e3e568-rootfs.mount: Deactivated successfully. 
Feb 13 07:56:43.601004 env[1469]: time="2024-02-13T07:56:43.600780682Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 07:56:43.615664 env[1469]: time="2024-02-13T07:56:43.615539425Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b69e0e7763c92873e75fdb0d4b34da07fb4d9fa1aef95f6434a2d7e5cbf3f055\"" Feb 13 07:56:43.616354 env[1469]: time="2024-02-13T07:56:43.616277673Z" level=info msg="StartContainer for \"b69e0e7763c92873e75fdb0d4b34da07fb4d9fa1aef95f6434a2d7e5cbf3f055\"" Feb 13 07:56:43.624980 systemd[1]: Started cri-containerd-b69e0e7763c92873e75fdb0d4b34da07fb4d9fa1aef95f6434a2d7e5cbf3f055.scope. Feb 13 07:56:43.637401 env[1469]: time="2024-02-13T07:56:43.637324649Z" level=info msg="StartContainer for \"b69e0e7763c92873e75fdb0d4b34da07fb4d9fa1aef95f6434a2d7e5cbf3f055\" returns successfully" Feb 13 07:56:43.637861 systemd[1]: cri-containerd-b69e0e7763c92873e75fdb0d4b34da07fb4d9fa1aef95f6434a2d7e5cbf3f055.scope: Deactivated successfully. 
Feb 13 07:56:43.647184 env[1469]: time="2024-02-13T07:56:43.647130939Z" level=info msg="shim disconnected" id=b69e0e7763c92873e75fdb0d4b34da07fb4d9fa1aef95f6434a2d7e5cbf3f055 Feb 13 07:56:43.647184 env[1469]: time="2024-02-13T07:56:43.647155443Z" level=warning msg="cleaning up after shim disconnected" id=b69e0e7763c92873e75fdb0d4b34da07fb4d9fa1aef95f6434a2d7e5cbf3f055 namespace=k8s.io Feb 13 07:56:43.647184 env[1469]: time="2024-02-13T07:56:43.647161292Z" level=info msg="cleaning up dead shim" Feb 13 07:56:43.650644 env[1469]: time="2024-02-13T07:56:43.650593862Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:56:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3702 runtime=io.containerd.runc.v2\n" Feb 13 07:56:43.690154 kubelet[1834]: E0213 07:56:43.690089 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:43.951959 kubelet[1834]: I0213 07:56:43.951848 1834 setters.go:552] "Node became not ready" node="10.67.80.11" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-13T07:56:43Z","lastTransitionTime":"2024-02-13T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 07:56:44.244247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b69e0e7763c92873e75fdb0d4b34da07fb4d9fa1aef95f6434a2d7e5cbf3f055-rootfs.mount: Deactivated successfully. 
Feb 13 07:56:44.610322 env[1469]: time="2024-02-13T07:56:44.610089918Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 07:56:44.628119 env[1469]: time="2024-02-13T07:56:44.627999386Z" level=info msg="CreateContainer within sandbox \"4a4c1a4a795ef52272060d7db8997aaf13bd1006efed0337adb4dc4466266350\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de236f00a9ef550143d593688611ae23d34ee3f331cd29ed468cbe65912283e0\"" Feb 13 07:56:44.628817 env[1469]: time="2024-02-13T07:56:44.628744723Z" level=info msg="StartContainer for \"de236f00a9ef550143d593688611ae23d34ee3f331cd29ed468cbe65912283e0\"" Feb 13 07:56:44.643339 systemd[1]: Started cri-containerd-de236f00a9ef550143d593688611ae23d34ee3f331cd29ed468cbe65912283e0.scope. Feb 13 07:56:44.656387 env[1469]: time="2024-02-13T07:56:44.656362269Z" level=info msg="StartContainer for \"de236f00a9ef550143d593688611ae23d34ee3f331cd29ed468cbe65912283e0\" returns successfully" Feb 13 07:56:44.690545 kubelet[1834]: E0213 07:56:44.690495 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:44.796569 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 07:56:45.649518 kubelet[1834]: I0213 07:56:45.649406 1834 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dxs2t" podStartSLOduration=5.64930586 podCreationTimestamp="2024-02-13 07:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:56:45.649025313 +0000 UTC m=+390.583127348" watchObservedRunningTime="2024-02-13 07:56:45.64930586 +0000 UTC m=+390.583407867" Feb 13 07:56:45.691332 kubelet[1834]: E0213 07:56:45.691229 1834 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:46.692532 kubelet[1834]: E0213 07:56:46.692424 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:47.566824 systemd-networkd[1313]: lxc_health: Link UP Feb 13 07:56:47.591493 systemd-networkd[1313]: lxc_health: Gained carrier Feb 13 07:56:47.591615 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 07:56:47.692680 kubelet[1834]: E0213 07:56:47.692629 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:48.693330 kubelet[1834]: E0213 07:56:48.693284 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:48.833730 systemd-networkd[1313]: lxc_health: Gained IPv6LL Feb 13 07:56:49.693903 kubelet[1834]: E0213 07:56:49.693844 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:50.694663 kubelet[1834]: E0213 07:56:50.694593 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:51.695487 kubelet[1834]: E0213 07:56:51.695374 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:52.696627 kubelet[1834]: E0213 07:56:52.696523 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:53.526062 systemd[1]: Started sshd@31-147.75.90.7:22-129.226.4.248:49812.service. 
Feb 13 07:56:53.697506 kubelet[1834]: E0213 07:56:53.697402 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:54.643669 sshd[4510]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=129.226.4.248 user=root Feb 13 07:56:54.698213 kubelet[1834]: E0213 07:56:54.698157 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:55.401134 kubelet[1834]: E0213 07:56:55.401025 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:55.699508 kubelet[1834]: E0213 07:56:55.699278 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:56.700438 kubelet[1834]: E0213 07:56:56.700328 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:56.816432 sshd[4510]: Failed password for root from 129.226.4.248 port 49812 ssh2 Feb 13 07:56:57.255701 systemd[1]: Started sshd@32-147.75.90.7:22-103.147.242.96:54841.service. Feb 13 07:56:57.701235 kubelet[1834]: E0213 07:56:57.701125 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:57.925721 sshd[4510]: Received disconnect from 129.226.4.248 port 49812:11: Bye Bye [preauth] Feb 13 07:56:57.925721 sshd[4510]: Disconnected from authenticating user root 129.226.4.248 port 49812 [preauth] Feb 13 07:56:57.928173 systemd[1]: sshd@31-147.75.90.7:22-129.226.4.248:49812.service: Deactivated successfully. 
Feb 13 07:56:58.451013 sshd[4568]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.147.242.96 user=root Feb 13 07:56:58.702297 kubelet[1834]: E0213 07:56:58.702070 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:56:59.702942 kubelet[1834]: E0213 07:56:59.702867 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:57:00.703867 kubelet[1834]: E0213 07:57:00.703746 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:57:00.839418 sshd[4568]: Failed password for root from 103.147.242.96 port 54841 ssh2 Feb 13 07:57:01.705005 kubelet[1834]: E0213 07:57:01.704885 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:57:01.749091 sshd[4568]: Received disconnect from 103.147.242.96 port 54841:11: Bye Bye [preauth] Feb 13 07:57:01.749091 sshd[4568]: Disconnected from authenticating user root 103.147.242.96 port 54841 [preauth] Feb 13 07:57:01.751692 systemd[1]: sshd@32-147.75.90.7:22-103.147.242.96:54841.service: Deactivated successfully. Feb 13 07:57:02.706018 kubelet[1834]: E0213 07:57:02.705913 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:57:03.236672 systemd[1]: Started sshd@33-147.75.90.7:22-184.168.31.172:58268.service. 
Feb 13 07:57:03.428146 sshd[4657]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=184.168.31.172 user=root
Feb 13 07:57:03.706693 kubelet[1834]: E0213 07:57:03.706580 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:04.707836 kubelet[1834]: E0213 07:57:04.707716 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:05.169256 sshd[4657]: Failed password for root from 184.168.31.172 port 58268 ssh2
Feb 13 07:57:05.708249 kubelet[1834]: E0213 07:57:05.708141 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:06.525353 sshd[4657]: Received disconnect from 184.168.31.172 port 58268:11: Bye Bye [preauth]
Feb 13 07:57:06.525353 sshd[4657]: Disconnected from authenticating user root 184.168.31.172 port 58268 [preauth]
Feb 13 07:57:06.527922 systemd[1]: sshd@33-147.75.90.7:22-184.168.31.172:58268.service: Deactivated successfully.
Feb 13 07:57:06.708852 kubelet[1834]: E0213 07:57:06.708749 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:07.710115 kubelet[1834]: E0213 07:57:07.710004 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:08.710588 kubelet[1834]: E0213 07:57:08.710460 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:09.711937 kubelet[1834]: E0213 07:57:09.711822 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:10.712122 kubelet[1834]: E0213 07:57:10.712009 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:11.712490 kubelet[1834]: E0213 07:57:11.712380 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:12.713758 kubelet[1834]: E0213 07:57:12.713630 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:13.714473 kubelet[1834]: E0213 07:57:13.714346 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:14.715331 kubelet[1834]: E0213 07:57:14.715217 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:15.401355 kubelet[1834]: E0213 07:57:15.401234 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:15.437244 env[1469]: time="2024-02-13T07:57:15.437086511Z" level=info msg="StopPodSandbox for \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\""
Feb 13 07:57:15.438105 env[1469]: time="2024-02-13T07:57:15.437320494Z" level=info msg="TearDown network for sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" successfully"
Feb 13 07:57:15.438105 env[1469]: time="2024-02-13T07:57:15.437417243Z" level=info msg="StopPodSandbox for \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" returns successfully"
Feb 13 07:57:15.438347 env[1469]: time="2024-02-13T07:57:15.438276017Z" level=info msg="RemovePodSandbox for \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\""
Feb 13 07:57:15.438467 env[1469]: time="2024-02-13T07:57:15.438358397Z" level=info msg="Forcibly stopping sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\""
Feb 13 07:57:15.438668 env[1469]: time="2024-02-13T07:57:15.438578496Z" level=info msg="TearDown network for sandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" successfully"
Feb 13 07:57:15.459694 env[1469]: time="2024-02-13T07:57:15.459616909Z" level=info msg="RemovePodSandbox \"6bab1890157f866293f3d75f80dc5691cf67c1f5473299db923e13f912e48d79\" returns successfully"
Feb 13 07:57:15.716482 kubelet[1834]: E0213 07:57:15.716259 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:16.716674 kubelet[1834]: E0213 07:57:16.716534 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:17.716882 kubelet[1834]: E0213 07:57:17.716764 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:18.089999 systemd[1]: Started sshd@34-147.75.90.7:22-128.199.168.119:42930.service.
Feb 13 07:57:18.717239 kubelet[1834]: E0213 07:57:18.717194 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:19.096407 sshd[4860]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=128.199.168.119 user=root
Feb 13 07:57:19.717965 kubelet[1834]: E0213 07:57:19.717864 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:20.718115 kubelet[1834]: E0213 07:57:20.718013 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:21.700740 sshd[4860]: Failed password for root from 128.199.168.119 port 42930 ssh2
Feb 13 07:57:21.718612 kubelet[1834]: E0213 07:57:21.718522 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:22.355740 sshd[4860]: Received disconnect from 128.199.168.119 port 42930:11: Bye Bye [preauth]
Feb 13 07:57:22.355740 sshd[4860]: Disconnected from authenticating user root 128.199.168.119 port 42930 [preauth]
Feb 13 07:57:22.358537 systemd[1]: sshd@34-147.75.90.7:22-128.199.168.119:42930.service: Deactivated successfully.
Feb 13 07:57:22.719619 kubelet[1834]: E0213 07:57:22.719470 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:23.719754 kubelet[1834]: E0213 07:57:23.719643 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:24.720621 kubelet[1834]: E0213 07:57:24.720517 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:25.720855 kubelet[1834]: E0213 07:57:25.720759 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:26.721886 kubelet[1834]: E0213 07:57:26.721691 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:27.721955 kubelet[1834]: E0213 07:57:27.721887 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:28.722777 kubelet[1834]: E0213 07:57:28.722696 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:29.723063 kubelet[1834]: E0213 07:57:29.722950 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:30.724142 kubelet[1834]: E0213 07:57:30.724028 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:31.725402 kubelet[1834]: E0213 07:57:31.725289 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:32.725934 kubelet[1834]: E0213 07:57:32.725822 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:33.726959 kubelet[1834]: E0213 07:57:33.726884 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:34.727343 kubelet[1834]: E0213 07:57:34.727233 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:35.400937 kubelet[1834]: E0213 07:57:35.400827 1834 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:35.728314 kubelet[1834]: E0213 07:57:35.728116 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:36.728966 kubelet[1834]: E0213 07:57:36.728846 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:37.729873 kubelet[1834]: E0213 07:57:37.729754 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:38.730810 kubelet[1834]: E0213 07:57:38.730728 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:39.731345 kubelet[1834]: E0213 07:57:39.731273 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:57:40.731873 kubelet[1834]: E0213 07:57:40.731761 1834 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"