Feb 13 07:15:28.549644 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 13 07:15:28.549656 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 07:15:28.549662 kernel: BIOS-provided physical RAM map: Feb 13 07:15:28.549666 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Feb 13 07:15:28.549670 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Feb 13 07:15:28.549673 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Feb 13 07:15:28.549678 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Feb 13 07:15:28.549682 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Feb 13 07:15:28.549685 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000082589fff] usable Feb 13 07:15:28.549689 kernel: BIOS-e820: [mem 0x000000008258a000-0x000000008258afff] ACPI NVS Feb 13 07:15:28.549694 kernel: BIOS-e820: [mem 0x000000008258b000-0x000000008258bfff] reserved Feb 13 07:15:28.549698 kernel: BIOS-e820: [mem 0x000000008258c000-0x000000008afccfff] usable Feb 13 07:15:28.549701 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Feb 13 07:15:28.549705 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Feb 13 07:15:28.549710 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Feb 13 07:15:28.549715 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Feb 13 07:15:28.549719 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Feb 13 07:15:28.549723 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Feb 13 07:15:28.549727 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 07:15:28.549731 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Feb 13 07:15:28.549735 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Feb 13 07:15:28.549739 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 13 07:15:28.549744 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Feb 13 07:15:28.549748 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Feb 13 07:15:28.549752 kernel: NX (Execute Disable) protection: active Feb 13 07:15:28.549756 kernel: SMBIOS 3.2.1 present. 
Feb 13 07:15:28.549761 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Feb 13 07:15:28.549765 kernel: tsc: Detected 3400.000 MHz processor Feb 13 07:15:28.549769 kernel: tsc: Detected 3399.906 MHz TSC Feb 13 07:15:28.549773 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 07:15:28.549778 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 07:15:28.549782 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Feb 13 07:15:28.549786 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 07:15:28.549791 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Feb 13 07:15:28.549795 kernel: Using GB pages for direct mapping Feb 13 07:15:28.549799 kernel: ACPI: Early table checksum verification disabled Feb 13 07:15:28.549804 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Feb 13 07:15:28.549808 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 13 07:15:28.549812 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Feb 13 07:15:28.549817 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 13 07:15:28.549823 kernel: ACPI: FACS 0x000000008C66CF80 000040 Feb 13 07:15:28.549827 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Feb 13 07:15:28.549833 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Feb 13 07:15:28.549837 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 13 07:15:28.549842 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 13 07:15:28.549846 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Feb 13 07:15:28.549851 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 13 07:15:28.549856 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 13 07:15:28.549860 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 13 07:15:28.549865 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:15:28.549870 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 13 07:15:28.549875 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 13 07:15:28.549879 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:15:28.549884 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:15:28.549888 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 13 07:15:28.549893 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 13 07:15:28.549898 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:15:28.549902 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:15:28.549908 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 13 07:15:28.549912 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Feb 13 07:15:28.549917 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 13 07:15:28.549921 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 13 
07:15:28.549926 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 13 07:15:28.549931 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Feb 13 07:15:28.549935 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 13 07:15:28.549940 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 13 07:15:28.549945 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 13 07:15:28.549950 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Feb 13 07:15:28.549954 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 13 07:15:28.549959 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Feb 13 07:15:28.549964 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Feb 13 07:15:28.549968 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Feb 13 07:15:28.549973 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Feb 13 07:15:28.549977 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Feb 13 07:15:28.549982 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Feb 13 07:15:28.549986 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Feb 13 07:15:28.549992 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Feb 13 07:15:28.549996 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Feb 13 07:15:28.550001 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Feb 13 07:15:28.550005 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Feb 13 07:15:28.550010 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Feb 13 07:15:28.550014 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Feb 13 07:15:28.550019 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Feb 13 07:15:28.550023 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Feb 13 07:15:28.550028 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Feb 13 07:15:28.550033 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Feb 13 07:15:28.550038 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Feb 13 07:15:28.550043 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Feb 13 07:15:28.550047 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Feb 13 07:15:28.550052 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Feb 13 07:15:28.550056 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Feb 13 07:15:28.550061 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Feb 13 07:15:28.550065 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Feb 13 07:15:28.550070 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Feb 13 07:15:28.550075 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Feb 13 07:15:28.550079 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Feb 13 07:15:28.550084 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Feb 13 07:15:28.550088 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Feb 13 
07:15:28.550093 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Feb 13 07:15:28.550098 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Feb 13 07:15:28.550102 kernel: No NUMA configuration found Feb 13 07:15:28.550107 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Feb 13 07:15:28.550111 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Feb 13 07:15:28.550117 kernel: Zone ranges: Feb 13 07:15:28.550121 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 07:15:28.550126 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 07:15:28.550130 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Feb 13 07:15:28.550135 kernel: Movable zone start for each node Feb 13 07:15:28.550139 kernel: Early memory node ranges Feb 13 07:15:28.550144 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 13 07:15:28.550148 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 13 07:15:28.550153 kernel: node 0: [mem 0x0000000040400000-0x0000000082589fff] Feb 13 07:15:28.550158 kernel: node 0: [mem 0x000000008258c000-0x000000008afccfff] Feb 13 07:15:28.550163 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Feb 13 07:15:28.550167 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Feb 13 07:15:28.550172 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Feb 13 07:15:28.550176 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Feb 13 07:15:28.550181 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 07:15:28.550189 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 13 07:15:28.550194 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 13 07:15:28.550199 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 13 07:15:28.550204 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Feb 13 07:15:28.550210 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Feb 13 07:15:28.550215 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Feb 13 07:15:28.550220 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Feb 13 07:15:28.550225 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 13 07:15:28.550229 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 07:15:28.550234 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 07:15:28.550239 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 07:15:28.550245 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 07:15:28.550250 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 13 07:15:28.550254 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 07:15:28.550259 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 07:15:28.550264 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 07:15:28.550269 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 07:15:28.550274 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 07:15:28.550279 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 07:15:28.550283 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 07:15:28.550289 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 07:15:28.550294 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 07:15:28.550299 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 07:15:28.550303 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x10] high edge lint[0x1]) Feb 13 07:15:28.550308 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 13 07:15:28.550313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 07:15:28.550318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 07:15:28.550323 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 07:15:28.550328 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 07:15:28.550333 kernel: TSC deadline timer available Feb 13 07:15:28.550338 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 13 07:15:28.550343 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Feb 13 07:15:28.550348 kernel: Booting paravirtualized kernel on bare hardware Feb 13 07:15:28.550353 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 07:15:28.550358 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Feb 13 07:15:28.550363 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Feb 13 07:15:28.550368 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 13 07:15:28.550375 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 07:15:28.550396 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Feb 13 07:15:28.550401 kernel: Policy zone: Normal Feb 13 07:15:28.550407 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 07:15:28.550412 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 07:15:28.550417 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 13 07:15:28.550422 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 13 07:15:28.550427 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 07:15:28.550445 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved) Feb 13 07:15:28.550451 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 07:15:28.550456 kernel: ftrace: allocating 34475 entries in 135 pages Feb 13 07:15:28.550461 kernel: ftrace: allocated 135 pages with 4 groups Feb 13 07:15:28.550466 kernel: rcu: Hierarchical RCU implementation. Feb 13 07:15:28.550472 kernel: rcu: RCU event tracing is enabled. Feb 13 07:15:28.550477 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 07:15:28.550481 kernel: Rude variant of Tasks RCU enabled. Feb 13 07:15:28.550486 kernel: Tracing variant of Tasks RCU enabled. Feb 13 07:15:28.550491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 07:15:28.550497 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 07:15:28.550502 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 13 07:15:28.550507 kernel: random: crng init done Feb 13 07:15:28.550512 kernel: Console: colour dummy device 80x25 Feb 13 07:15:28.550517 kernel: printk: console [tty0] enabled Feb 13 07:15:28.550522 kernel: printk: console [ttyS1] enabled Feb 13 07:15:28.550526 kernel: ACPI: Core revision 20210730 Feb 13 07:15:28.550531 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Feb 13 07:15:28.550536 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 07:15:28.550542 kernel: DMAR: Host address width 39 Feb 13 07:15:28.550547 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 13 07:15:28.550552 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 13 07:15:28.550557 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Feb 13 07:15:28.550562 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Feb 13 07:15:28.550566 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 13 07:15:28.550571 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 13 07:15:28.550576 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 13 07:15:28.550581 kernel: x2apic enabled Feb 13 07:15:28.550587 kernel: Switched APIC routing to cluster x2apic. Feb 13 07:15:28.550592 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 13 07:15:28.550597 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 13 07:15:28.550602 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 13 07:15:28.550607 kernel: process: using mwait in idle threads Feb 13 07:15:28.550611 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 07:15:28.550616 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 07:15:28.550621 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 07:15:28.550626 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 13 07:15:28.550631 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 13 07:15:28.550636 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 07:15:28.550641 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 07:15:28.550646 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 07:15:28.550651 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 07:15:28.550656 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 13 07:15:28.550660 kernel: TAA: Mitigation: TSX disabled Feb 13 07:15:28.550665 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 13 07:15:28.550670 kernel: SRBDS: Mitigation: Microcode Feb 13 07:15:28.550675 kernel: GDS: Vulnerable: No microcode Feb 13 07:15:28.550680 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 07:15:28.550685 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 07:15:28.550690 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 07:15:28.550695 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 07:15:28.550700 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 07:15:28.550704 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 07:15:28.550709 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 07:15:28.550714 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 07:15:28.550719 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 13 07:15:28.550724 kernel: Freeing SMP alternatives memory: 32K Feb 13 07:15:28.550728 kernel: pid_max: default: 32768 minimum: 301 Feb 13 07:15:28.550733 kernel: LSM: Security Framework initializing Feb 13 07:15:28.550738 kernel: SELinux: Initializing. Feb 13 07:15:28.550744 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 07:15:28.550748 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 07:15:28.550753 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 13 07:15:28.550758 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 07:15:28.550763 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 13 07:15:28.550768 kernel: ... version: 4 Feb 13 07:15:28.550773 kernel: ... bit width: 48 Feb 13 07:15:28.550778 kernel: ... generic registers: 4 Feb 13 07:15:28.550783 kernel: ... value mask: 0000ffffffffffff Feb 13 07:15:28.550787 kernel: ... max period: 00007fffffffffff Feb 13 07:15:28.550793 kernel: ... fixed-purpose events: 3 Feb 13 07:15:28.550798 kernel: ... event mask: 000000070000000f Feb 13 07:15:28.550803 kernel: signal: max sigframe size: 2032 Feb 13 07:15:28.550808 kernel: rcu: Hierarchical SRCU implementation. Feb 13 07:15:28.550812 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 13 07:15:28.550817 kernel: smp: Bringing up secondary CPUs ... Feb 13 07:15:28.550822 kernel: x86: Booting SMP configuration: Feb 13 07:15:28.550827 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Feb 13 07:15:28.550832 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 07:15:28.550838 kernel: #9 #10 #11 #12 #13 #14 #15 Feb 13 07:15:28.550843 kernel: smp: Brought up 1 node, 16 CPUs Feb 13 07:15:28.550847 kernel: smpboot: Max logical packages: 1 Feb 13 07:15:28.550852 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 13 07:15:28.550857 kernel: devtmpfs: initialized Feb 13 07:15:28.550862 kernel: x86/mm: Memory block size: 128MB Feb 13 07:15:28.550867 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8258a000-0x8258afff] (4096 bytes) Feb 13 07:15:28.550872 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Feb 13 07:15:28.550877 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 07:15:28.550882 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 07:15:28.550887 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 07:15:28.550892 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 07:15:28.550897 kernel: audit: initializing netlink subsys (disabled) Feb 13 07:15:28.550902 kernel: audit: type=2000 audit(1707808523.040:1): state=initialized audit_enabled=0 res=1 Feb 13 07:15:28.550907 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 07:15:28.550912 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 07:15:28.550917 kernel: cpuidle: using governor menu Feb 13 07:15:28.550921 kernel: ACPI: bus type PCI registered Feb 13 07:15:28.550927 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 07:15:28.550932 kernel: dca service started, version 1.12.1 Feb 13 07:15:28.550937 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 07:15:28.550942 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Feb 13 07:15:28.550947 kernel: PCI: Using configuration type 1 for base access Feb 13 07:15:28.550951 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 13 07:15:28.550956 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 07:15:28.550961 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 07:15:28.550966 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 07:15:28.550972 kernel: ACPI: Added _OSI(Module Device) Feb 13 07:15:28.550977 kernel: ACPI: Added _OSI(Processor Device) Feb 13 07:15:28.550982 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 07:15:28.550987 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 07:15:28.550992 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 13 07:15:28.550996 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 13 07:15:28.551001 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 13 07:15:28.551006 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 13 07:15:28.551011 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:15:28.551017 kernel: ACPI: SSDT 0xFFFF982580213200 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 13 07:15:28.551022 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Feb 13 07:15:28.551027 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:15:28.551032 kernel: ACPI: SSDT 0xFFFF982581AE2800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 13 07:15:28.551036 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:15:28.551041 kernel: ACPI: SSDT 0xFFFF982581A5A800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 13 07:15:28.551046 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:15:28.551051 kernel: ACPI: SSDT 0xFFFF982581A5F000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 13 07:15:28.551056 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:15:28.551061 kernel: ACPI: SSDT 0xFFFF98258014F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 13 07:15:28.551066 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:15:28.551071 kernel: ACPI: SSDT 0xFFFF982581AE2000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 13 07:15:28.551076 kernel: ACPI: Interpreter enabled Feb 13 07:15:28.551081 kernel: ACPI: PM: (supports S0 S5) Feb 13 07:15:28.551086 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 07:15:28.551091 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 13 07:15:28.551095 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 13 07:15:28.551100 kernel: HEST: Table parsing has been initialized. Feb 13 07:15:28.551105 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Feb 13 07:15:28.551111 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 07:15:28.551116 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 13 07:15:28.551121 kernel: ACPI: PM: Power Resource [USBC] Feb 13 07:15:28.551125 kernel: ACPI: PM: Power Resource [V0PR] Feb 13 07:15:28.551130 kernel: ACPI: PM: Power Resource [V1PR] Feb 13 07:15:28.551135 kernel: ACPI: PM: Power Resource [V2PR] Feb 13 07:15:28.551140 kernel: ACPI: PM: Power Resource [WRST] Feb 13 07:15:28.551145 kernel: ACPI: PM: Power Resource [FN00] Feb 13 07:15:28.551150 kernel: ACPI: PM: Power Resource [FN01] Feb 13 07:15:28.551155 kernel: ACPI: PM: Power Resource [FN02] Feb 13 07:15:28.551160 kernel: ACPI: PM: Power Resource [FN03] Feb 13 07:15:28.551165 kernel: ACPI: PM: Power Resource [FN04] Feb 13 07:15:28.551170 kernel: ACPI: PM: Power Resource [PIN] Feb 13 07:15:28.551174 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 13 07:15:28.551238 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 07:15:28.551282 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 13 07:15:28.551324 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 13 07:15:28.551331 kernel: PCI host bridge to bus 0000:00 Feb 13 07:15:28.551375 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 07:15:28.551448 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 07:15:28.551483 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 07:15:28.551518 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Feb 13 07:15:28.551552 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 13 07:15:28.551588 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 13 07:15:28.551639 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 13 07:15:28.551688 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 13 07:15:28.551729 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 13 07:15:28.551774 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 13 07:15:28.551815 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Feb 13 07:15:28.551860 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 13 07:15:28.551901 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Feb 13 07:15:28.551944 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 13 07:15:28.551985 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Feb 13 07:15:28.552026 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 13 07:15:28.552071 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 13 07:15:28.552111 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Feb 13 07:15:28.552151 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Feb 13 07:15:28.552195 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 13 07:15:28.552235 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 07:15:28.552281 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 13 07:15:28.552321 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 07:15:28.552364 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 13 07:15:28.552429 kernel: pci 0000:00:16.0: reg 0x10: [mem 
0x9551a000-0x9551afff 64bit] Feb 13 07:15:28.552469 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 13 07:15:28.552512 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 13 07:15:28.552552 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Feb 13 07:15:28.552593 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 13 07:15:28.552635 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 13 07:15:28.552678 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Feb 13 07:15:28.552718 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 13 07:15:28.552763 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 13 07:15:28.552804 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Feb 13 07:15:28.552843 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Feb 13 07:15:28.552883 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Feb 13 07:15:28.552923 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Feb 13 07:15:28.552970 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Feb 13 07:15:28.553012 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Feb 13 07:15:28.553053 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 13 07:15:28.553097 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 13 07:15:28.553139 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 13 07:15:28.553184 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 13 07:15:28.553225 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 13 07:15:28.553274 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 13 07:15:28.553315 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 13 07:15:28.553360 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 13 07:15:28.553405 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 13 07:15:28.553450 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Feb 13 07:15:28.553494 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Feb 13 07:15:28.553538 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 13 07:15:28.553579 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 07:15:28.553625 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 13 07:15:28.553672 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 13 07:15:28.553713 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Feb 13 07:15:28.553754 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 13 07:15:28.553800 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 13 07:15:28.553841 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 13 07:15:28.553889 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 13 07:15:28.553932 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 13 07:15:28.553976 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Feb 13 07:15:28.554018 kernel: pci 0000:01:00.0: PME# supported from D3cold Feb 13 07:15:28.554060 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 07:15:28.554102 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 07:15:28.554149 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 13 07:15:28.554191 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit 
pref] Feb 13 07:15:28.554234 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Feb 13 07:15:28.554278 kernel: pci 0000:01:00.1: PME# supported from D3cold Feb 13 07:15:28.554320 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 07:15:28.554362 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 07:15:28.554407 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 07:15:28.554448 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 07:15:28.554488 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 07:15:28.554529 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 07:15:28.554577 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 07:15:28.554621 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 07:15:28.554662 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 07:15:28.554704 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 07:15:28.554746 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 07:15:28.554787 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 07:15:28.554828 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 07:15:28.554869 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 07:15:28.554917 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 07:15:28.554960 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 07:15:28.555002 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 07:15:28.555043 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 07:15:28.555085 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 07:15:28.555125 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 07:15:28.555167 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 07:15:28.555209 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 07:15:28.555251 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 07:15:28.555296 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 07:15:28.555339 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 07:15:28.555384 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 07:15:28.555427 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 07:15:28.555469 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 07:15:28.555509 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 07:15:28.555552 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 07:15:28.555600 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 07:15:28.555649 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 07:15:28.555693 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 07:15:28.555738 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 07:15:28.555781 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 07:15:28.555826 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 07:15:28.555872 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 07:15:28.555917 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 07:15:28.555962 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 07:15:28.556006 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 
07:15:28.556048 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 07:15:28.556055 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 07:15:28.556061 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 07:15:28.556066 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 07:15:28.556073 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 07:15:28.556078 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 07:15:28.556083 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 07:15:28.556089 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 07:15:28.556094 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 07:15:28.556099 kernel: iommu: Default domain type: Translated Feb 13 07:15:28.556105 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 07:15:28.556148 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 07:15:28.556243 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 07:15:28.556290 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 07:15:28.556298 kernel: vgaarb: loaded Feb 13 07:15:28.556303 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 07:15:28.556309 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 07:15:28.556314 kernel: PTP clock support registered Feb 13 07:15:28.556319 kernel: PCI: Using ACPI for IRQ routing Feb 13 07:15:28.556325 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 07:15:28.556330 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 07:15:28.556335 kernel: e820: reserve RAM buffer [mem 0x8258a000-0x83ffffff] Feb 13 07:15:28.556342 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 07:15:28.556347 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 07:15:28.556352 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 07:15:28.556357 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 07:15:28.556362 kernel: clocksource: Switched to clocksource tsc-early Feb 13 07:15:28.556367 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 07:15:28.556375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 07:15:28.556380 kernel: pnp: PnP ACPI init Feb 13 07:15:28.556423 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 07:15:28.556465 kernel: pnp 00:02: [dma 0 disabled] Feb 13 07:15:28.556505 kernel: pnp 00:03: [dma 0 disabled] Feb 13 07:15:28.556546 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 07:15:28.556583 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 07:15:28.556622 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 07:15:28.556664 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 07:15:28.556701 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 07:15:28.556738 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 07:15:28.556775 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 07:15:28.556811 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 07:15:28.556848 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 07:15:28.556884 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 07:15:28.556924 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] 
could not be reserved Feb 13 07:15:28.556963 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 07:15:28.557000 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 07:15:28.557036 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 07:15:28.557073 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 07:15:28.557109 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 07:15:28.557144 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 07:15:28.557183 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 07:15:28.557223 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 07:15:28.557230 kernel: pnp: PnP ACPI: found 10 devices Feb 13 07:15:28.557236 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 07:15:28.557241 kernel: NET: Registered PF_INET protocol family Feb 13 07:15:28.557247 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 07:15:28.557252 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 07:15:28.557258 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 07:15:28.557264 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 07:15:28.557270 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 07:15:28.557275 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 07:15:28.557281 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 07:15:28.557286 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 07:15:28.557292 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 07:15:28.557298 kernel: NET: Registered PF_XDP protocol family Feb 13 07:15:28.557339 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 07:15:28.557382 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 07:15:28.557440 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 07:15:28.557481 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 07:15:28.557524 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 07:15:28.557566 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 07:15:28.557607 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 07:15:28.557649 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 07:15:28.557689 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 07:15:28.557730 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 07:15:28.557772 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 07:15:28.557812 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 07:15:28.557852 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 07:15:28.557893 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 07:15:28.557934 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 07:15:28.557974 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 07:15:28.558015 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 07:15:28.558055 kernel: pci 0000:00:1c.0: PCI bridge 
to [bus 05] Feb 13 07:15:28.558097 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 07:15:28.558138 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 07:15:28.558180 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 07:15:28.558221 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 07:15:28.558262 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 07:15:28.558303 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 07:15:28.558340 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 07:15:28.558378 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 07:15:28.558457 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 07:15:28.558493 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 07:15:28.558527 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 07:15:28.558561 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 07:15:28.558602 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 07:15:28.558641 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 07:15:28.558684 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 07:15:28.558722 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 07:15:28.558762 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 07:15:28.558799 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 07:15:28.558840 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 07:15:28.558879 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 07:15:28.558918 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 07:15:28.558957 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 07:15:28.558964 kernel: PCI: CLS 64 bytes, default 64 Feb 13 07:15:28.558970 kernel: DMAR: No ATSR found Feb 13 07:15:28.558975 kernel: DMAR: No SATC found Feb 13 07:15:28.558980 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 07:15:28.559019 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 07:15:28.559062 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 07:15:28.559102 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 07:15:28.559143 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 07:15:28.559183 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 07:15:28.559223 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 07:15:28.559262 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 07:15:28.559302 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 07:15:28.559342 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 07:15:28.559411 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 07:15:28.559472 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 07:15:28.559512 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 07:15:28.559553 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 07:15:28.559592 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 07:15:28.559632 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 07:15:28.559673 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 07:15:28.559713 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 13 07:15:28.559755 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 07:15:28.559795 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 
07:15:28.559835 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 07:15:28.559875 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 07:15:28.559918 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 07:15:28.559959 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 07:15:28.560000 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 07:15:28.560043 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 07:15:28.560087 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 07:15:28.560131 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 07:15:28.560139 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 07:15:28.560144 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 07:15:28.560150 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 07:15:28.560155 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 07:15:28.560160 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 07:15:28.560165 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 07:15:28.560172 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 07:15:28.560215 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 07:15:28.560222 kernel: Initialise system trusted keyrings Feb 13 07:15:28.560228 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 07:15:28.560233 kernel: Key type asymmetric registered Feb 13 07:15:28.560238 kernel: Asymmetric key parser 'x509' registered Feb 13 07:15:28.560243 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 07:15:28.560248 kernel: io scheduler mq-deadline registered Feb 13 07:15:28.560254 kernel: io scheduler kyber registered Feb 13 07:15:28.560260 kernel: io scheduler bfq registered Feb 13 07:15:28.560300 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 07:15:28.560341 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 07:15:28.560401 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 07:15:28.560462 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 07:15:28.560502 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 07:15:28.560542 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 07:15:28.560589 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 07:15:28.560597 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 07:15:28.560603 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 07:15:28.560608 kernel: pstore: Registered erst as persistent store backend Feb 13 07:15:28.560613 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 07:15:28.560618 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 07:15:28.560624 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 07:15:28.560629 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 07:15:28.560634 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 07:15:28.560678 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 07:15:28.560686 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 07:15:28.560722 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 07:15:28.560759 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 07:15:28.560796 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T07:15:27 UTC (1707808527) Feb 13 07:15:28.560832 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 07:15:28.560839 kernel: fail to initialize ptp_kvm Feb 13 07:15:28.560846 kernel: intel_pstate: Intel P-state driver initializing Feb 13 07:15:28.560851 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 07:15:28.560856 kernel: intel_pstate: HWP enabled Feb 13 07:15:28.560862 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 07:15:28.560867 kernel: vesafb: scrolling: redraw Feb 13 07:15:28.560872 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 07:15:28.560877 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000000b635c7, using 768k, total 768k Feb 13 07:15:28.560882 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 07:15:28.560888 kernel: fb0: VESA VGA frame buffer device Feb 13 07:15:28.560894 kernel: NET: Registered PF_INET6 protocol family Feb 13 07:15:28.560899 kernel: Segment Routing with IPv6 Feb 13 07:15:28.560904 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 07:15:28.560909 kernel: NET: Registered PF_PACKET protocol family Feb 13 07:15:28.560914 kernel: Key type dns_resolver registered Feb 13 07:15:28.560920 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 07:15:28.560925 kernel: microcode: Microcode Update Driver: v2.2. Feb 13 07:15:28.560930 kernel: IPI shorthand broadcast: enabled Feb 13 07:15:28.560935 kernel: sched_clock: Marking stable (1678891564, 1339888906)->(4438376202, -1419595732) Feb 13 07:15:28.560942 kernel: registered taskstats version 1 Feb 13 07:15:28.560947 kernel: Loading compiled-in X.509 certificates Feb 13 07:15:28.560952 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 07:15:28.560957 kernel: Key type .fscrypt registered Feb 13 07:15:28.560962 kernel: Key type fscrypt-provisioning registered Feb 13 07:15:28.560968 kernel: pstore: Using crash dump compression: deflate Feb 13 07:15:28.560973 kernel: ima: Allocated hash algorithm: sha1 Feb 13 07:15:28.560978 kernel: ima: No architecture policies found Feb 13 07:15:28.560983 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 07:15:28.560989 kernel: Write protecting the kernel read-only data: 28672k Feb 13 07:15:28.560995 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 07:15:28.561000 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 07:15:28.561005 kernel: Run /init as init process Feb 13 07:15:28.561010 kernel: with arguments: Feb 13 07:15:28.561016 kernel: /init Feb 13 07:15:28.561021 kernel: with environment: Feb 13 07:15:28.561026 kernel: HOME=/ Feb 13 07:15:28.561031 kernel: TERM=linux Feb 13 07:15:28.561036 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 07:15:28.561043 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 07:15:28.561050 systemd[1]: Detected architecture x86-64. 
Feb 13 07:15:28.561055 systemd[1]: Running in initrd. Feb 13 07:15:28.561060 systemd[1]: No hostname configured, using default hostname. Feb 13 07:15:28.561066 systemd[1]: Hostname set to . Feb 13 07:15:28.561071 systemd[1]: Initializing machine ID from random generator. Feb 13 07:15:28.561077 systemd[1]: Queued start job for default target initrd.target. Feb 13 07:15:28.561083 systemd[1]: Started systemd-ask-password-console.path. Feb 13 07:15:28.561088 systemd[1]: Reached target cryptsetup.target. Feb 13 07:15:28.561093 systemd[1]: Reached target paths.target. Feb 13 07:15:28.561099 systemd[1]: Reached target slices.target. Feb 13 07:15:28.561104 systemd[1]: Reached target swap.target. Feb 13 07:15:28.561109 systemd[1]: Reached target timers.target. Feb 13 07:15:28.561114 systemd[1]: Listening on iscsid.socket. Feb 13 07:15:28.561121 systemd[1]: Listening on iscsiuio.socket. Feb 13 07:15:28.561126 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 07:15:28.561132 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 07:15:28.561137 systemd[1]: Listening on systemd-journald.socket. Feb 13 07:15:28.561142 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Feb 13 07:15:28.561148 systemd[1]: Listening on systemd-networkd.socket. Feb 13 07:15:28.561153 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Feb 13 07:15:28.561159 kernel: clocksource: Switched to clocksource tsc Feb 13 07:15:28.561165 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 07:15:28.561170 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 07:15:28.561176 systemd[1]: Reached target sockets.target. Feb 13 07:15:28.561181 systemd[1]: Starting kmod-static-nodes.service... Feb 13 07:15:28.561186 systemd[1]: Finished network-cleanup.service. Feb 13 07:15:28.561192 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 07:15:28.561197 systemd[1]: Starting systemd-journald.service... Feb 13 07:15:28.561203 systemd[1]: Starting systemd-modules-load.service... Feb 13 07:15:28.561210 systemd-journald[268]: Journal started Feb 13 07:15:28.561235 systemd-journald[268]: Runtime Journal (/run/log/journal/c3276af56add4cccbe0c28806671e577) is 8.0M, max 640.1M, 632.1M free. Feb 13 07:15:28.564971 systemd-modules-load[269]: Inserted module 'overlay' Feb 13 07:15:28.647463 kernel: audit: type=1334 audit(1707808528.570:2): prog-id=6 op=LOAD Feb 13 07:15:28.647489 systemd[1]: Starting systemd-resolved.service... Feb 13 07:15:28.647497 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 07:15:28.647504 kernel: Bridge firewalling registered Feb 13 07:15:28.570000 audit: BPF prog-id=6 op=LOAD Feb 13 07:15:28.655551 systemd-modules-load[269]: Inserted module 'br_netfilter' Feb 13 07:15:28.661232 systemd-resolved[271]: Positive Trust Anchors: Feb 13 07:15:28.712474 systemd[1]: Starting systemd-vconsole-setup.service... Feb 13 07:15:28.712486 kernel: SCSI subsystem initialized Feb 13 07:15:28.712493 systemd[1]: Started systemd-journald.service. Feb 13 07:15:28.661240 systemd-resolved[271]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 07:15:28.661260 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 07:15:28.865597 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 07:15:28.865618 kernel: device-mapper: uevent: version 1.0.3 Feb 13 07:15:28.865632 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 13 07:15:28.865639 kernel: audit: type=1130 audit(1707808528.799:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.662827 systemd-resolved[271]: Defaulting to hostname 'linux'. Feb 13 07:15:28.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.788304 systemd-modules-load[269]: Inserted module 'dm_multipath' Feb 13 07:15:28.966988 kernel: audit: type=1130 audit(1707808528.873:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.967003 kernel: audit: type=1130 audit(1707808528.924:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.799717 systemd[1]: Started systemd-resolved.service. Feb 13 07:15:29.018452 kernel: audit: type=1130 audit(1707808528.975:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.873773 systemd[1]: Finished kmod-static-nodes.service. Feb 13 07:15:29.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.924779 systemd[1]: Finished systemd-fsck-usr.service. 
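The positive trust anchor that systemd-resolved lists above is the root zone's DNSSEC DS record. A DS record is a fixed sequence of fields (owner, class, type, key tag, algorithm, digest type, digest), and the short sketch below, which is illustrative only and not part of the boot flow, splits the logged record into those fields; the meanings in the comments assume the standard registry values (algorithm 8 is RSA/SHA-256, digest type 2 is SHA-256).

# Illustrative only: split the DS record systemd-resolved logged into its fields.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = record.split()

print(f"owner={owner} class={rr_class} type={rr_type}")
print(f"key tag={key_tag}")          # 20326: the root key-signing key this anchor pins
print(f"algorithm={algorithm}")      # 8 = RSA/SHA-256 (registry value)
print(f"digest type={digest_type}")  # 2 = SHA-256 (registry value)
print(f"digest={digest}")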
Feb 13 07:15:29.125265 kernel: audit: type=1130 audit(1707808529.026:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:29.125277 kernel: audit: type=1130 audit(1707808529.079:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:29.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:28.975678 systemd[1]: Finished systemd-modules-load.service. Feb 13 07:15:29.026689 systemd[1]: Finished systemd-vconsole-setup.service. Feb 13 07:15:29.079663 systemd[1]: Reached target nss-lookup.target. Feb 13 07:15:29.133981 systemd[1]: Starting dracut-cmdline-ask.service... Feb 13 07:15:29.143281 systemd[1]: Starting systemd-sysctl.service... Feb 13 07:15:29.143680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 07:15:29.146643 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 07:15:29.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:29.147247 systemd[1]: Finished systemd-sysctl.service. Feb 13 07:15:29.195575 kernel: audit: type=1130 audit(1707808529.145:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:29.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:29.207733 systemd[1]: Finished dracut-cmdline-ask.service. Feb 13 07:15:29.273471 kernel: audit: type=1130 audit(1707808529.207:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:29.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:29.265036 systemd[1]: Starting dracut-cmdline.service... Feb 13 07:15:29.288487 dracut-cmdline[292]: dracut-dracut-053 Feb 13 07:15:29.288487 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 13 07:15:29.288487 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 07:15:29.357451 kernel: Loading iSCSI transport class v2.0-870. Feb 13 07:15:29.357464 kernel: iscsi: registered transport (tcp) Feb 13 07:15:29.403036 kernel: iscsi: registered transport (qla4xxx) Feb 13 07:15:29.403054 kernel: QLogic iSCSI HBA Driver Feb 13 07:15:29.419095 systemd[1]: Finished dracut-cmdline.service. 
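dracut-cmdline above re-reads the kernel command line that was already printed at boot (BOOT_IMAGE, mount.usr, verity.usrhash, and so on). As a rough illustration of what that parsing amounts to, the sketch below splits a space-separated command line into key=value options and bare flags; the option names are taken from the log, but the parser itself is only a sketch, not dracut's code.

# Illustrative sketch: split a kernel command line (as echoed by dracut above)
# into key=value options and bare flags. Not the actual dracut parser.
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected "
           "flatcar.oem.id=packet flatcar.autologin root=LABEL=ROOT")

options = {}   # key -> list of values (console= appears twice on this line)
flags = []     # bare switches with no '=' (e.g. flatcar.autologin)
for token in cmdline.split():
    if "=" in token:
        key, _, value = token.partition("=")
        options.setdefault(key, []).append(value)
    else:
        flags.append(token)

print(options["console"])   # ['tty0', 'ttyS1,115200n8']
print(options["root"])      # ['LABEL=ROOT']
print(flags)                # ['flatcar.autologin']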
Feb 13 07:15:29.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:29.429147 systemd[1]: Starting dracut-pre-udev.service... Feb 13 07:15:29.484377 kernel: raid6: avx2x4 gen() 46772 MB/s Feb 13 07:15:29.520448 kernel: raid6: avx2x4 xor() 20698 MB/s Feb 13 07:15:29.555445 kernel: raid6: avx2x2 gen() 52429 MB/s Feb 13 07:15:29.590442 kernel: raid6: avx2x2 xor() 32135 MB/s Feb 13 07:15:29.625407 kernel: raid6: avx2x1 gen() 45275 MB/s Feb 13 07:15:29.659377 kernel: raid6: avx2x1 xor() 27917 MB/s Feb 13 07:15:29.693411 kernel: raid6: sse2x4 gen() 21366 MB/s Feb 13 07:15:29.727445 kernel: raid6: sse2x4 xor() 11972 MB/s Feb 13 07:15:29.761445 kernel: raid6: sse2x2 gen() 21676 MB/s Feb 13 07:15:29.795445 kernel: raid6: sse2x2 xor() 13450 MB/s Feb 13 07:15:29.829406 kernel: raid6: sse2x1 gen() 18304 MB/s Feb 13 07:15:29.880695 kernel: raid6: sse2x1 xor() 8926 MB/s Feb 13 07:15:29.880709 kernel: raid6: using algorithm avx2x2 gen() 52429 MB/s Feb 13 07:15:29.880717 kernel: raid6: .... xor() 32135 MB/s, rmw enabled Feb 13 07:15:29.898591 kernel: raid6: using avx2x2 recovery algorithm Feb 13 07:15:29.944430 kernel: xor: automatically using best checksumming function avx Feb 13 07:15:30.022407 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 07:15:30.027734 systemd[1]: Finished dracut-pre-udev.service. Feb 13 07:15:30.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:30.037000 audit: BPF prog-id=7 op=LOAD Feb 13 07:15:30.037000 audit: BPF prog-id=8 op=LOAD Feb 13 07:15:30.038254 systemd[1]: Starting systemd-udevd.service... Feb 13 07:15:30.045995 systemd-udevd[473]: Using default interface naming scheme 'v252'. Feb 13 07:15:30.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:30.053676 systemd[1]: Started systemd-udevd.service. Feb 13 07:15:30.096507 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Feb 13 07:15:30.070026 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 07:15:30.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:30.100376 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 07:15:30.115796 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 07:15:30.166161 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 07:15:30.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:30.194389 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 07:15:30.204402 kernel: libata version 3.00 loaded. Feb 13 07:15:30.204449 kernel: AVX2 version of gcm_enc/dec engaged. 
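The raid6 lines above are a benchmark: the kernel times each gen()/xor() implementation and keeps the one with the highest gen() throughput, which is why avx2x2 (52429 MB/s) wins here. The sketch below reproduces only that selection step from the logged numbers; it is an illustration, not kernel code.

# Illustrative sketch: pick the raid6 algorithm with the highest measured gen()
# throughput, using the numbers the kernel printed above (MB/s).
gen_mbps = {
    "avx2x4": 46772, "avx2x2": 52429, "avx2x1": 45275,
    "sse2x4": 21366, "sse2x2": 21676, "sse2x1": 18304,
}

best = max(gen_mbps, key=gen_mbps.get)
print(f"raid6: using algorithm {best} gen() {gen_mbps[best]} MB/s")
# -> raid6: using algorithm avx2x2 gen() 52429 MB/s, matching the kernel's choice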
Feb 13 07:15:30.238355 kernel: AES CTR mode by8 optimization enabled Feb 13 07:15:30.238398 kernel: ACPI: bus type USB registered Feb 13 07:15:30.255376 kernel: usbcore: registered new interface driver usbfs Feb 13 07:15:30.290009 kernel: usbcore: registered new interface driver hub Feb 13 07:15:30.290024 kernel: usbcore: registered new device driver usb Feb 13 07:15:30.342911 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 07:15:30.342936 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 07:15:30.344379 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 07:15:30.354430 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 07:15:30.354508 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 07:15:30.354563 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 07:15:30.354613 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 07:15:30.379377 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 07:15:30.379486 kernel: scsi host0: ahci Feb 13 07:15:30.379582 kernel: scsi host1: ahci Feb 13 07:15:30.379659 kernel: scsi host2: ahci Feb 13 07:15:30.379715 kernel: scsi host3: ahci Feb 13 07:15:30.379773 kernel: scsi host4: ahci Feb 13 07:15:30.379849 kernel: scsi host5: ahci Feb 13 07:15:30.379909 kernel: scsi host6: ahci Feb 13 07:15:30.379960 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Feb 13 07:15:30.379968 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Feb 13 07:15:30.379974 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Feb 13 07:15:30.379980 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Feb 13 07:15:30.379989 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Feb 13 07:15:30.379997 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Feb 13 07:15:30.380006 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Feb 13 07:15:30.384384 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 07:15:30.384454 kernel: pps pps0: new PPS source ptp0 Feb 13 07:15:30.384517 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 07:15:30.384573 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 07:15:30.384624 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b6 Feb 13 07:15:30.384672 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 07:15:30.384722 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 07:15:30.440431 kernel: pps pps1: new PPS source ptp1 Feb 13 07:15:30.466159 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 07:15:30.466231 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 07:15:30.490651 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 07:15:30.490719 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 07:15:30.490772 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 07:15:30.502197 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b7 Feb 13 07:15:30.513295 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 07:15:30.539184 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 07:15:30.553944 kernel: hub 1-0:1.0: USB hub found Feb 13 07:15:30.554048 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 07:15:30.680445 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 07:15:30.680519 kernel: hub 1-0:1.0: 16 ports detected Feb 13 07:15:30.692380 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 07:15:30.695380 kernel: hub 2-0:1.0: USB hub found Feb 13 07:15:30.695573 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 07:15:30.706407 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 07:15:30.731722 kernel: hub 2-0:1.0: 10 ports detected Feb 13 07:15:30.731797 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 07:15:30.756417 kernel: usb: port power management may be unreliable Feb 13 07:15:30.756432 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 07:15:30.897432 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 07:15:30.897503 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 07:15:30.925417 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 07:15:30.925488 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 07:15:30.934445 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 07:15:30.951216 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 07:15:30.951285 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 07:15:31.073401 kernel: hub 1-14:1.0: USB hub found Feb 13 07:15:31.073500 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 07:15:31.107461 kernel: hub 1-14:1.0: 4 ports detected Feb 13 07:15:31.121054 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 07:15:31.200640 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 07:15:31.200656 kernel: ata1.00: Features: NCQ-prio Feb 13 07:15:31.233040 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 07:15:31.233055 kernel: ata2.00: Features: NCQ-prio Feb 13 07:15:31.251425 kernel: ata1.00: configured for UDMA/133 Feb 13 07:15:31.251443 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 07:15:31.251507 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 07:15:31.271376 kernel: ata2.00: configured for UDMA/133 Feb 13 07:15:31.275375 kernel: port_module: 9 callbacks suppressed Feb 13 07:15:31.275390 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, 
Cable plugged Feb 13 07:15:31.290397 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 07:15:31.360434 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 07:15:31.397380 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 07:15:31.409429 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:15:31.409445 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 07:15:31.409566 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 07:15:31.419092 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:15:31.419125 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 07:15:31.419251 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 07:15:31.419313 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 07:15:31.419369 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 07:15:31.419483 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 07:15:31.419570 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 07:15:31.419649 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:15:31.419657 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:15:31.419663 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 07:15:31.507335 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 07:15:31.507546 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 07:15:31.507712 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 07:15:31.626416 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 07:15:31.626467 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 07:15:31.674387 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 07:15:31.711119 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:15:31.746476 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 07:15:31.746507 kernel: GPT:9289727 != 937703087 Feb 13 07:15:31.746521 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 07:15:31.763287 kernel: GPT:9289727 != 937703087 Feb 13 07:15:31.777465 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 07:15:31.808738 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:15:31.809429 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:15:31.824326 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 07:15:31.858379 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 13 07:15:31.888757 kernel: usbcore: registered new interface driver usbhid Feb 13 07:15:31.888797 kernel: usbhid: USB HID core driver Feb 13 07:15:31.895264 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 07:15:31.960644 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 13 07:15:31.960766 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (554) Feb 13 07:15:31.960775 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 07:15:31.929068 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 13 07:15:31.972899 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 13 07:15:31.991604 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
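The GPT warnings above are an arithmetic mismatch: the alternate (backup) GPT header belongs on the disk's last LBA, which for a 937703088-sector disk is 937703087, but the primary header still points at LBA 9289727, the usual situation when a smaller disk image has been written onto a larger disk. The disk-uuid step further down rewrites the headers ("Secondary Header is updated"). A minimal sketch of the check, using the figures from the log:

# Illustrative arithmetic behind the GPT warning above: the alternate (backup)
# GPT header is expected on the disk's last LBA, i.e. sector_count - 1.
sector_count = 937703088           # from "sd 0:0:0:0: [sda] 937703088 512-byte logical blocks"
expected_alt_lba = sector_count - 1
recorded_alt_lba = 9289727         # what the primary header currently points at

print(expected_alt_lba)                       # 937703087
print(recorded_alt_lba != expected_alt_lba)   # True -> "GPT:9289727 != 937703087"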
Feb 13 07:15:32.035414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 07:15:32.140485 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 07:15:32.140571 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 07:15:32.140580 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 07:15:32.140641 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:15:32.128710 systemd[1]: Starting disk-uuid.service... Feb 13 07:15:32.181497 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:15:32.181508 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:15:32.181549 disk-uuid[690]: Primary Header is updated. Feb 13 07:15:32.181549 disk-uuid[690]: Secondary Entries is updated. Feb 13 07:15:32.181549 disk-uuid[690]: Secondary Header is updated. Feb 13 07:15:32.239450 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:15:32.239461 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:15:32.239468 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:15:33.225640 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:15:33.244409 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:15:33.244830 disk-uuid[691]: The operation has completed successfully. Feb 13 07:15:33.285432 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 07:15:33.380436 kernel: audit: type=1130 audit(1707808533.292:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.380451 kernel: audit: type=1131 audit(1707808533.292:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.285477 systemd[1]: Finished disk-uuid.service. Feb 13 07:15:33.409457 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 07:15:33.298219 systemd[1]: Starting verity-setup.service... Feb 13 07:15:33.446179 systemd[1]: Found device dev-mapper-usr.device. Feb 13 07:15:33.455830 systemd[1]: Mounting sysusr-usr.mount... Feb 13 07:15:33.468566 systemd[1]: Finished verity-setup.service. Feb 13 07:15:33.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.530378 kernel: audit: type=1130 audit(1707808533.482:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.557415 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 13 07:15:33.557731 systemd[1]: Mounted sysusr-usr.mount. 
Feb 13 07:15:33.564669 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 13 07:15:33.565054 systemd[1]: Starting ignition-setup.service... Feb 13 07:15:33.668504 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:15:33.668518 kernel: BTRFS info (device sda6): using free space tree Feb 13 07:15:33.668525 kernel: BTRFS info (device sda6): has skinny extents Feb 13 07:15:33.668531 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 07:15:33.661872 systemd[1]: Starting parse-ip-for-networkd.service... Feb 13 07:15:33.677763 systemd[1]: Finished ignition-setup.service. Feb 13 07:15:33.743416 kernel: audit: type=1130 audit(1707808533.694:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.695932 systemd[1]: Starting ignition-fetch-offline.service... Feb 13 07:15:33.751585 systemd[1]: Finished parse-ip-for-networkd.service. Feb 13 07:15:33.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.808000 audit: BPF prog-id=9 op=LOAD Feb 13 07:15:33.810452 systemd[1]: Starting systemd-networkd.service... Feb 13 07:15:33.831413 kernel: audit: type=1130 audit(1707808533.758:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.831433 kernel: audit: type=1334 audit(1707808533.808:24): prog-id=9 op=LOAD Feb 13 07:15:33.821237 ignition[865]: Ignition 2.14.0 Feb 13 07:15:33.821241 ignition[865]: Stage: fetch-offline Feb 13 07:15:33.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.909390 kernel: audit: type=1130 audit(1707808533.857:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.840059 unknown[865]: fetched base config from "system" Feb 13 07:15:33.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.821265 ignition[865]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:15:33.986479 kernel: audit: type=1130 audit(1707808533.917:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.840063 unknown[865]: fetched user config from "system" Feb 13 07:15:33.821278 ignition[865]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:15:33.844645 systemd[1]: Finished ignition-fetch-offline.service. 
Feb 13 07:15:34.038487 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 07:15:34.038570 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Feb 13 07:15:33.824269 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:15:34.105565 kernel: audit: type=1130 audit(1707808534.053:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:34.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.844862 systemd-networkd[879]: lo: Link UP Feb 13 07:15:33.824331 ignition[865]: parsed url from cmdline: "" Feb 13 07:15:33.844864 systemd-networkd[879]: lo: Gained carrier Feb 13 07:15:34.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.824333 ignition[865]: no config URL provided Feb 13 07:15:34.149553 iscsid[907]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 07:15:34.149553 iscsid[907]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 07:15:34.149553 iscsid[907]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 13 07:15:34.149553 iscsid[907]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 13 07:15:34.149553 iscsid[907]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 13 07:15:34.149553 iscsid[907]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 07:15:34.149553 iscsid[907]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 07:15:34.311438 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 07:15:34.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.845205 systemd-networkd[879]: Enumeration completed Feb 13 07:15:34.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:33.824336 ignition[865]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 07:15:33.846018 systemd-networkd[879]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:15:33.828755 ignition[865]: parsing config with SHA512: 3c6d070b130ac604ed962124ab47dbd219c87babd9e7efd0377f083847af7bd5ade8a74cd64101cfc7445b76dcd2995acbbb2ab388936b03945da6c27df6c626 Feb 13 07:15:33.859459 systemd[1]: Started systemd-networkd.service. Feb 13 07:15:33.840322 ignition[865]: fetch-offline: fetch-offline passed Feb 13 07:15:33.917645 systemd[1]: Reached target network.target. 
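The iscsid warnings above are harmless in this boot (no software iSCSI session is being set up), but the message itself describes the fix: create /etc/iscsi/initiatorname.iscsi containing an InitiatorName= line in iqn form. The sketch below writes such a file; the path and format come straight from the logged message, the name is the placeholder example iscsid itself suggests, and running anything like this requires root and a name chosen for your own host.

# Illustrative sketch of the fix iscsid asks for above: write an InitiatorName
# file. Replace the placeholder (taken from the log's own example) with a real
# iqn for your host; only relevant if software iSCSI is actually used.
initiator_name = "iqn.2001-04.com.redhat:fc6"   # placeholder from the logged example

with open("/etc/iscsi/initiatorname.iscsi", "w") as f:
    f.write(f"InitiatorName={initiator_name}\n")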
Feb 13 07:15:33.840324 ignition[865]: POST message to Packet Timeline Feb 13 07:15:33.976584 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 07:15:33.840329 ignition[865]: POST Status error: resource requires networking Feb 13 07:15:33.977014 systemd[1]: Starting ignition-kargs.service... Feb 13 07:15:33.840358 ignition[865]: Ignition finished successfully Feb 13 07:15:33.993870 systemd[1]: Starting iscsiuio.service... Feb 13 07:15:33.981781 ignition[892]: Ignition 2.14.0 Feb 13 07:15:34.012529 systemd[1]: Started iscsiuio.service. Feb 13 07:15:33.981785 ignition[892]: Stage: kargs Feb 13 07:15:34.023562 systemd-networkd[879]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:15:33.981841 ignition[892]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:15:34.054094 systemd[1]: Starting iscsid.service... Feb 13 07:15:33.981851 ignition[892]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:15:34.112635 systemd[1]: Started iscsid.service. Feb 13 07:15:33.983213 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:15:34.128175 systemd[1]: Starting dracut-initqueue.service... Feb 13 07:15:33.984684 ignition[892]: kargs: kargs passed Feb 13 07:15:34.142567 systemd[1]: Finished dracut-initqueue.service. Feb 13 07:15:33.984687 ignition[892]: POST message to Packet Timeline Feb 13 07:15:34.157666 systemd[1]: Reached target remote-fs-pre.target. Feb 13 07:15:33.984697 ignition[892]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:15:34.180491 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 07:15:33.987668 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51976->[::1]:53: read: connection refused Feb 13 07:15:34.214469 systemd[1]: Reached target remote-fs.target. Feb 13 07:15:34.188307 ignition[892]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 07:15:34.254226 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:15:34.188624 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58625->[::1]:53: read: connection refused Feb 13 07:15:34.273019 systemd[1]: Starting dracut-pre-mount.service... Feb 13 07:15:34.589090 ignition[892]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 07:15:34.282971 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:15:34.590195 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60180->[::1]:53: read: connection refused Feb 13 07:15:34.300675 systemd[1]: Finished dracut-pre-mount.service. 
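The repeated "GET https://metadata.packet.net/metadata" failures above happen because the kargs stage starts before networking is up, so DNS queries go to [::1]:53 and fail; the timestamps show the wait between attempts roughly doubling until the NIC gets an address further down in the log and a later attempt succeeds. The sketch below shows the same fetch-with-retry pattern in generic form; it is not Ignition's implementation, and only the URL is taken from the log.

# Generic sketch of the fetch-with-retry pattern visible above: keep
# re-requesting the metadata endpoint, backing off between attempts,
# until networking is up and the GET succeeds.
import time
import urllib.request

URL = "https://metadata.packet.net/metadata"   # endpoint from the log

def fetch_with_retry(url, attempts=6, first_delay=0.2):
    delay = first_delay
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:                  # DNS and connection errors land here
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
            delay *= 2                          # back off, as the timestamps suggest
    raise RuntimeError(f"giving up on {url} after {attempts} attempts")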
Feb 13 07:15:34.312255 systemd-networkd[879]: enp1s0f1np1: Link UP Feb 13 07:15:34.312356 systemd-networkd[879]: enp1s0f1np1: Gained carrier Feb 13 07:15:34.329612 systemd-networkd[879]: enp1s0f0np0: Link UP Feb 13 07:15:34.329734 systemd-networkd[879]: eno2: Link UP Feb 13 07:15:34.329834 systemd-networkd[879]: eno1: Link UP Feb 13 07:15:35.059173 systemd-networkd[879]: enp1s0f0np0: Gained carrier Feb 13 07:15:35.067487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Feb 13 07:15:35.100598 systemd-networkd[879]: enp1s0f0np0: DHCPv4 address 139.178.90.101/31, gateway 139.178.90.100 acquired from 145.40.83.140 Feb 13 07:15:35.390626 ignition[892]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 07:15:35.391948 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33080->[::1]:53: read: connection refused Feb 13 07:15:35.483660 systemd-networkd[879]: enp1s0f1np1: Gained IPv6LL Feb 13 07:15:36.315968 systemd-networkd[879]: enp1s0f0np0: Gained IPv6LL Feb 13 07:15:36.993728 ignition[892]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 07:15:36.994949 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46001->[::1]:53: read: connection refused Feb 13 07:15:40.198405 ignition[892]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 07:15:40.238962 ignition[892]: GET result: OK Feb 13 07:15:40.446330 ignition[892]: Ignition finished successfully Feb 13 07:15:40.450666 systemd[1]: Finished ignition-kargs.service. Feb 13 07:15:40.539439 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 13 07:15:40.539469 kernel: audit: type=1130 audit(1707808540.461:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:40.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:40.470273 ignition[924]: Ignition 2.14.0 Feb 13 07:15:40.463616 systemd[1]: Starting ignition-disks.service... Feb 13 07:15:40.470277 ignition[924]: Stage: disks Feb 13 07:15:40.470332 ignition[924]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:15:40.470341 ignition[924]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:15:40.471824 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:15:40.473400 ignition[924]: disks: disks passed Feb 13 07:15:40.473403 ignition[924]: POST message to Packet Timeline Feb 13 07:15:40.473414 ignition[924]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:15:40.508113 ignition[924]: GET result: OK Feb 13 07:15:40.689427 ignition[924]: Ignition finished successfully Feb 13 07:15:40.692532 systemd[1]: Finished ignition-disks.service. Feb 13 07:15:40.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:40.706929 systemd[1]: Reached target initrd-root-device.target. 
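The DHCPv4 lease above (139.178.90.101/31 with gateway 139.178.90.100) is a point-to-point /31: the subnet holds exactly two addresses, the host and its gateway, with no separate network or broadcast address. A quick check with Python's ipaddress module, illustrative only:

# The /31 from the lease above contains exactly the host and its gateway.
import ipaddress

iface = ipaddress.ip_interface("139.178.90.101/31")
print(list(iface.network.hosts()))
# -> [IPv4Address('139.178.90.100'), IPv4Address('139.178.90.101')]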
Feb 13 07:15:40.794571 kernel: audit: type=1130 audit(1707808540.706:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:40.779563 systemd[1]: Reached target local-fs-pre.target. Feb 13 07:15:40.779602 systemd[1]: Reached target local-fs.target. Feb 13 07:15:40.802587 systemd[1]: Reached target sysinit.target. Feb 13 07:15:40.817591 systemd[1]: Reached target basic.target. Feb 13 07:15:40.831368 systemd[1]: Starting systemd-fsck-root.service... Feb 13 07:15:40.854645 systemd-fsck[941]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 13 07:15:40.865960 systemd[1]: Finished systemd-fsck-root.service. Feb 13 07:15:40.957963 kernel: audit: type=1130 audit(1707808540.873:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:40.957978 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 07:15:40.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:40.876405 systemd[1]: Mounting sysroot.mount... Feb 13 07:15:40.966058 systemd[1]: Mounted sysroot.mount. Feb 13 07:15:40.982643 systemd[1]: Reached target initrd-root-fs.target. Feb 13 07:15:40.989266 systemd[1]: Mounting sysroot-usr.mount... Feb 13 07:15:41.014225 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 13 07:15:41.022942 systemd[1]: Starting flatcar-static-network.service... Feb 13 07:15:41.038619 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 07:15:41.038711 systemd[1]: Reached target ignition-diskful.target. Feb 13 07:15:41.058445 systemd[1]: Mounted sysroot-usr.mount. Feb 13 07:15:41.080658 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 07:15:41.214844 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (952) Feb 13 07:15:41.214860 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:15:41.214869 kernel: BTRFS info (device sda6): using free space tree Feb 13 07:15:41.214880 kernel: BTRFS info (device sda6): has skinny extents Feb 13 07:15:41.214887 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 07:15:41.092941 systemd[1]: Starting initrd-setup-root.service... Feb 13 07:15:41.275656 kernel: audit: type=1130 audit(1707808541.222:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:41.275767 coreos-metadata[948]: Feb 13 07:15:41.140 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:15:41.275767 coreos-metadata[948]: Feb 13 07:15:41.162 INFO Fetch successful Feb 13 07:15:41.275767 coreos-metadata[948]: Feb 13 07:15:41.180 INFO wrote hostname ci-3510.3.2-a-596fb49211 to /sysroot/etc/hostname Feb 13 07:15:41.482631 kernel: audit: type=1130 audit(1707808541.284:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.482645 kernel: audit: type=1130 audit(1707808541.347:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.482653 kernel: audit: type=1131 audit(1707808541.347:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.482790 coreos-metadata[949]: Feb 13 07:15:41.140 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:15:41.482790 coreos-metadata[949]: Feb 13 07:15:41.163 INFO Fetch successful Feb 13 07:15:41.517481 initrd-setup-root[959]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 07:15:41.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.137632 systemd[1]: Finished initrd-setup-root.service. Feb 13 07:15:41.589608 kernel: audit: type=1130 audit(1707808541.524:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.589660 initrd-setup-root[967]: cut: /sysroot/etc/group: No such file or directory Feb 13 07:15:41.223708 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 13 07:15:41.610649 initrd-setup-root[975]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 07:15:41.284687 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 07:15:41.632653 initrd-setup-root[983]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 07:15:41.284725 systemd[1]: Finished flatcar-static-network.service. 
Feb 13 07:15:41.650650 ignition[1023]: INFO : Ignition 2.14.0 Feb 13 07:15:41.650650 ignition[1023]: INFO : Stage: mount Feb 13 07:15:41.650650 ignition[1023]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:15:41.650650 ignition[1023]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:15:41.650650 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:15:41.650650 ignition[1023]: INFO : mount: mount passed Feb 13 07:15:41.650650 ignition[1023]: INFO : POST message to Packet Timeline Feb 13 07:15:41.650650 ignition[1023]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:15:41.650650 ignition[1023]: INFO : GET result: OK Feb 13 07:15:41.347627 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 13 07:15:41.468961 systemd[1]: Starting ignition-mount.service... Feb 13 07:15:41.489931 systemd[1]: Starting sysroot-boot.service... Feb 13 07:15:41.509995 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 13 07:15:41.510047 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 13 07:15:41.783641 ignition[1023]: INFO : Ignition finished successfully Feb 13 07:15:41.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.850468 kernel: audit: type=1130 audit(1707808541.790:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:41.513474 systemd[1]: Finished sysroot-boot.service. Feb 13 07:15:41.775503 systemd[1]: Finished ignition-mount.service. Feb 13 07:15:41.793466 systemd[1]: Starting ignition-files.service... Feb 13 07:15:41.957458 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1040) Feb 13 07:15:41.957469 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:15:41.957477 kernel: BTRFS info (device sda6): using free space tree Feb 13 07:15:41.957483 kernel: BTRFS info (device sda6): has skinny extents Feb 13 07:15:41.957490 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 07:15:41.859127 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 07:15:41.993414 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
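The Ignition files stage that follows downloads the CNI plugins, crictl, kubeadm and kubelet and checks each one against an expected SHA512 digest (its "file matches expected sum of" lines below). The sketch here shows that kind of check in generic form, using the kubeadm URL and digest reported later in the log; it is not Ignition's implementation.

# Generic sketch of the check the files stage performs below: download an
# artifact and compare its SHA512 digest with the expected value.
import hashlib
import urllib.request

URL = "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm"
EXPECTED_SHA512 = ("1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051"
                   "ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660")

def verify_download(url, expected_hex):
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp:
        for chunk in iter(lambda: resp.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != expected_hex:
        raise ValueError("file does not match expected sum")
    return True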
Feb 13 07:15:42.018542 ignition[1059]: INFO : Ignition 2.14.0 Feb 13 07:15:42.018542 ignition[1059]: INFO : Stage: files Feb 13 07:15:42.032620 ignition[1059]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:15:42.032620 ignition[1059]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:15:42.032620 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:15:42.032620 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping Feb 13 07:15:42.032620 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 07:15:42.032620 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 07:15:42.032620 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 07:15:42.032620 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 07:15:42.032620 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 07:15:42.032620 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 13 07:15:42.032620 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 13 07:15:42.025344 unknown[1059]: wrote ssh authorized keys file for user: core Feb 13 07:15:42.600214 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 07:15:42.747117 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 13 07:15:42.747117 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 13 07:15:42.790593 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 13 07:15:42.790593 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 13 07:15:43.173749 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 07:15:43.244772 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 13 07:15:43.269627 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 13 07:15:43.269627 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 13 07:15:43.269627 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 13 07:15:43.347341 ignition[1059]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 13 07:15:45.137514 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 13 07:15:45.162698 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 13 07:15:45.162698 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 13 07:15:45.162698 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 13 07:15:45.211597 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 07:15:50.895067 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2423141572" Feb 13 07:15:50.929466 ignition[1059]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2423141572": device or resource busy Feb 13 07:15:50.929466 ignition[1059]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2423141572", trying btrfs: device or resource busy Feb 13 07:15:50.929466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2423141572" Feb 13 07:15:51.155657 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1081) Feb 13 07:15:51.155758 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2423141572" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem2423141572" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem2423141572" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(f): [started] processing unit "packet-phone-home.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(f): [finished] processing unit "packet-phone-home.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(14): [started] setting preset to enabled for "prepare-critools.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service" Feb 13 07:15:51.155758 ignition[1059]: INFO : files: op(15): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 07:15:51.669590 kernel: audit: type=1130 audit(1707808551.334:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.669608 kernel: audit: type=1130 audit(1707808551.463:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.669616 kernel: audit: type=1130 audit(1707808551.531:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.669623 kernel: audit: type=1131 audit(1707808551.531:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 13 07:15:51.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.669777 ignition[1059]: INFO : files: op(15): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 07:15:51.669777 ignition[1059]: INFO : files: op(16): [started] setting preset to enabled for "packet-phone-home.service" Feb 13 07:15:51.669777 ignition[1059]: INFO : files: op(16): [finished] setting preset to enabled for "packet-phone-home.service" Feb 13 07:15:51.669777 ignition[1059]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 07:15:51.669777 ignition[1059]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 07:15:51.669777 ignition[1059]: INFO : files: createResultFile: createFiles: op(18): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 07:15:51.669777 ignition[1059]: INFO : files: createResultFile: createFiles: op(18): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 07:15:51.669777 ignition[1059]: INFO : files: files passed Feb 13 07:15:51.669777 ignition[1059]: INFO : POST message to Packet Timeline Feb 13 07:15:51.669777 ignition[1059]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:15:51.669777 ignition[1059]: INFO : GET result: OK Feb 13 07:15:51.669777 ignition[1059]: INFO : Ignition finished successfully Feb 13 07:15:52.015595 kernel: audit: type=1130 audit(1707808551.709:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.015728 kernel: audit: type=1131 audit(1707808551.709:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.015743 kernel: audit: type=1130 audit(1707808551.888:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:51.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.314871 systemd[1]: Finished ignition-files.service. Feb 13 07:15:52.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.342221 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 13 07:15:52.102623 kernel: audit: type=1131 audit(1707808552.023:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.102641 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 07:15:51.403598 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 13 07:15:51.403968 systemd[1]: Starting ignition-quench.service... Feb 13 07:15:51.443814 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 13 07:15:51.464864 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 07:15:51.464952 systemd[1]: Finished ignition-quench.service. Feb 13 07:15:51.532559 systemd[1]: Reached target ignition-complete.target. Feb 13 07:15:51.655966 systemd[1]: Starting initrd-parse-etc.service... Feb 13 07:15:51.682509 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 07:15:51.682551 systemd[1]: Finished initrd-parse-etc.service. Feb 13 07:15:51.710551 systemd[1]: Reached target initrd-fs.target. Feb 13 07:15:52.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.833601 systemd[1]: Reached target initrd.target. Feb 13 07:15:52.375611 kernel: audit: type=1131 audit(1707808552.289:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.833658 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 13 07:15:52.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.833999 systemd[1]: Starting dracut-pre-pivot.service... Feb 13 07:15:52.462639 kernel: audit: type=1131 audit(1707808552.384:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:51.872740 systemd[1]: Finished dracut-pre-pivot.service. Feb 13 07:15:51.889333 systemd[1]: Starting initrd-cleanup.service... Feb 13 07:15:51.958318 systemd[1]: Stopped target nss-lookup.target. Feb 13 07:15:51.976623 systemd[1]: Stopped target remote-cryptsetup.target. 
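The Ignition files-stage entries above (op(a) through op(18)) write unit files under /sysroot/etc/systemd/system and mark their presets enabled before POSTing the result to the Packet timeline. For orientation, a minimal Butane sketch that would produce comparable "writing unit" / "setting preset to enabled" operations is shown below; the variant/version and the unit body are assumptions for illustration, not the config actually supplied to this host.

    # Illustrative Butane config (compiled to Ignition JSON with the butane tool);
    # the unit body is a placeholder, not recovered from this log.
    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: prepare-cni-plugins.service   # written by op(10)/op(11) above
          enabled: true                       # "setting preset to enabled", op(17)
          contents: |
            [Unit]
            Description=Example unit body (placeholder)
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/true
            [Install]
            WantedBy=multi-user.target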
Feb 13 07:15:51.986752 systemd[1]: Stopped target timers.target. Feb 13 07:15:52.000767 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 07:15:52.000867 systemd[1]: Stopped dracut-pre-pivot.service. Feb 13 07:15:52.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.023847 systemd[1]: Stopped target initrd.target. Feb 13 07:15:52.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.092687 systemd[1]: Stopped target basic.target. Feb 13 07:15:52.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.109649 systemd[1]: Stopped target ignition-complete.target. Feb 13 07:15:52.606498 ignition[1108]: INFO : Ignition 2.14.0 Feb 13 07:15:52.606498 ignition[1108]: INFO : Stage: umount Feb 13 07:15:52.606498 ignition[1108]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:15:52.606498 ignition[1108]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:15:52.606498 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:15:52.606498 ignition[1108]: INFO : umount: umount passed Feb 13 07:15:52.606498 ignition[1108]: INFO : POST message to Packet Timeline Feb 13 07:15:52.606498 ignition[1108]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:15:52.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.132669 systemd[1]: Stopped target ignition-diskful.target. Feb 13 07:15:52.758804 ignition[1108]: INFO : GET result: OK Feb 13 07:15:52.157707 systemd[1]: Stopped target initrd-root-device.target. Feb 13 07:15:52.172818 systemd[1]: Stopped target remote-fs.target. Feb 13 07:15:52.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.190939 systemd[1]: Stopped target remote-fs-pre.target. Feb 13 07:15:52.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:52.207998 systemd[1]: Stopped target sysinit.target. Feb 13 07:15:52.225096 systemd[1]: Stopped target local-fs.target. Feb 13 07:15:52.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.240957 systemd[1]: Stopped target local-fs-pre.target. Feb 13 07:15:52.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.846000 audit: BPF prog-id=6 op=UNLOAD Feb 13 07:15:52.853605 ignition[1108]: INFO : Ignition finished successfully Feb 13 07:15:52.257958 systemd[1]: Stopped target swap.target. Feb 13 07:15:52.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.273826 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 07:15:52.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.274186 systemd[1]: Stopped dracut-pre-mount.service. Feb 13 07:15:52.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.291191 systemd[1]: Stopped target cryptsetup.target. Feb 13 07:15:52.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.368662 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 07:15:52.368760 systemd[1]: Stopped dracut-initqueue.service. Feb 13 07:15:52.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.384750 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 07:15:52.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.384824 systemd[1]: Stopped ignition-fetch-offline.service. Feb 13 07:15:52.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.453774 systemd[1]: Stopped target paths.target. Feb 13 07:15:52.469659 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 07:15:53.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:52.473606 systemd[1]: Stopped systemd-ask-password-console.path. Feb 13 07:15:52.490666 systemd[1]: Stopped target slices.target. Feb 13 07:15:52.504665 systemd[1]: Stopped target sockets.target. Feb 13 07:15:53.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.521808 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 07:15:53.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.521910 systemd[1]: Closed iscsid.socket. Feb 13 07:15:53.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.535861 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 07:15:52.536103 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 13 07:15:53.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.555172 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 07:15:53.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.555539 systemd[1]: Stopped ignition-files.service. Feb 13 07:15:53.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.570031 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 07:15:53.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:53.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.570398 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 13 07:15:52.587141 systemd[1]: Stopping ignition-mount.service... Feb 13 07:15:52.598605 systemd[1]: Stopping iscsiuio.service... Feb 13 07:15:52.614040 systemd[1]: Stopping sysroot-boot.service... Feb 13 07:15:52.627472 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 07:15:52.627589 systemd[1]: Stopped systemd-udev-trigger.service. Feb 13 07:15:52.654963 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 07:15:52.655194 systemd[1]: Stopped dracut-pre-trigger.service. Feb 13 07:15:52.690645 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 07:15:52.692752 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 07:15:52.692985 systemd[1]: Stopped iscsiuio.service. Feb 13 07:15:52.702719 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Feb 13 07:15:53.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:52.702937 systemd[1]: Stopped sysroot-boot.service. Feb 13 07:15:52.718002 systemd[1]: Stopped target network.target. Feb 13 07:15:52.725888 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 07:15:52.725990 systemd[1]: Closed iscsiuio.socket. Feb 13 07:15:53.368381 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Feb 13 07:15:52.751123 systemd[1]: Stopping systemd-networkd.service... Feb 13 07:15:53.368459 iscsid[907]: iscsid shutting down. Feb 13 07:15:52.762527 systemd-networkd[879]: enp1s0f1np1: DHCPv6 lease lost Feb 13 07:15:52.765895 systemd[1]: Stopping systemd-resolved.service... Feb 13 07:15:52.769498 systemd-networkd[879]: enp1s0f0np0: DHCPv6 lease lost Feb 13 07:15:53.367000 audit: BPF prog-id=9 op=UNLOAD Feb 13 07:15:52.780494 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 07:15:52.780731 systemd[1]: Stopped systemd-resolved.service. Feb 13 07:15:52.797712 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 07:15:52.797962 systemd[1]: Stopped systemd-networkd.service. Feb 13 07:15:52.812181 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 07:15:52.812480 systemd[1]: Finished initrd-cleanup.service. Feb 13 07:15:52.831739 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 07:15:52.831789 systemd[1]: Stopped ignition-mount.service. Feb 13 07:15:52.847233 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 07:15:52.847260 systemd[1]: Closed systemd-networkd.socket. Feb 13 07:15:52.861564 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 07:15:52.861634 systemd[1]: Stopped ignition-disks.service. Feb 13 07:15:52.878776 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 07:15:52.878912 systemd[1]: Stopped ignition-kargs.service. Feb 13 07:15:52.894780 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 07:15:52.894929 systemd[1]: Stopped ignition-setup.service. Feb 13 07:15:52.911774 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 07:15:52.911920 systemd[1]: Stopped initrd-setup-root.service. Feb 13 07:15:52.928460 systemd[1]: Stopping network-cleanup.service... Feb 13 07:15:52.941585 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 07:15:52.941730 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 13 07:15:52.957744 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 07:15:52.957872 systemd[1]: Stopped systemd-sysctl.service. Feb 13 07:15:52.972856 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 07:15:52.972979 systemd[1]: Stopped systemd-modules-load.service. Feb 13 07:15:52.991014 systemd[1]: Stopping systemd-udevd.service... Feb 13 07:15:53.009310 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 07:15:53.010758 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 07:15:53.011073 systemd[1]: Stopped systemd-udevd.service. Feb 13 07:15:53.023468 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 07:15:53.023593 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 07:15:53.038730 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Feb 13 07:15:53.038830 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 07:15:53.054713 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 07:15:53.054882 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 07:15:53.069775 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 07:15:53.069897 systemd[1]: Stopped dracut-cmdline.service. Feb 13 07:15:53.084808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 07:15:53.084977 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 07:15:53.103426 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 07:15:53.117527 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 07:15:53.117553 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 13 07:15:53.135521 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 07:15:53.135553 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 07:15:53.151498 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 07:15:53.151541 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 07:15:53.169010 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 07:15:53.169911 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 07:15:53.170056 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 07:15:53.280157 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 07:15:53.280406 systemd[1]: Stopped network-cleanup.service. Feb 13 07:15:53.288899 systemd[1]: Reached target initrd-switch-root.target. Feb 13 07:15:53.306190 systemd[1]: Starting initrd-switch-root.service... Feb 13 07:15:53.323241 systemd[1]: Switching root. Feb 13 07:15:53.370238 systemd-journald[268]: Journal stopped Feb 13 07:15:57.311483 kernel: SELinux: Class mctp_socket not defined in policy. Feb 13 07:15:57.311496 kernel: SELinux: Class anon_inode not defined in policy. Feb 13 07:15:57.311504 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 13 07:15:57.311510 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 07:15:57.311515 kernel: SELinux: policy capability open_perms=1 Feb 13 07:15:57.311520 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 07:15:57.311525 kernel: SELinux: policy capability always_check_network=0 Feb 13 07:15:57.311531 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 07:15:57.311536 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 07:15:57.311542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 07:15:57.311547 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 07:15:57.311553 systemd[1]: Successfully loaded SELinux policy in 322.507ms. Feb 13 07:15:57.311559 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.907ms. Feb 13 07:15:57.311566 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 07:15:57.311574 systemd[1]: Detected architecture x86-64. Feb 13 07:15:57.311580 systemd[1]: Detected first boot. Feb 13 07:15:57.311585 systemd[1]: Hostname set to . Feb 13 07:15:57.311592 systemd[1]: Initializing machine ID from random generator. 
Feb 13 07:15:57.311597 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 13 07:15:57.311603 systemd[1]: Populated /etc with preset unit settings. Feb 13 07:15:57.311609 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:15:57.311617 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:15:57.311624 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:15:57.311630 systemd[1]: iscsid.service: Deactivated successfully. Feb 13 07:15:57.311636 systemd[1]: Stopped iscsid.service. Feb 13 07:15:57.311641 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 07:15:57.311648 systemd[1]: Stopped initrd-switch-root.service. Feb 13 07:15:57.311654 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 07:15:57.311661 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 13 07:15:57.311667 systemd[1]: Created slice system-addon\x2drun.slice. Feb 13 07:15:57.311673 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 13 07:15:57.311679 systemd[1]: Created slice system-getty.slice. Feb 13 07:15:57.311684 systemd[1]: Created slice system-modprobe.slice. Feb 13 07:15:57.311690 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 13 07:15:57.311696 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 13 07:15:57.311702 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 13 07:15:57.311709 systemd[1]: Created slice user.slice. Feb 13 07:15:57.311715 systemd[1]: Started systemd-ask-password-console.path. Feb 13 07:15:57.311721 systemd[1]: Started systemd-ask-password-wall.path. Feb 13 07:15:57.311727 systemd[1]: Set up automount boot.automount. Feb 13 07:15:57.311735 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 13 07:15:57.311741 systemd[1]: Stopped target initrd-switch-root.target. Feb 13 07:15:57.311747 systemd[1]: Stopped target initrd-fs.target. Feb 13 07:15:57.311753 systemd[1]: Stopped target initrd-root-fs.target. Feb 13 07:15:57.311760 systemd[1]: Reached target integritysetup.target. Feb 13 07:15:57.311767 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 07:15:57.311773 systemd[1]: Reached target remote-fs.target. Feb 13 07:15:57.311779 systemd[1]: Reached target slices.target. Feb 13 07:15:57.311785 systemd[1]: Reached target swap.target. Feb 13 07:15:57.311791 systemd[1]: Reached target torcx.target. Feb 13 07:15:57.311797 systemd[1]: Reached target veritysetup.target. Feb 13 07:15:57.311803 systemd[1]: Listening on systemd-coredump.socket. Feb 13 07:15:57.311811 systemd[1]: Listening on systemd-initctl.socket. Feb 13 07:15:57.311817 systemd[1]: Listening on systemd-networkd.socket. Feb 13 07:15:57.311824 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 07:15:57.311830 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 07:15:57.311836 systemd[1]: Listening on systemd-userdbd.socket. Feb 13 07:15:57.311843 systemd[1]: Mounting dev-hugepages.mount... Feb 13 07:15:57.311850 systemd[1]: Mounting dev-mqueue.mount... Feb 13 07:15:57.311856 systemd[1]: Mounting media.mount... 
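The locksmithd.service warnings above flag cgroup-v1 directives (CPUShares=, MemoryLimit=) that systemd 252 still accepts only for compatibility. The usual fix is a drop-in supplying the cgroup-v2 equivalents named in the messages, rather than editing the vendor unit; a minimal sketch, with a hypothetical path and placeholder values:

    # /etc/systemd/system/locksmithd.service.d/override.conf  (hypothetical; values illustrative)
    [Service]
    # cgroup-v2 replacements for the deprecated CPUShares=/MemoryLimit= settings
    CPUWeight=100
    MemoryMax=128M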
Feb 13 07:15:57.311863 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 07:15:57.311869 systemd[1]: Mounting sys-kernel-debug.mount... Feb 13 07:15:57.311875 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 13 07:15:57.311881 systemd[1]: Mounting tmp.mount... Feb 13 07:15:57.311888 systemd[1]: Starting flatcar-tmpfiles.service... Feb 13 07:15:57.311894 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 13 07:15:57.311900 systemd[1]: Starting kmod-static-nodes.service... Feb 13 07:15:57.311907 systemd[1]: Starting modprobe@configfs.service... Feb 13 07:15:57.311914 systemd[1]: Starting modprobe@dm_mod.service... Feb 13 07:15:57.311920 systemd[1]: Starting modprobe@drm.service... Feb 13 07:15:57.311926 systemd[1]: Starting modprobe@efi_pstore.service... Feb 13 07:15:57.311933 systemd[1]: Starting modprobe@fuse.service... Feb 13 07:15:57.311939 kernel: fuse: init (API version 7.34) Feb 13 07:15:57.311945 systemd[1]: Starting modprobe@loop.service... Feb 13 07:15:57.311951 kernel: loop: module loaded Feb 13 07:15:57.311958 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 07:15:57.311964 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 07:15:57.311971 systemd[1]: Stopped systemd-fsck-root.service. Feb 13 07:15:57.311977 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 07:15:57.311983 kernel: kauditd_printk_skb: 60 callbacks suppressed Feb 13 07:15:57.311989 kernel: audit: type=1131 audit(1707808556.952:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.311995 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 07:15:57.312001 kernel: audit: type=1131 audit(1707808557.040:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.312008 systemd[1]: Stopped systemd-journald.service. Feb 13 07:15:57.312015 kernel: audit: type=1130 audit(1707808557.104:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.312020 kernel: audit: type=1131 audit(1707808557.104:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.312026 kernel: audit: type=1334 audit(1707808557.189:107): prog-id=15 op=LOAD Feb 13 07:15:57.312032 kernel: audit: type=1334 audit(1707808557.207:108): prog-id=16 op=LOAD Feb 13 07:15:57.312037 kernel: audit: type=1334 audit(1707808557.225:109): prog-id=17 op=LOAD Feb 13 07:15:57.312043 kernel: audit: type=1334 audit(1707808557.243:110): prog-id=13 op=UNLOAD Feb 13 07:15:57.312049 systemd[1]: Starting systemd-journald.service... Feb 13 07:15:57.312056 kernel: audit: type=1334 audit(1707808557.243:111): prog-id=14 op=UNLOAD Feb 13 07:15:57.312062 systemd[1]: Starting systemd-modules-load.service... 
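The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are instances of systemd's modprobe@.service template, which substitutes the instance name into a modprobe invocation; the "fuse: init" and "loop: module loaded" kernel lines above are those modules arriving. Roughly, and abridged from the upstream template rather than read from this image:

    # modprobe@.service (approximate, abridged sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    # "%i" expands to the instance name, e.g. "fuse" for modprobe@fuse.service;
    # the leading "-" lets a failing modprobe be ignored.
    ExecStart=-/sbin/modprobe -abq %i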
Feb 13 07:15:57.312068 kernel: audit: type=1305 audit(1707808557.308:112): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 07:15:57.312075 systemd-journald[1259]: Journal started Feb 13 07:15:57.312100 systemd-journald[1259]: Runtime Journal (/run/log/journal/309e82b2ec3b4ae9994092929be914d7) is 8.0M, max 640.1M, 632.1M free. Feb 13 07:15:53.761000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 07:15:54.030000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:15:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:15:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:15:54.033000 audit: BPF prog-id=10 op=LOAD Feb 13 07:15:54.033000 audit: BPF prog-id=10 op=UNLOAD Feb 13 07:15:54.033000 audit: BPF prog-id=11 op=LOAD Feb 13 07:15:54.033000 audit: BPF prog-id=11 op=UNLOAD Feb 13 07:15:54.101000 audit[1149]: AVC avc: denied { associate } for pid=1149 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 13 07:15:54.101000 audit[1149]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58dc a1=c00002ce58 a2=c00002bb00 a3=32 items=0 ppid=1132 pid=1149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:54.101000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 07:15:54.126000 audit[1149]: AVC avc: denied { associate } for pid=1149 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 13 07:15:54.126000 audit[1149]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59b5 a2=1ed a3=0 items=2 ppid=1132 pid=1149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:54.126000 audit: CWD cwd="/" Feb 13 07:15:54.126000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:54.126000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:54.126000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 07:15:55.656000 audit: BPF prog-id=12 op=LOAD Feb 13 07:15:55.656000 audit: BPF prog-id=3 op=UNLOAD Feb 13 07:15:55.656000 audit: BPF prog-id=13 op=LOAD Feb 13 07:15:55.656000 audit: BPF prog-id=14 op=LOAD Feb 13 07:15:55.656000 audit: BPF prog-id=4 op=UNLOAD Feb 13 07:15:55.656000 audit: BPF prog-id=5 op=UNLOAD Feb 13 07:15:55.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:55.704000 audit: BPF prog-id=12 op=UNLOAD Feb 13 07:15:55.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:55.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:55.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:56.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.189000 audit: BPF prog-id=15 op=LOAD Feb 13 07:15:57.207000 audit: BPF prog-id=16 op=LOAD Feb 13 07:15:57.225000 audit: BPF prog-id=17 op=LOAD Feb 13 07:15:57.243000 audit: BPF prog-id=13 op=UNLOAD Feb 13 07:15:57.243000 audit: BPF prog-id=14 op=UNLOAD Feb 13 07:15:57.308000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 07:15:55.656048 systemd[1]: Queued start job for default target multi-user.target. Feb 13 07:15:54.099781 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:15:55.658665 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 07:15:54.100296 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 07:15:54.100312 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 07:15:54.100335 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 13 07:15:54.100343 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 13 07:15:54.100366 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 13 07:15:54.100383 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 13 07:15:54.100526 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 13 07:15:54.100556 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 07:15:54.100566 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 07:15:54.101032 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 13 07:15:54.101061 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 13 07:15:54.101076 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 13 07:15:54.101086 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 13 07:15:54.101098 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 13 07:15:54.101109 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 13 07:15:55.303444 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:15:55.303584 
/usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:15:55.303638 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:15:55.303733 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:15:55.303762 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 13 07:15:55.303797 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:55Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 13 07:15:57.308000 audit[1259]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd20f738e0 a2=4000 a3=7ffd20f7397c items=0 ppid=1 pid=1259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:57.308000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 13 07:15:57.390575 systemd[1]: Starting systemd-network-generator.service... Feb 13 07:15:57.417378 systemd[1]: Starting systemd-remount-fs.service... Feb 13 07:15:57.444438 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 07:15:57.487441 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 07:15:57.487494 systemd[1]: Stopped verity-setup.service. Feb 13 07:15:57.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.532421 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 07:15:57.552560 systemd[1]: Started systemd-journald.service. Feb 13 07:15:57.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.560926 systemd[1]: Mounted dev-hugepages.mount. Feb 13 07:15:57.568647 systemd[1]: Mounted dev-mqueue.mount. Feb 13 07:15:57.575625 systemd[1]: Mounted media.mount. Feb 13 07:15:57.582637 systemd[1]: Mounted sys-kernel-debug.mount. Feb 13 07:15:57.591622 systemd[1]: Mounted sys-kernel-tracing.mount. 
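The torcx-generator trace above resolves the vendor profile to the docker:com.coreos.cl archive, unpacks it under /run/torcx, propagates its binaries and units, and seals the result into /run/metadata/torcx. For reference, a torcx profile manifest naming that image has roughly this shape (a sketch assembled from the names logged above, not a file read from this host):

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }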
Feb 13 07:15:57.599606 systemd[1]: Mounted tmp.mount. Feb 13 07:15:57.606680 systemd[1]: Finished flatcar-tmpfiles.service. Feb 13 07:15:57.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.614710 systemd[1]: Finished kmod-static-nodes.service. Feb 13 07:15:57.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.622736 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 07:15:57.622870 systemd[1]: Finished modprobe@configfs.service. Feb 13 07:15:57.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.631840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 07:15:57.632008 systemd[1]: Finished modprobe@dm_mod.service. Feb 13 07:15:57.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.640882 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 07:15:57.641032 systemd[1]: Finished modprobe@drm.service. Feb 13 07:15:57.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.650112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 07:15:57.650342 systemd[1]: Finished modprobe@efi_pstore.service. Feb 13 07:15:57.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.659199 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 07:15:57.659526 systemd[1]: Finished modprobe@fuse.service. 
Feb 13 07:15:57.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.668177 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 07:15:57.668536 systemd[1]: Finished modprobe@loop.service. Feb 13 07:15:57.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.677318 systemd[1]: Finished systemd-modules-load.service. Feb 13 07:15:57.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.686181 systemd[1]: Finished systemd-network-generator.service. Feb 13 07:15:57.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.695180 systemd[1]: Finished systemd-remount-fs.service. Feb 13 07:15:57.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.704183 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 07:15:57.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.713756 systemd[1]: Reached target network-pre.target. Feb 13 07:15:57.725186 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 13 07:15:57.734069 systemd[1]: Mounting sys-kernel-config.mount... Feb 13 07:15:57.741569 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 07:15:57.742443 systemd[1]: Starting systemd-hwdb-update.service... Feb 13 07:15:57.750014 systemd[1]: Starting systemd-journal-flush.service... Feb 13 07:15:57.754033 systemd-journald[1259]: Time spent on flushing to /var/log/journal/309e82b2ec3b4ae9994092929be914d7 is 16.184ms for 1592 entries. Feb 13 07:15:57.754033 systemd-journald[1259]: System Journal (/var/log/journal/309e82b2ec3b4ae9994092929be914d7) is 8.0M, max 195.6M, 187.6M free. Feb 13 07:15:57.794130 systemd-journald[1259]: Received client request to flush runtime journal. Feb 13 07:15:57.767510 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 07:15:57.768911 systemd[1]: Starting systemd-random-seed.service... 
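The journald size lines above (Runtime Journal under /run/log/journal, System Journal under /var/log/journal, each 8.0M used against a cap derived by default from the size of the backing filesystem) describe what systemd-journal-flush is about to migrate to disk. Those caps can be pinned explicitly; a minimal sketch with placeholder values:

    # /etc/systemd/journald.conf.d/size.conf  (hypothetical drop-in; values illustrative)
    [Journal]
    RuntimeMaxUse=64M     # cap for the runtime journal in /run/log/journal
    SystemMaxUse=512M     # cap for the persistent journal in /var/log/journal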
Feb 13 07:15:57.781448 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 13 07:15:57.782168 systemd[1]: Starting systemd-sysctl.service... Feb 13 07:15:57.789152 systemd[1]: Starting systemd-sysusers.service... Feb 13 07:15:57.796061 systemd[1]: Starting systemd-udev-settle.service... Feb 13 07:15:57.803672 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 13 07:15:57.811550 systemd[1]: Mounted sys-kernel-config.mount. Feb 13 07:15:57.819619 systemd[1]: Finished systemd-journal-flush.service. Feb 13 07:15:57.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.827613 systemd[1]: Finished systemd-random-seed.service. Feb 13 07:15:57.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.835604 systemd[1]: Finished systemd-sysctl.service. Feb 13 07:15:57.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.843641 systemd[1]: Finished systemd-sysusers.service. Feb 13 07:15:57.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:57.852607 systemd[1]: Reached target first-boot-complete.target. Feb 13 07:15:57.861113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 07:15:57.870374 udevadm[1275]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 07:15:57.879789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 07:15:57.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:58.049183 systemd[1]: Finished systemd-hwdb-update.service. Feb 13 07:15:58.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:58.058000 audit: BPF prog-id=18 op=LOAD Feb 13 07:15:58.058000 audit: BPF prog-id=19 op=LOAD Feb 13 07:15:58.058000 audit: BPF prog-id=7 op=UNLOAD Feb 13 07:15:58.058000 audit: BPF prog-id=8 op=UNLOAD Feb 13 07:15:58.059679 systemd[1]: Starting systemd-udevd.service... Feb 13 07:15:58.071017 systemd-udevd[1278]: Using default interface naming scheme 'v252'. Feb 13 07:15:58.089349 systemd[1]: Started systemd-udevd.service. Feb 13 07:15:58.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:58.100357 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. 
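Below this point systemd-networkd enumerates bond0 from /etc/systemd/network/05-bond0.network and matches enp1s0f1np1 through a per-MAC file (10-1c:34:da:5c:29:79.network). The contents of those files are not in the log; a minimal sketch of what such a bond setup can look like, with the file names taken from the log and the options purely illustrative:

    # /etc/systemd/network/05-bond0.netdev  (illustrative)
    [NetDev]
    Name=bond0
    Kind=bond

    # /etc/systemd/network/05-bond0.network  (name from the log; options illustrative)
    [Match]
    Name=bond0
    [Network]
    DHCP=yes

    # /etc/systemd/network/10-1c:34:da:5c:29:79.network  (name from the log; options illustrative)
    [Match]
    PermanentMACAddress=1c:34:da:5c:29:79
    [Network]
    Bond=bond0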
Feb 13 07:15:58.099000 audit: BPF prog-id=20 op=LOAD Feb 13 07:15:58.101578 systemd[1]: Starting systemd-networkd.service... Feb 13 07:15:58.126000 audit: BPF prog-id=21 op=LOAD Feb 13 07:15:58.126000 audit: BPF prog-id=22 op=LOAD Feb 13 07:15:58.126000 audit: BPF prog-id=23 op=LOAD Feb 13 07:15:58.128381 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 07:15:58.129838 systemd[1]: Starting systemd-userdbd.service... Feb 13 07:15:58.143380 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 07:15:58.143431 kernel: IPMI message handler: version 39.2 Feb 13 07:15:58.143452 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1352) Feb 13 07:15:58.147417 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 07:15:58.189382 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 07:15:58.130000 audit[1342]: AVC avc: denied { confidentiality } for pid=1342 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:15:58.248383 kernel: ipmi device interface Feb 13 07:15:58.130000 audit[1342]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e64ce460d0 a1=4d8bc a2=7fa672aafbc5 a3=5 items=42 ppid=1278 pid=1342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:58.130000 audit: CWD cwd="/" Feb 13 07:15:58.130000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=1 name=(null) inode=11153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=2 name=(null) inode=11153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=3 name=(null) inode=11154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=4 name=(null) inode=11153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=5 name=(null) inode=11155 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=6 name=(null) inode=11153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=7 name=(null) inode=11156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=8 name=(null) inode=11156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 
13 07:15:58.130000 audit: PATH item=9 name=(null) inode=11157 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=10 name=(null) inode=11156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=11 name=(null) inode=11158 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=12 name=(null) inode=11156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=13 name=(null) inode=11159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=14 name=(null) inode=11156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=15 name=(null) inode=11160 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=16 name=(null) inode=11156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=17 name=(null) inode=11161 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=18 name=(null) inode=11153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=19 name=(null) inode=11162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=20 name=(null) inode=11162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=21 name=(null) inode=11163 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=22 name=(null) inode=11162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=23 name=(null) inode=11164 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=24 name=(null) inode=11162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=25 name=(null) inode=11165 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=26 name=(null) inode=11162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=27 name=(null) inode=11166 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=28 name=(null) inode=11162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=29 name=(null) inode=11167 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=30 name=(null) inode=11153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=31 name=(null) inode=11168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=32 name=(null) inode=11168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=33 name=(null) inode=11169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=34 name=(null) inode=11168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=35 name=(null) inode=11170 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=36 name=(null) inode=11168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=37 name=(null) inode=11171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=38 name=(null) inode=11168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=39 name=(null) inode=11172 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=40 name=(null) inode=11168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PATH item=41 name=(null) inode=11173 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:58.130000 audit: PROCTITLE proctitle="(udev-worker)" Feb 13 07:15:58.277384 kernel: ACPI: button: Power Button [PWRF] Feb 13 07:15:58.290380 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 07:15:58.290687 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 07:15:58.290777 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 07:15:58.293806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 07:15:58.322538 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 07:15:58.378571 systemd[1]: Started systemd-userdbd.service. Feb 13 07:15:58.384456 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 07:15:58.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:58.425732 kernel: ipmi_si: IPMI System Interface driver Feb 13 07:15:58.425759 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 07:15:58.425826 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 07:15:58.446516 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 07:15:58.465891 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 07:15:58.506750 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 07:15:58.530378 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 07:15:58.571252 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 07:15:58.571429 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 07:15:58.571444 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 07:15:58.635885 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 13 07:15:58.636081 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 13 07:15:58.636143 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 07:15:58.708711 systemd-networkd[1318]: bond0: netdev ready Feb 13 07:15:58.711239 systemd-networkd[1318]: lo: Link UP Feb 13 07:15:58.711242 systemd-networkd[1318]: lo: Gained carrier Feb 13 07:15:58.711795 systemd-networkd[1318]: Enumeration completed Feb 13 07:15:58.711859 systemd[1]: Started systemd-networkd.service. Feb 13 07:15:58.712108 systemd-networkd[1318]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 07:15:58.723211 systemd-networkd[1318]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:79.network. Feb 13 07:15:58.726289 kernel: intel_rapl_common: Found RAPL domain package Feb 13 07:15:58.726312 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 13 07:15:58.726407 kernel: intel_rapl_common: Found RAPL domain core Feb 13 07:15:58.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:58.759905 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 07:15:58.809417 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 07:15:58.828415 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 07:15:58.831606 systemd[1]: Finished systemd-udev-settle.service. Feb 13 07:15:58.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:58.840069 systemd[1]: Starting lvm2-activation-early.service... Feb 13 07:15:58.855704 lvm[1381]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 07:15:58.879774 systemd[1]: Finished lvm2-activation-early.service. Feb 13 07:15:58.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:58.888455 systemd[1]: Reached target cryptsetup.target. Feb 13 07:15:58.897992 systemd[1]: Starting lvm2-activation.service... Feb 13 07:15:58.900069 lvm[1382]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 07:15:58.931763 systemd[1]: Finished lvm2-activation.service. Feb 13 07:15:58.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:58.940451 systemd[1]: Reached target local-fs-pre.target. Feb 13 07:15:58.949417 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 07:15:58.949431 systemd[1]: Reached target local-fs.target. Feb 13 07:15:58.958422 systemd[1]: Reached target machines.target. Feb 13 07:15:58.968001 systemd[1]: Starting ldconfig.service... Feb 13 07:15:58.975883 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 07:15:58.975903 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:15:58.976402 systemd[1]: Starting systemd-boot-update.service... Feb 13 07:15:58.983884 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 07:15:58.994915 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 07:15:58.994991 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 13 07:15:58.995017 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 07:15:58.995503 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 07:15:58.995713 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1384 (bootctl) Feb 13 07:15:58.996276 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 07:15:59.006518 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 07:15:59.007194 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 07:15:59.008043 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Feb 13 07:15:59.011634 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 07:15:59.011930 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 07:15:59.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:59.016724 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 07:15:59.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:59.072743 systemd-fsck[1392]: fsck.fat 4.2 (2021-01-31) Feb 13 07:15:59.072743 systemd-fsck[1392]: /dev/sda1: 789 files, 115339/258078 clusters Feb 13 07:15:59.073513 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 13 07:15:59.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:59.086195 systemd[1]: Mounting boot.mount... Feb 13 07:15:59.098843 systemd[1]: Mounted boot.mount. Feb 13 07:15:59.117704 systemd[1]: Finished systemd-boot-update.service. Feb 13 07:15:59.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:59.150541 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 07:15:59.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:59.159161 systemd[1]: Starting audit-rules.service... Feb 13 07:15:59.165990 systemd[1]: Starting clean-ca-certificates.service... Feb 13 07:15:59.174986 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 07:15:59.179000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 07:15:59.179000 audit[1412]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe250d06f0 a2=420 a3=0 items=0 ppid=1395 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:59.179000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 07:15:59.181393 augenrules[1412]: No rules Feb 13 07:15:59.185332 systemd[1]: Starting systemd-resolved.service... Feb 13 07:15:59.194254 systemd[1]: Starting systemd-timesyncd.service... Feb 13 07:15:59.202879 systemd[1]: Starting systemd-update-utmp.service... Feb 13 07:15:59.209656 systemd[1]: Finished audit-rules.service. Feb 13 07:15:59.216582 systemd[1]: Finished clean-ca-certificates.service. Feb 13 07:15:59.224565 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 13 07:15:59.236718 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 07:15:59.237416 systemd[1]: Finished systemd-update-utmp.service. Feb 13 07:15:59.259801 ldconfig[1383]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 07:15:59.263012 systemd[1]: Finished ldconfig.service. Feb 13 07:15:59.267337 systemd-resolved[1417]: Positive Trust Anchors: Feb 13 07:15:59.267343 systemd-resolved[1417]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 07:15:59.267361 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 07:15:59.271036 systemd-resolved[1417]: Using system hostname 'ci-3510.3.2-a-596fb49211'. Feb 13 07:15:59.275534 systemd[1]: Started systemd-timesyncd.service. Feb 13 07:15:59.284432 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 07:15:59.300532 systemd[1]: Reached target time-set.target. Feb 13 07:15:59.310379 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 13 07:15:59.312030 systemd-networkd[1318]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Feb 13 07:15:59.319070 systemd[1]: Starting systemd-update-done.service... Feb 13 07:15:59.325561 systemd[1]: Finished systemd-update-done.service. Feb 13 07:15:59.362406 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:15:59.488427 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:15:59.488495 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 07:15:59.529417 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 13 07:15:59.529447 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 07:15:59.529638 systemd[1]: Started systemd-resolved.service. Feb 13 07:15:59.563494 systemd[1]: Reached target network.target. Feb 13 07:15:59.570158 systemd-networkd[1318]: bond0: Link UP Feb 13 07:15:59.570352 systemd-networkd[1318]: enp1s0f1np1: Link UP Feb 13 07:15:59.570399 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 07:15:59.570414 kernel: bond0: active interface up! Feb 13 07:15:59.570509 systemd-networkd[1318]: enp1s0f1np1: Gained carrier Feb 13 07:15:59.571482 systemd-networkd[1318]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Feb 13 07:15:59.594454 systemd[1]: Reached target nss-lookup.target. Feb 13 07:15:59.611464 systemd[1]: Reached target sysinit.target. Feb 13 07:15:59.617395 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:15:59.625505 systemd[1]: Started motdgen.path. Feb 13 07:15:59.632480 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 13 07:15:59.642537 systemd[1]: Started logrotate.timer. Feb 13 07:15:59.649502 systemd[1]: Started mdadm.timer. 
Feb 13 07:15:59.656453 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 13 07:15:59.664442 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 07:15:59.664464 systemd[1]: Reached target paths.target. Feb 13 07:15:59.668174 systemd-networkd[1318]: enp1s0f0np0: Link UP Feb 13 07:15:59.668327 systemd-networkd[1318]: bond0: Gained carrier Feb 13 07:15:59.668423 systemd-networkd[1318]: enp1s0f0np0: Gained carrier Feb 13 07:15:59.668502 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.671464 systemd[1]: Reached target timers.target. Feb 13 07:15:59.675644 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.675678 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.675965 systemd-networkd[1318]: enp1s0f1np1: Link DOWN Feb 13 07:15:59.675974 systemd-networkd[1318]: enp1s0f1np1: Lost carrier Feb 13 07:15:59.687620 systemd[1]: Listening on dbus.socket. Feb 13 07:15:59.697377 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:15:59.697421 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Feb 13 07:15:59.729987 systemd[1]: Starting docker.socket... Feb 13 07:15:59.735374 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 07:15:59.736558 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.739690 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.742905 systemd[1]: Listening on sshd.socket. Feb 13 07:15:59.749550 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:15:59.749970 systemd[1]: Listening on docker.socket. Feb 13 07:15:59.756554 systemd[1]: Reached target sockets.target. Feb 13 07:15:59.764468 systemd[1]: Reached target basic.target. Feb 13 07:15:59.771483 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 07:15:59.771497 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 07:15:59.771983 systemd[1]: Starting containerd.service... Feb 13 07:15:59.778900 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 13 07:15:59.787936 systemd[1]: Starting coreos-metadata.service... Feb 13 07:15:59.794965 systemd[1]: Starting dbus.service... Feb 13 07:15:59.801015 systemd[1]: Starting enable-oem-cloudinit.service... Feb 13 07:15:59.806301 jq[1433]: false Feb 13 07:15:59.807626 coreos-metadata[1426]: Feb 13 07:15:59.807 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:15:59.808934 systemd[1]: Starting extend-filesystems.service... 
Feb 13 07:15:59.813357 dbus-daemon[1432]: [system] SELinux support is enabled Feb 13 07:15:59.816355 extend-filesystems[1435]: Found sda Feb 13 07:15:59.823591 extend-filesystems[1435]: Found sda1 Feb 13 07:15:59.823591 extend-filesystems[1435]: Found sda2 Feb 13 07:15:59.823591 extend-filesystems[1435]: Found sda3 Feb 13 07:15:59.823591 extend-filesystems[1435]: Found usr Feb 13 07:15:59.823591 extend-filesystems[1435]: Found sda4 Feb 13 07:15:59.823591 extend-filesystems[1435]: Found sda6 Feb 13 07:15:59.823591 extend-filesystems[1435]: Found sda7 Feb 13 07:15:59.823591 extend-filesystems[1435]: Found sda9 Feb 13 07:15:59.823591 extend-filesystems[1435]: Checking size of /dev/sda9 Feb 13 07:15:59.823591 extend-filesystems[1435]: Resized partition /dev/sda9 Feb 13 07:15:59.987563 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 07:15:59.987582 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 07:15:59.987676 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Feb 13 07:15:59.987687 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Feb 13 07:15:59.987698 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 07:15:59.816460 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 13 07:15:59.987755 coreos-metadata[1429]: Feb 13 07:15:59.817 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:15:59.987755 coreos-metadata[1429]: Feb 13 07:15:59.836 INFO Fetch successful Feb 13 07:15:59.987877 coreos-metadata[1426]: Feb 13 07:15:59.827 INFO Fetch successful Feb 13 07:15:59.987901 extend-filesystems[1446]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 07:15:59.817050 systemd[1]: Starting motdgen.service... Feb 13 07:15:59.838156 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 07:15:59.860015 systemd[1]: Starting prepare-critools.service... Feb 13 07:15:59.870262 unknown[1426]: wrote ssh authorized keys file for user: core Feb 13 07:15:59.879969 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 07:15:59.899878 systemd[1]: Starting sshd-keygen.service... Feb 13 07:15:59.928795 systemd[1]: Starting systemd-logind.service... Feb 13 07:15:59.929406 systemd-networkd[1318]: enp1s0f1np1: Link UP Feb 13 07:15:59.929581 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.929606 systemd-networkd[1318]: enp1s0f1np1: Gained carrier Feb 13 07:15:59.929622 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.939510 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.939544 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.939612 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:15:59.952464 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:15:59.953002 systemd[1]: Starting tcsd.service... 
Feb 13 07:15:59.988764 systemd-logind[1462]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 07:15:59.988774 systemd-logind[1462]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 07:15:59.988783 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 07:15:59.988898 systemd-logind[1462]: New seat seat0. Feb 13 07:15:59.999788 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 07:16:00.000150 systemd[1]: Starting update-engine.service... Feb 13 07:16:00.016111 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 13 07:16:00.017668 jq[1465]: true Feb 13 07:16:00.024786 systemd[1]: Started dbus.service. Feb 13 07:16:00.033577 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 07:16:00.033686 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 07:16:00.033870 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 07:16:00.033967 systemd[1]: Finished motdgen.service. Feb 13 07:16:00.041315 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 07:16:00.041420 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 13 07:16:00.043285 update_engine[1464]: I0213 07:16:00.042107 1464 main.cc:92] Flatcar Update Engine starting Feb 13 07:16:00.047578 update_engine[1464]: I0213 07:16:00.047569 1464 update_check_scheduler.cc:74] Next update check in 3m42s Feb 13 07:16:00.052025 jq[1472]: true Feb 13 07:16:00.052512 dbus-daemon[1432]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 07:16:00.053249 tar[1468]: ./ Feb 13 07:16:00.053249 tar[1468]: ./macvlan Feb 13 07:16:00.055760 tar[1469]: crictl Feb 13 07:16:00.058454 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 07:16:00.058544 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 07:16:00.059383 systemd[1]: Started update-engine.service. Feb 13 07:16:00.067551 systemd[1]: Started systemd-logind.service. Feb 13 07:16:00.068284 env[1473]: time="2024-02-13T07:16:00.068260647Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 07:16:00.068573 update-ssh-keys[1467]: Updated "/home/core/.ssh/authorized_keys" Feb 13 07:16:00.075576 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 07:16:00.078318 env[1473]: time="2024-02-13T07:16:00.078290539Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 07:16:00.079027 env[1473]: time="2024-02-13T07:16:00.079013858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:16:00.079703 env[1473]: time="2024-02-13T07:16:00.079659297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:16:00.079703 env[1473]: time="2024-02-13T07:16:00.079674467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:16:00.081393 env[1473]: time="2024-02-13T07:16:00.081378805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:16:00.081421 env[1473]: time="2024-02-13T07:16:00.081396470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 07:16:00.081421 env[1473]: time="2024-02-13T07:16:00.081410577Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 07:16:00.081457 env[1473]: time="2024-02-13T07:16:00.081421155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 07:16:00.081479 env[1473]: time="2024-02-13T07:16:00.081462709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:16:00.081602 env[1473]: time="2024-02-13T07:16:00.081592675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:16:00.081679 env[1473]: time="2024-02-13T07:16:00.081668251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:16:00.081705 env[1473]: time="2024-02-13T07:16:00.081678861Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 07:16:00.081728 env[1473]: time="2024-02-13T07:16:00.081707641Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 07:16:00.081728 env[1473]: time="2024-02-13T07:16:00.081715498Z" level=info msg="metadata content store policy set" policy=shared Feb 13 07:16:00.088714 env[1473]: time="2024-02-13T07:16:00.088702460Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088721150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088729500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088744328Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088753035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088760566Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088767362Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088775167Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088781878Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088788786Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088795290Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 07:16:00.090259 env[1473]: time="2024-02-13T07:16:00.088801699Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 07:16:00.090643 env[1473]: time="2024-02-13T07:16:00.090629442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 07:16:00.090672 tar[1468]: ./static Feb 13 07:16:00.090724 env[1473]: time="2024-02-13T07:16:00.090715177Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 07:16:00.091136 systemd[1]: Started locksmithd.service. Feb 13 07:16:00.091242 env[1473]: time="2024-02-13T07:16:00.091132901Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 07:16:00.091242 env[1473]: time="2024-02-13T07:16:00.091159442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.091242 env[1473]: time="2024-02-13T07:16:00.091172219Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 07:16:00.091294 env[1473]: time="2024-02-13T07:16:00.091271212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.091294 env[1473]: time="2024-02-13T07:16:00.091283123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.091294 env[1473]: time="2024-02-13T07:16:00.091290709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.091344 env[1473]: time="2024-02-13T07:16:00.091296939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.091344 env[1473]: time="2024-02-13T07:16:00.091303719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.091344 env[1473]: time="2024-02-13T07:16:00.091310908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.091344 env[1473]: time="2024-02-13T07:16:00.091317486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.091344 env[1473]: time="2024-02-13T07:16:00.091324381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.091344 env[1473]: time="2024-02-13T07:16:00.091332721Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 07:16:00.093363 env[1473]: time="2024-02-13T07:16:00.093353420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.093392 env[1473]: time="2024-02-13T07:16:00.093367002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.093392 env[1473]: time="2024-02-13T07:16:00.093379866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 07:16:00.093392 env[1473]: time="2024-02-13T07:16:00.093388554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 07:16:00.093449 env[1473]: time="2024-02-13T07:16:00.093397158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 07:16:00.093449 env[1473]: time="2024-02-13T07:16:00.093404389Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 07:16:00.093449 env[1473]: time="2024-02-13T07:16:00.093416783Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 07:16:00.093449 env[1473]: time="2024-02-13T07:16:00.093437621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 07:16:00.093579 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Feb 13 07:16:00.093694 env[1473]: time="2024-02-13T07:16:00.093550341Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 07:16:00.093694 env[1473]: time="2024-02-13T07:16:00.093590222Z" level=info msg="Connect containerd service" Feb 13 07:16:00.093694 env[1473]: time="2024-02-13T07:16:00.093608059Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.093897291Z" level=error 
msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.093985689Z" level=info msg="Start subscribing containerd event" Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.094023648Z" level=info msg="Start recovering state" Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.094024651Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.094051424Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.094074043Z" level=info msg="Start event monitor" Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.094093489Z" level=info msg="Start snapshots syncer" Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.094102384Z" level=info msg="Start cni network conf syncer for default" Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.094110126Z" level=info msg="Start streaming server" Feb 13 07:16:00.095219 env[1473]: time="2024-02-13T07:16:00.094074261Z" level=info msg="containerd successfully booted in 0.027898s" Feb 13 07:16:00.097538 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 07:16:00.097654 systemd[1]: Reached target system-config.target. Feb 13 07:16:00.105476 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 07:16:00.105577 systemd[1]: Reached target user-config.target. Feb 13 07:16:00.108320 tar[1468]: ./vlan Feb 13 07:16:00.114922 systemd[1]: Started containerd.service. Feb 13 07:16:00.121639 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 13 07:16:00.129684 tar[1468]: ./portmap Feb 13 07:16:00.146689 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 07:16:00.149907 tar[1468]: ./host-local Feb 13 07:16:00.167743 tar[1468]: ./vrf Feb 13 07:16:00.186965 tar[1468]: ./bridge Feb 13 07:16:00.209960 tar[1468]: ./tuning Feb 13 07:16:00.228341 tar[1468]: ./firewall Feb 13 07:16:00.252150 tar[1468]: ./host-device Feb 13 07:16:00.272907 tar[1468]: ./sbr Feb 13 07:16:00.291868 tar[1468]: ./loopback Feb 13 07:16:00.309857 tar[1468]: ./dhcp Feb 13 07:16:00.342421 systemd[1]: Finished prepare-critools.service. Feb 13 07:16:00.362230 tar[1468]: ./ptp Feb 13 07:16:00.384430 tar[1468]: ./ipvlan Feb 13 07:16:00.403380 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 07:16:00.431187 extend-filesystems[1446]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 07:16:00.431187 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 07:16:00.431187 extend-filesystems[1446]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 07:16:00.468477 extend-filesystems[1435]: Resized filesystem in /dev/sda9 Feb 13 07:16:00.468477 extend-filesystems[1435]: Found sdb Feb 13 07:16:00.431666 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 07:16:00.491503 tar[1468]: ./bandwidth Feb 13 07:16:00.431750 systemd[1]: Finished extend-filesystems.service. 
Feb 13 07:16:00.460005 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 07:16:00.569660 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 07:16:00.581047 systemd[1]: Finished sshd-keygen.service. Feb 13 07:16:00.589196 systemd[1]: Starting issuegen.service... Feb 13 07:16:00.595663 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 07:16:00.595735 systemd[1]: Finished issuegen.service. Feb 13 07:16:00.604150 systemd[1]: Starting systemd-user-sessions.service... Feb 13 07:16:00.613641 systemd[1]: Finished systemd-user-sessions.service. Feb 13 07:16:00.623263 systemd[1]: Started getty@tty1.service. Feb 13 07:16:00.631227 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 07:16:00.639573 systemd[1]: Reached target getty.target. Feb 13 07:16:01.147689 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:16:01.339490 systemd-networkd[1318]: bond0: Gained IPv6LL Feb 13 07:16:01.339740 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Feb 13 07:16:02.234410 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 07:16:05.650936 login[1535]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 07:16:05.657884 systemd[1]: Created slice user-500.slice. Feb 13 07:16:05.658456 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 07:16:05.659288 login[1534]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 07:16:05.659311 systemd-logind[1462]: New session 1 of user core. Feb 13 07:16:05.661100 systemd-logind[1462]: New session 2 of user core. Feb 13 07:16:05.663291 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 07:16:05.663984 systemd[1]: Starting user@500.service... Feb 13 07:16:05.665636 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:16:05.750820 systemd[1539]: Queued start job for default target default.target. Feb 13 07:16:05.751053 systemd[1539]: Reached target paths.target. Feb 13 07:16:05.751065 systemd[1539]: Reached target sockets.target. Feb 13 07:16:05.751074 systemd[1539]: Reached target timers.target. Feb 13 07:16:05.751081 systemd[1539]: Reached target basic.target. Feb 13 07:16:05.751100 systemd[1539]: Reached target default.target. Feb 13 07:16:05.751114 systemd[1539]: Startup finished in 82ms. Feb 13 07:16:05.751161 systemd[1]: Started user@500.service. Feb 13 07:16:05.751717 systemd[1]: Started session-1.scope. Feb 13 07:16:05.752064 systemd[1]: Started session-2.scope. Feb 13 07:16:07.682381 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 13 07:16:07.682580 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 13 07:16:07.952767 systemd[1]: Created slice system-sshd.slice. Feb 13 07:16:07.953271 systemd[1]: Started sshd@0-139.178.90.101:22-139.178.68.195:34462.service. Feb 13 07:16:07.996300 sshd[1560]: Accepted publickey for core from 139.178.68.195 port 34462 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:16:07.997438 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:16:08.001325 systemd-logind[1462]: New session 3 of user core. Feb 13 07:16:08.002236 systemd[1]: Started session-3.scope. Feb 13 07:16:08.057542 systemd[1]: Started sshd@1-139.178.90.101:22-139.178.68.195:46228.service. 
Feb 13 07:16:08.088962 sshd[1565]: Accepted publickey for core from 139.178.68.195 port 46228 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:16:08.089683 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:16:08.091952 systemd-logind[1462]: New session 4 of user core. Feb 13 07:16:08.092425 systemd[1]: Started session-4.scope. Feb 13 07:16:08.142571 sshd[1565]: pam_unix(sshd:session): session closed for user core Feb 13 07:16:08.144845 systemd[1]: sshd@1-139.178.90.101:22-139.178.68.195:46228.service: Deactivated successfully. Feb 13 07:16:08.145417 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 07:16:08.145978 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Feb 13 07:16:08.146920 systemd[1]: Started sshd@2-139.178.90.101:22-139.178.68.195:46242.service. Feb 13 07:16:08.147654 systemd-logind[1462]: Removed session 4. Feb 13 07:16:08.182216 sshd[1571]: Accepted publickey for core from 139.178.68.195 port 46242 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:16:08.183096 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:16:08.186122 systemd-logind[1462]: New session 5 of user core. Feb 13 07:16:08.186820 systemd[1]: Started session-5.scope. Feb 13 07:16:08.241642 sshd[1571]: pam_unix(sshd:session): session closed for user core Feb 13 07:16:08.242885 systemd[1]: sshd@2-139.178.90.101:22-139.178.68.195:46242.service: Deactivated successfully. Feb 13 07:16:08.243249 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 07:16:08.243631 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. Feb 13 07:16:08.244116 systemd-logind[1462]: Removed session 5. Feb 13 07:16:12.981953 systemd[1]: Finished coreos-metadata.service. Feb 13 07:16:12.982730 systemd[1]: Started packet-phone-home.service. Feb 13 07:16:12.982844 systemd[1]: Reached target multi-user.target. Feb 13 07:16:12.983454 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 07:16:12.987574 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 13 07:16:12.987654 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 07:16:12.987754 curl[1579]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 07:16:12.987754 curl[1579]: Dload Upload Total Spent Left Speed Feb 13 07:16:12.987814 systemd[1]: Startup finished in 1.846s (kernel) + 25.591s (initrd) + 19.566s (userspace) = 47.004s. Feb 13 07:16:13.643078 curl[1579]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 07:16:13.645505 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 13 07:16:18.251181 systemd[1]: Started sshd@3-139.178.90.101:22-139.178.68.195:40858.service. Feb 13 07:16:18.283962 sshd[1582]: Accepted publickey for core from 139.178.68.195 port 40858 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:16:18.284941 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:16:18.287820 systemd-logind[1462]: New session 6 of user core. Feb 13 07:16:18.288586 systemd[1]: Started session-6.scope. Feb 13 07:16:18.342989 sshd[1582]: pam_unix(sshd:session): session closed for user core Feb 13 07:16:18.344555 systemd[1]: sshd@3-139.178.90.101:22-139.178.68.195:40858.service: Deactivated successfully. Feb 13 07:16:18.344846 systemd[1]: session-6.scope: Deactivated successfully. 
Feb 13 07:16:18.345144 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. Feb 13 07:16:18.345699 systemd[1]: Started sshd@4-139.178.90.101:22-139.178.68.195:40864.service. Feb 13 07:16:18.346091 systemd-logind[1462]: Removed session 6. Feb 13 07:16:18.378146 sshd[1588]: Accepted publickey for core from 139.178.68.195 port 40864 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:16:18.379096 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:16:18.382356 systemd-logind[1462]: New session 7 of user core. Feb 13 07:16:18.383187 systemd[1]: Started session-7.scope. Feb 13 07:16:18.437409 sshd[1588]: pam_unix(sshd:session): session closed for user core Feb 13 07:16:18.438988 systemd[1]: sshd@4-139.178.90.101:22-139.178.68.195:40864.service: Deactivated successfully. Feb 13 07:16:18.439281 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 07:16:18.439625 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Feb 13 07:16:18.440161 systemd[1]: Started sshd@5-139.178.90.101:22-139.178.68.195:40868.service. Feb 13 07:16:18.440582 systemd-logind[1462]: Removed session 7. Feb 13 07:16:18.472870 sshd[1594]: Accepted publickey for core from 139.178.68.195 port 40868 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:16:18.473900 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:16:18.477458 systemd-logind[1462]: New session 8 of user core. Feb 13 07:16:18.478345 systemd[1]: Started session-8.scope. Feb 13 07:16:18.545205 sshd[1594]: pam_unix(sshd:session): session closed for user core Feb 13 07:16:18.551634 systemd[1]: sshd@5-139.178.90.101:22-139.178.68.195:40868.service: Deactivated successfully. Feb 13 07:16:18.553213 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 07:16:18.555012 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. Feb 13 07:16:18.557548 systemd[1]: Started sshd@6-139.178.90.101:22-139.178.68.195:40878.service. Feb 13 07:16:18.560185 systemd-logind[1462]: Removed session 8. Feb 13 07:16:18.593233 sshd[1600]: Accepted publickey for core from 139.178.68.195 port 40878 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:16:18.594040 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:16:18.597033 systemd-logind[1462]: New session 9 of user core. Feb 13 07:16:18.597810 systemd[1]: Started session-9.scope. Feb 13 07:16:18.682103 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 07:16:18.682724 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 07:16:22.742552 systemd[1]: Reloading. Feb 13 07:16:22.782011 /usr/lib/systemd/system-generators/torcx-generator[1634]: time="2024-02-13T07:16:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:16:22.782027 /usr/lib/systemd/system-generators/torcx-generator[1634]: time="2024-02-13T07:16:22Z" level=info msg="torcx already run" Feb 13 07:16:22.855261 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 13 07:16:22.855274 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:16:22.872417 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:16:22.925197 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 07:16:22.929006 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 07:16:22.929243 systemd[1]: Reached target network-online.target. Feb 13 07:16:22.929897 systemd[1]: Started kubelet.service. Feb 13 07:16:22.952841 kubelet[1694]: E0213 07:16:22.952813 1694 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 13 07:16:22.954262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 07:16:22.954353 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 07:16:23.382358 systemd[1]: Stopped kubelet.service. Feb 13 07:16:23.396564 systemd[1]: Reloading. Feb 13 07:16:23.438608 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2024-02-13T07:16:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:16:23.438656 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2024-02-13T07:16:23Z" level=info msg="torcx already run" Feb 13 07:16:23.570962 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:16:23.570973 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:16:23.586963 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:16:23.641978 systemd[1]: Started kubelet.service. Feb 13 07:16:23.664892 kubelet[1848]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 13 07:16:23.664892 kubelet[1848]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 07:16:23.665109 kubelet[1848]: I0213 07:16:23.664900 1848 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 07:16:23.665667 kubelet[1848]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 13 07:16:23.665667 kubelet[1848]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 07:16:23.830312 kubelet[1848]: I0213 07:16:23.830302 1848 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 13 07:16:23.830312 kubelet[1848]: I0213 07:16:23.830312 1848 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 07:16:23.830449 kubelet[1848]: I0213 07:16:23.830420 1848 server.go:836] "Client rotation is on, will bootstrap in background" Feb 13 07:16:23.831346 kubelet[1848]: I0213 07:16:23.831308 1848 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 07:16:23.851680 kubelet[1848]: I0213 07:16:23.851667 1848 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 07:16:23.851798 kubelet[1848]: I0213 07:16:23.851791 1848 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 07:16:23.851848 kubelet[1848]: I0213 07:16:23.851840 1848 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 07:16:23.851923 kubelet[1848]: I0213 07:16:23.851861 1848 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 07:16:23.851923 kubelet[1848]: I0213 07:16:23.851872 1848 container_manager_linux.go:308] "Creating device plugin manager" Feb 13 07:16:23.851988 kubelet[1848]: I0213 07:16:23.851937 1848 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:16:23.853338 kubelet[1848]: I0213 07:16:23.853331 1848 kubelet.go:398] "Attempting to sync node with API server" Feb 13 07:16:23.853402 kubelet[1848]: I0213 07:16:23.853342 1848 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 07:16:23.853402 kubelet[1848]: I0213 07:16:23.853353 1848 kubelet.go:297] "Adding apiserver pod source" Feb 13 07:16:23.853402 kubelet[1848]: I0213 07:16:23.853360 1848 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 07:16:23.853490 kubelet[1848]: E0213 07:16:23.853462 1848 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:23.853490 kubelet[1848]: E0213 07:16:23.853463 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:23.853715 kubelet[1848]: I0213 07:16:23.853704 1848 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 07:16:23.853873 kubelet[1848]: W0213 07:16:23.853866 1848 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 07:16:23.854094 kubelet[1848]: I0213 07:16:23.854089 1848 server.go:1186] "Started kubelet" Feb 13 07:16:23.854222 kubelet[1848]: I0213 07:16:23.854213 1848 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 07:16:23.854431 kubelet[1848]: E0213 07:16:23.854382 1848 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 07:16:23.854431 kubelet[1848]: E0213 07:16:23.854428 1848 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 07:16:23.854786 kubelet[1848]: I0213 07:16:23.854778 1848 server.go:451] "Adding debug handlers to kubelet server" Feb 13 07:16:23.863975 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 13 07:16:23.864048 kubelet[1848]: I0213 07:16:23.864004 1848 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 07:16:23.864135 kubelet[1848]: I0213 07:16:23.864121 1848 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 13 07:16:23.864186 kubelet[1848]: I0213 07:16:23.864166 1848 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 07:16:23.867580 kubelet[1848]: E0213 07:16:23.867567 1848 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.31\" not found" node="10.67.80.31" Feb 13 07:16:23.873804 kubelet[1848]: I0213 07:16:23.873767 1848 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 07:16:23.873804 kubelet[1848]: I0213 07:16:23.873774 1848 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 07:16:23.873804 kubelet[1848]: I0213 07:16:23.873782 1848 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:16:23.874786 kubelet[1848]: I0213 07:16:23.874749 1848 policy_none.go:49] "None policy: Start" Feb 13 07:16:23.875017 kubelet[1848]: I0213 07:16:23.874982 1848 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 07:16:23.875017 kubelet[1848]: I0213 07:16:23.874994 1848 state_mem.go:35] "Initializing new in-memory state store" Feb 13 07:16:23.877615 systemd[1]: Created slice kubepods.slice. Feb 13 07:16:23.880038 systemd[1]: Created slice kubepods-burstable.slice. Feb 13 07:16:23.881880 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 13 07:16:23.896955 kubelet[1848]: I0213 07:16:23.896898 1848 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 07:16:23.897070 kubelet[1848]: I0213 07:16:23.897022 1848 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 07:16:23.897310 kubelet[1848]: E0213 07:16:23.897300 1848 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.31\" not found" Feb 13 07:16:23.964919 kubelet[1848]: I0213 07:16:23.964892 1848 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.31" Feb 13 07:16:24.019783 kubelet[1848]: I0213 07:16:24.019767 1848 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 07:16:24.035981 kubelet[1848]: I0213 07:16:24.035962 1848 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 13 07:16:24.035981 kubelet[1848]: I0213 07:16:24.035982 1848 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 13 07:16:24.036101 kubelet[1848]: I0213 07:16:24.035997 1848 kubelet.go:2113] "Starting kubelet main sync loop" Feb 13 07:16:24.036101 kubelet[1848]: E0213 07:16:24.036039 1848 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 07:16:24.059990 kubelet[1848]: I0213 07:16:24.059920 1848 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.31" Feb 13 07:16:24.074908 kubelet[1848]: I0213 07:16:24.074804 1848 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 07:16:24.075560 env[1473]: time="2024-02-13T07:16:24.075434276Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 07:16:24.076560 kubelet[1848]: I0213 07:16:24.075982 1848 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 07:16:24.854634 kubelet[1848]: E0213 07:16:24.854534 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:24.854634 kubelet[1848]: I0213 07:16:24.854552 1848 apiserver.go:52] "Watching apiserver" Feb 13 07:16:25.057161 kubelet[1848]: I0213 07:16:25.057069 1848 topology_manager.go:210] "Topology Admit Handler" Feb 13 07:16:25.057490 kubelet[1848]: I0213 07:16:25.057367 1848 topology_manager.go:210] "Topology Admit Handler" Feb 13 07:16:25.066631 kubelet[1848]: I0213 07:16:25.066556 1848 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 07:16:25.073063 systemd[1]: Created slice kubepods-besteffort-pod7a2bbe42_fbb0_474d_981a_296309708f25.slice. 
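"No cni config template is specified, wait for other system components to drop the config" means containerd is waiting for a CNI configuration to appear under its conf_dir (by default /etc/cni/net.d); in this cluster that file is written by the Cilium agent once it is running. For orientation only, such a config is roughly of this shape; the exact file name and fields vary by Cilium version and are an assumption here, not something recorded in the log:

    {
      "cniVersion": "0.3.1",
      "name": "cilium",
      "type": "cilium-cni"
    }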
Feb 13 07:16:25.073726 kubelet[1848]: I0213 07:16:25.073183 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrqrs\" (UniqueName: \"kubernetes.io/projected/7a2bbe42-fbb0-474d-981a-296309708f25-kube-api-access-wrqrs\") pod \"kube-proxy-qwkvs\" (UID: \"7a2bbe42-fbb0-474d-981a-296309708f25\") " pod="kube-system/kube-proxy-qwkvs" Feb 13 07:16:25.073726 kubelet[1848]: I0213 07:16:25.073288 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cni-path\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.073726 kubelet[1848]: I0213 07:16:25.073517 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-lib-modules\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.073726 kubelet[1848]: I0213 07:16:25.073650 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-config-path\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.074161 kubelet[1848]: I0213 07:16:25.073744 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-host-proc-sys-net\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.074161 kubelet[1848]: I0213 07:16:25.073853 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a2bbe42-fbb0-474d-981a-296309708f25-xtables-lock\") pod \"kube-proxy-qwkvs\" (UID: \"7a2bbe42-fbb0-474d-981a-296309708f25\") " pod="kube-system/kube-proxy-qwkvs" Feb 13 07:16:25.074161 kubelet[1848]: I0213 07:16:25.074041 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a2bbe42-fbb0-474d-981a-296309708f25-lib-modules\") pod \"kube-proxy-qwkvs\" (UID: \"7a2bbe42-fbb0-474d-981a-296309708f25\") " pod="kube-system/kube-proxy-qwkvs" Feb 13 07:16:25.074485 kubelet[1848]: I0213 07:16:25.074176 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-run\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.074485 kubelet[1848]: I0213 07:16:25.074337 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-etc-cni-netd\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.074686 kubelet[1848]: I0213 07:16:25.074545 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4k64l\" (UniqueName: \"kubernetes.io/projected/987bdf09-d1e1-4223-93b7-ba2e9318f38f-kube-api-access-4k64l\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.074686 kubelet[1848]: I0213 07:16:25.074640 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-bpf-maps\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.074889 kubelet[1848]: I0213 07:16:25.074706 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-hostproc\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.074889 kubelet[1848]: I0213 07:16:25.074795 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-cgroup\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.074889 kubelet[1848]: I0213 07:16:25.074864 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/987bdf09-d1e1-4223-93b7-ba2e9318f38f-hubble-tls\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.075192 kubelet[1848]: I0213 07:16:25.074927 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7a2bbe42-fbb0-474d-981a-296309708f25-kube-proxy\") pod \"kube-proxy-qwkvs\" (UID: \"7a2bbe42-fbb0-474d-981a-296309708f25\") " pod="kube-system/kube-proxy-qwkvs" Feb 13 07:16:25.075192 kubelet[1848]: I0213 07:16:25.075091 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-xtables-lock\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.075390 kubelet[1848]: I0213 07:16:25.075222 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/987bdf09-d1e1-4223-93b7-ba2e9318f38f-clustermesh-secrets\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.075390 kubelet[1848]: I0213 07:16:25.075293 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-host-proc-sys-kernel\") pod \"cilium-2q9px\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " pod="kube-system/cilium-2q9px" Feb 13 07:16:25.075390 kubelet[1848]: I0213 07:16:25.075336 1848 reconciler.go:41] "Reconciler: start to sync state" Feb 13 07:16:25.124781 systemd[1]: Created slice kubepods-burstable-pod987bdf09_d1e1_4223_93b7_ba2e9318f38f.slice. 
Feb 13 07:16:25.183745 sudo[1604]: pam_unix(sudo:session): session closed for user root Feb 13 07:16:25.188710 sshd[1600]: pam_unix(sshd:session): session closed for user core Feb 13 07:16:25.194747 systemd[1]: sshd@6-139.178.90.101:22-139.178.68.195:40878.service: Deactivated successfully. Feb 13 07:16:25.196824 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 07:16:25.198884 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. Feb 13 07:16:25.201139 systemd-logind[1462]: Removed session 9. Feb 13 07:16:25.854858 kubelet[1848]: E0213 07:16:25.854743 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:26.178639 kubelet[1848]: E0213 07:16:26.178429 1848 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 13 07:16:26.178891 kubelet[1848]: E0213 07:16:26.178646 1848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7a2bbe42-fbb0-474d-981a-296309708f25-kube-proxy podName:7a2bbe42-fbb0-474d-981a-296309708f25 nodeName:}" failed. No retries permitted until 2024-02-13 07:16:26.678567904 +0000 UTC m=+3.034861029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/7a2bbe42-fbb0-474d-981a-296309708f25-kube-proxy") pod "kube-proxy-qwkvs" (UID: "7a2bbe42-fbb0-474d-981a-296309708f25") : failed to sync configmap cache: timed out waiting for the condition Feb 13 07:16:26.254655 kubelet[1848]: I0213 07:16:26.254551 1848 request.go:690] Waited for 1.196158517s due to client-side throttling, not priority and fairness, request: GET:https://139.178.89.111:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&limit=500&resourceVersion=0 Feb 13 07:16:26.636088 env[1473]: time="2024-02-13T07:16:26.635946723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2q9px,Uid:987bdf09-d1e1-4223-93b7-ba2e9318f38f,Namespace:kube-system,Attempt:0,}" Feb 13 07:16:26.855561 kubelet[1848]: E0213 07:16:26.855505 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:26.919631 env[1473]: time="2024-02-13T07:16:26.919429305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwkvs,Uid:7a2bbe42-fbb0-474d-981a-296309708f25,Namespace:kube-system,Attempt:0,}" Feb 13 07:16:27.301567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776369554.mount: Deactivated successfully. 
Feb 13 07:16:27.303564 env[1473]: time="2024-02-13T07:16:27.303509984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:27.304780 env[1473]: time="2024-02-13T07:16:27.304739696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:27.305409 env[1473]: time="2024-02-13T07:16:27.305346197Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:27.306111 env[1473]: time="2024-02-13T07:16:27.306072178Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:27.306518 env[1473]: time="2024-02-13T07:16:27.306478216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:27.307657 env[1473]: time="2024-02-13T07:16:27.307616921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:27.308016 env[1473]: time="2024-02-13T07:16:27.307971071Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:27.309142 env[1473]: time="2024-02-13T07:16:27.309103718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:27.317231 env[1473]: time="2024-02-13T07:16:27.317197745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:16:27.317231 env[1473]: time="2024-02-13T07:16:27.317218881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:16:27.317231 env[1473]: time="2024-02-13T07:16:27.317228858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:16:27.317360 env[1473]: time="2024-02-13T07:16:27.317291873Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52b209dcc13ddffc9db26fb5491957f94f57c90d97e759aafdbf574f485c0d6d pid=1960 runtime=io.containerd.runc.v2 Feb 13 07:16:27.317360 env[1473]: time="2024-02-13T07:16:27.317311187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:16:27.317360 env[1473]: time="2024-02-13T07:16:27.317331962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:16:27.317360 env[1473]: time="2024-02-13T07:16:27.317339567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:16:27.317446 env[1473]: time="2024-02-13T07:16:27.317399073Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33 pid=1961 runtime=io.containerd.runc.v2 Feb 13 07:16:27.324240 systemd[1]: Started cri-containerd-52b209dcc13ddffc9db26fb5491957f94f57c90d97e759aafdbf574f485c0d6d.scope. Feb 13 07:16:27.325169 systemd[1]: Started cri-containerd-7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33.scope. Feb 13 07:16:27.334682 env[1473]: time="2024-02-13T07:16:27.334654078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2q9px,Uid:987bdf09-d1e1-4223-93b7-ba2e9318f38f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\"" Feb 13 07:16:27.334833 env[1473]: time="2024-02-13T07:16:27.334814502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwkvs,Uid:7a2bbe42-fbb0-474d-981a-296309708f25,Namespace:kube-system,Attempt:0,} returns sandbox id \"52b209dcc13ddffc9db26fb5491957f94f57c90d97e759aafdbf574f485c0d6d\"" Feb 13 07:16:27.335595 env[1473]: time="2024-02-13T07:16:27.335580996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 13 07:16:27.855903 kubelet[1848]: E0213 07:16:27.855880 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:28.184015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685369268.mount: Deactivated successfully. Feb 13 07:16:28.475234 env[1473]: time="2024-02-13T07:16:28.475181837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:28.475823 env[1473]: time="2024-02-13T07:16:28.475812634Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:28.476526 env[1473]: time="2024-02-13T07:16:28.476463478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:28.477236 env[1473]: time="2024-02-13T07:16:28.477223941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:28.477593 env[1473]: time="2024-02-13T07:16:28.477549709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 13 07:16:28.478088 env[1473]: time="2024-02-13T07:16:28.478069151Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 07:16:28.478728 env[1473]: time="2024-02-13T07:16:28.478715602Z" level=info msg="CreateContainer within sandbox \"52b209dcc13ddffc9db26fb5491957f94f57c90d97e759aafdbf574f485c0d6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 07:16:28.484158 env[1473]: time="2024-02-13T07:16:28.484110441Z" level=info msg="CreateContainer within 
sandbox \"52b209dcc13ddffc9db26fb5491957f94f57c90d97e759aafdbf574f485c0d6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6ecb54185740234440b8ee07bfc0ba242a7820213d4bb4c0250bd8f8d986eb3e\"" Feb 13 07:16:28.484522 env[1473]: time="2024-02-13T07:16:28.484477751Z" level=info msg="StartContainer for \"6ecb54185740234440b8ee07bfc0ba242a7820213d4bb4c0250bd8f8d986eb3e\"" Feb 13 07:16:28.493247 systemd[1]: Started cri-containerd-6ecb54185740234440b8ee07bfc0ba242a7820213d4bb4c0250bd8f8d986eb3e.scope. Feb 13 07:16:28.506761 env[1473]: time="2024-02-13T07:16:28.506735719Z" level=info msg="StartContainer for \"6ecb54185740234440b8ee07bfc0ba242a7820213d4bb4c0250bd8f8d986eb3e\" returns successfully" Feb 13 07:16:28.856879 kubelet[1848]: E0213 07:16:28.856757 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:29.059226 kubelet[1848]: I0213 07:16:29.059134 1848 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qwkvs" podStartSLOduration=-9.223372031795725e+09 pod.CreationTimestamp="2024-02-13 07:16:24 +0000 UTC" firstStartedPulling="2024-02-13 07:16:27.335366057 +0000 UTC m=+3.691659134" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:16:29.05866172 +0000 UTC m=+5.414954869" watchObservedRunningTime="2024-02-13 07:16:29.059050066 +0000 UTC m=+5.415343216" Feb 13 07:16:29.857036 kubelet[1848]: E0213 07:16:29.856926 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:30.857679 kubelet[1848]: E0213 07:16:30.857664 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:31.858263 kubelet[1848]: E0213 07:16:31.858249 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:32.775357 systemd-resolved[1417]: Clock change detected. Flushing caches. Feb 13 07:16:32.775380 systemd-timesyncd[1418]: Contacted time server [2605:9880:200:600:35:ddc:8154:8]:123 (2.flatcar.pool.ntp.org). Feb 13 07:16:32.775408 systemd-timesyncd[1418]: Initial clock synchronization to Tue 2024-02-13 07:16:32.775284 UTC. Feb 13 07:16:33.059062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670095753.mount: Deactivated successfully. 
Feb 13 07:16:33.767712 kubelet[1848]: E0213 07:16:33.767667 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:34.730892 env[1473]: time="2024-02-13T07:16:34.730839415Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:34.732115 env[1473]: time="2024-02-13T07:16:34.732052739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:34.734047 env[1473]: time="2024-02-13T07:16:34.733986494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:34.734953 env[1473]: time="2024-02-13T07:16:34.734886664Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 07:16:34.736850 env[1473]: time="2024-02-13T07:16:34.736787214Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:16:34.740954 kubelet[1848]: I0213 07:16:34.740902 1848 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 07:16:34.744451 env[1473]: time="2024-02-13T07:16:34.744385056Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\"" Feb 13 07:16:34.744869 env[1473]: time="2024-02-13T07:16:34.744807155Z" level=info msg="StartContainer for \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\"" Feb 13 07:16:34.759910 systemd[1]: Started cri-containerd-81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30.scope. Feb 13 07:16:34.767777 kubelet[1848]: E0213 07:16:34.767758 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:34.772206 env[1473]: time="2024-02-13T07:16:34.772137172Z" level=info msg="StartContainer for \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\" returns successfully" Feb 13 07:16:34.777180 systemd[1]: cri-containerd-81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30.scope: Deactivated successfully. Feb 13 07:16:35.744595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30-rootfs.mount: Deactivated successfully. 
Feb 13 07:16:35.768750 kubelet[1848]: E0213 07:16:35.768688 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:36.072781 env[1473]: time="2024-02-13T07:16:36.072499571Z" level=info msg="shim disconnected" id=81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30 Feb 13 07:16:36.072781 env[1473]: time="2024-02-13T07:16:36.072641614Z" level=warning msg="cleaning up after shim disconnected" id=81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30 namespace=k8s.io Feb 13 07:16:36.072781 env[1473]: time="2024-02-13T07:16:36.072673180Z" level=info msg="cleaning up dead shim" Feb 13 07:16:36.088011 env[1473]: time="2024-02-13T07:16:36.087902454Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:16:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2226 runtime=io.containerd.runc.v2\n" Feb 13 07:16:36.768936 kubelet[1848]: E0213 07:16:36.768871 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:36.980135 env[1473]: time="2024-02-13T07:16:36.980005389Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 07:16:36.996349 env[1473]: time="2024-02-13T07:16:36.996327717Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\"" Feb 13 07:16:36.996509 env[1473]: time="2024-02-13T07:16:36.996495667Z" level=info msg="StartContainer for \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\"" Feb 13 07:16:37.004616 systemd[1]: Started cri-containerd-059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa.scope. Feb 13 07:16:37.023182 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 07:16:37.023425 systemd[1]: Stopped systemd-sysctl.service. Feb 13 07:16:37.023534 systemd[1]: Stopping systemd-sysctl.service... Feb 13 07:16:37.024441 systemd[1]: Starting systemd-sysctl.service... Feb 13 07:16:37.025502 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 07:16:37.027335 systemd[1]: cri-containerd-059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa.scope: Deactivated successfully. Feb 13 07:16:37.027407 env[1473]: time="2024-02-13T07:16:37.027320736Z" level=info msg="StartContainer for \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\" returns successfully" Feb 13 07:16:37.028810 systemd[1]: Finished systemd-sysctl.service. 
Feb 13 07:16:37.038174 env[1473]: time="2024-02-13T07:16:37.038129470Z" level=info msg="shim disconnected" id=059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa Feb 13 07:16:37.038174 env[1473]: time="2024-02-13T07:16:37.038161076Z" level=warning msg="cleaning up after shim disconnected" id=059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa namespace=k8s.io Feb 13 07:16:37.038174 env[1473]: time="2024-02-13T07:16:37.038168858Z" level=info msg="cleaning up dead shim" Feb 13 07:16:37.042523 env[1473]: time="2024-02-13T07:16:37.042504822Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:16:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2288 runtime=io.containerd.runc.v2\n" Feb 13 07:16:37.769736 kubelet[1848]: E0213 07:16:37.769613 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:37.986834 env[1473]: time="2024-02-13T07:16:37.986690768Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 07:16:37.995265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa-rootfs.mount: Deactivated successfully. Feb 13 07:16:37.998825 env[1473]: time="2024-02-13T07:16:37.998777941Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\"" Feb 13 07:16:37.999086 env[1473]: time="2024-02-13T07:16:37.999039152Z" level=info msg="StartContainer for \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\"" Feb 13 07:16:38.008028 systemd[1]: Started cri-containerd-ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98.scope. Feb 13 07:16:38.021324 env[1473]: time="2024-02-13T07:16:38.021268487Z" level=info msg="StartContainer for \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\" returns successfully" Feb 13 07:16:38.022694 systemd[1]: cri-containerd-ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98.scope: Deactivated successfully. 
Feb 13 07:16:38.054999 env[1473]: time="2024-02-13T07:16:38.054864699Z" level=info msg="shim disconnected" id=ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98 Feb 13 07:16:38.055419 env[1473]: time="2024-02-13T07:16:38.054998912Z" level=warning msg="cleaning up after shim disconnected" id=ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98 namespace=k8s.io Feb 13 07:16:38.055419 env[1473]: time="2024-02-13T07:16:38.055030522Z" level=info msg="cleaning up dead shim" Feb 13 07:16:38.071840 env[1473]: time="2024-02-13T07:16:38.071720858Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:16:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2344 runtime=io.containerd.runc.v2\n" Feb 13 07:16:38.770668 kubelet[1848]: E0213 07:16:38.770593 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:38.994427 env[1473]: time="2024-02-13T07:16:38.994337460Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 07:16:38.994930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98-rootfs.mount: Deactivated successfully. Feb 13 07:16:39.000543 env[1473]: time="2024-02-13T07:16:39.000526162Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\"" Feb 13 07:16:39.000932 env[1473]: time="2024-02-13T07:16:39.000879187Z" level=info msg="StartContainer for \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\"" Feb 13 07:16:39.009137 systemd[1]: Started cri-containerd-20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4.scope. Feb 13 07:16:39.020409 env[1473]: time="2024-02-13T07:16:39.020355216Z" level=info msg="StartContainer for \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\" returns successfully" Feb 13 07:16:39.020642 systemd[1]: cri-containerd-20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4.scope: Deactivated successfully. Feb 13 07:16:39.030218 env[1473]: time="2024-02-13T07:16:39.030155201Z" level=info msg="shim disconnected" id=20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4 Feb 13 07:16:39.030218 env[1473]: time="2024-02-13T07:16:39.030185623Z" level=warning msg="cleaning up after shim disconnected" id=20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4 namespace=k8s.io Feb 13 07:16:39.030218 env[1473]: time="2024-02-13T07:16:39.030195085Z" level=info msg="cleaning up dead shim" Feb 13 07:16:39.034374 env[1473]: time="2024-02-13T07:16:39.034350891Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:16:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2398 runtime=io.containerd.runc.v2\n" Feb 13 07:16:39.771789 kubelet[1848]: E0213 07:16:39.771662 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:39.995984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4-rootfs.mount: Deactivated successfully. 
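The container names created above — mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state — are Cilium's init containers, executed in that order from the same image before the agent proper (started in the next entries). A sketch of the corresponding DaemonSet fragment; the names and image digest are taken from the log, everything else is the usual layout and therefore an assumption:

    initContainers:
      - name: mount-cgroup
        image: &cilium quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5
      - name: apply-sysctl-overwrites
        image: *cilium
      - name: mount-bpf-fs
        image: *cilium
      - name: clean-cilium-state
        image: *cilium
    containers:
      - name: cilium-agent
        image: *cilium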
Feb 13 07:16:39.998473 env[1473]: time="2024-02-13T07:16:39.998454409Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 07:16:40.004009 env[1473]: time="2024-02-13T07:16:40.003964880Z" level=info msg="CreateContainer within sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\"" Feb 13 07:16:40.004205 env[1473]: time="2024-02-13T07:16:40.004169996Z" level=info msg="StartContainer for \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\"" Feb 13 07:16:40.012766 systemd[1]: Started cri-containerd-b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4.scope. Feb 13 07:16:40.028177 env[1473]: time="2024-02-13T07:16:40.028111303Z" level=info msg="StartContainer for \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\" returns successfully" Feb 13 07:16:40.093566 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 07:16:40.119195 kubelet[1848]: I0213 07:16:40.119182 1848 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 13 07:16:40.236575 kernel: Initializing XFRM netlink socket Feb 13 07:16:40.249615 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 07:16:40.772124 kubelet[1848]: E0213 07:16:40.772005 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:41.772812 kubelet[1848]: E0213 07:16:41.772695 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:41.851160 systemd-networkd[1318]: cilium_host: Link UP Feb 13 07:16:41.851241 systemd-networkd[1318]: cilium_net: Link UP Feb 13 07:16:41.858353 systemd-networkd[1318]: cilium_net: Gained carrier Feb 13 07:16:41.865495 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 13 07:16:41.865560 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 13 07:16:41.865574 systemd-networkd[1318]: cilium_host: Gained carrier Feb 13 07:16:41.910883 systemd-networkd[1318]: cilium_vxlan: Link UP Feb 13 07:16:41.910886 systemd-networkd[1318]: cilium_vxlan: Gained carrier Feb 13 07:16:42.049609 kernel: NET: Registered PF_ALG protocol family Feb 13 07:16:42.089657 systemd[1]: Started sshd@7-139.178.90.101:22-43.156.7.94:35544.service. Feb 13 07:16:42.261862 kubelet[1848]: I0213 07:16:42.261816 1848 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2q9px" podStartSLOduration=-9.223372018592997e+09 pod.CreationTimestamp="2024-02-13 07:16:24 +0000 UTC" firstStartedPulling="2024-02-13 07:16:27.335435416 +0000 UTC m=+3.691728488" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:16:41.039326087 +0000 UTC m=+16.486440500" watchObservedRunningTime="2024-02-13 07:16:42.261779452 +0000 UTC m=+17.708893802" Feb 13 07:16:42.262064 kubelet[1848]: I0213 07:16:42.262025 1848 topology_manager.go:210] "Topology Admit Handler" Feb 13 07:16:42.266040 systemd[1]: Created slice kubepods-besteffort-podd858708b_98d5_4617_9d48_cd79433d616c.slice. 
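The "Unprivileged eBPF is enabled with eIBRS on" warnings are the kernel flagging a Spectre v2 exposure, triggered here as BPF programs are loaded. Cilium loads its programs from a privileged agent, so restricting unprivileged bpf() normally does not affect it; where that mitigation is wanted, it is applied via sysctl. A sketch, as a local-policy assumption rather than anything the log prescribes:

    # /etc/sysctl.d/90-unprivileged-bpf.conf (hypothetical file name)
    # 1 = disable unprivileged bpf() until reboot (one-way), 2 = disable but allow changing later
    kernel.unprivileged_bpf_disabled = 1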
Feb 13 07:16:42.292903 kubelet[1848]: I0213 07:16:42.292877 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56xt\" (UniqueName: \"kubernetes.io/projected/d858708b-98d5-4617-9d48-cd79433d616c-kube-api-access-m56xt\") pod \"nginx-deployment-8ffc5cf85-7q5t6\" (UID: \"d858708b-98d5-4617-9d48-cd79433d616c\") " pod="default/nginx-deployment-8ffc5cf85-7q5t6" Feb 13 07:16:42.360685 systemd-networkd[1318]: cilium_net: Gained IPv6LL Feb 13 07:16:42.548289 systemd-networkd[1318]: lxc_health: Link UP Feb 13 07:16:42.568136 env[1473]: time="2024-02-13T07:16:42.568084461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-7q5t6,Uid:d858708b-98d5-4617-9d48-cd79433d616c,Namespace:default,Attempt:0,}" Feb 13 07:16:42.570432 systemd-networkd[1318]: lxc_health: Gained carrier Feb 13 07:16:42.570558 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 07:16:42.773602 kubelet[1848]: E0213 07:16:42.773580 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:42.824632 systemd-networkd[1318]: cilium_host: Gained IPv6LL Feb 13 07:16:43.095672 systemd-networkd[1318]: lxc9ebc9fa7d5f6: Link UP Feb 13 07:16:43.102638 sshd[2718]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.156.7.94 user=root Feb 13 07:16:43.116560 kernel: eth0: renamed from tmpcb934 Feb 13 07:16:43.155684 systemd-networkd[1318]: cilium_vxlan: Gained IPv6LL Feb 13 07:16:43.170642 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 07:16:43.170816 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9ebc9fa7d5f6: link becomes ready Feb 13 07:16:43.171100 systemd-networkd[1318]: lxc9ebc9fa7d5f6: Gained carrier Feb 13 07:16:43.774227 kubelet[1848]: E0213 07:16:43.774170 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:44.360718 systemd-networkd[1318]: lxc_health: Gained IPv6LL Feb 13 07:16:44.763540 kubelet[1848]: E0213 07:16:44.763508 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:44.774677 kubelet[1848]: E0213 07:16:44.774667 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:45.000686 systemd-networkd[1318]: lxc9ebc9fa7d5f6: Gained IPv6LL Feb 13 07:16:45.294617 sshd[2718]: Failed password for root from 43.156.7.94 port 35544 ssh2 Feb 13 07:16:45.465060 env[1473]: time="2024-02-13T07:16:45.464933095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:16:45.465060 env[1473]: time="2024-02-13T07:16:45.464992515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:16:45.465060 env[1473]: time="2024-02-13T07:16:45.465014870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:16:45.465675 env[1473]: time="2024-02-13T07:16:45.465309945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb934c36778c60581435b2391edc0146e02b88084a433a2d1908f3b1813d1310 pid=3049 runtime=io.containerd.runc.v2 Feb 13 07:16:45.486242 systemd[1]: Started cri-containerd-cb934c36778c60581435b2391edc0146e02b88084a433a2d1908f3b1813d1310.scope. Feb 13 07:16:45.542444 env[1473]: time="2024-02-13T07:16:45.542403381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-7q5t6,Uid:d858708b-98d5-4617-9d48-cd79433d616c,Namespace:default,Attempt:0,} returns sandbox id \"cb934c36778c60581435b2391edc0146e02b88084a433a2d1908f3b1813d1310\"" Feb 13 07:16:45.543446 env[1473]: time="2024-02-13T07:16:45.543423577Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 07:16:45.775102 kubelet[1848]: E0213 07:16:45.774990 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:45.894068 update_engine[1464]: I0213 07:16:45.893983 1464 update_attempter.cc:509] Updating boot flags... Feb 13 07:16:46.519470 sshd[2718]: Received disconnect from 43.156.7.94 port 35544:11: Bye Bye [preauth] Feb 13 07:16:46.519470 sshd[2718]: Disconnected from authenticating user root 43.156.7.94 port 35544 [preauth] Feb 13 07:16:46.522062 systemd[1]: sshd@7-139.178.90.101:22-43.156.7.94:35544.service: Deactivated successfully. Feb 13 07:16:46.775785 kubelet[1848]: E0213 07:16:46.775594 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:47.776594 kubelet[1848]: E0213 07:16:47.776484 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:48.666012 systemd[1]: Started sshd@8-139.178.90.101:22-46.101.146.252:41496.service. Feb 13 07:16:48.776961 kubelet[1848]: E0213 07:16:48.776851 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:49.560397 sshd[3097]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.101.146.252 user=root Feb 13 07:16:49.777271 kubelet[1848]: E0213 07:16:49.777162 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:50.777966 kubelet[1848]: E0213 07:16:50.777885 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:51.779075 kubelet[1848]: E0213 07:16:51.779003 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:52.235060 sshd[3097]: Failed password for root from 46.101.146.252 port 41496 ssh2 Feb 13 07:16:52.779667 kubelet[1848]: E0213 07:16:52.779582 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:52.946118 sshd[3097]: Received disconnect from 46.101.146.252 port 41496:11: Bye Bye [preauth] Feb 13 07:16:52.946118 sshd[3097]: Disconnected from authenticating user root 46.101.146.252 port 41496 [preauth] Feb 13 07:16:52.948976 systemd[1]: sshd@8-139.178.90.101:22-46.101.146.252:41496.service: Deactivated successfully. 
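The sshd entries in this stretch are failed root password logins from unrelated hosts (43.156.7.94, 46.101.146.252), i.e. routine brute-force noise rather than anything cluster-related. Where password logins are not needed, the standard hardening is to disable them; the directives below are standard OpenSSH options, and the drop-in path is hypothetical and assumes sshd_config carries an Include for it:

    # /etc/ssh/sshd_config.d/10-hardening.conf (hypothetical drop-in)
    PermitRootLogin no
    PasswordAuthentication no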
Feb 13 07:16:53.780535 kubelet[1848]: E0213 07:16:53.780415 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:54.781911 kubelet[1848]: E0213 07:16:54.781781 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:55.782709 kubelet[1848]: E0213 07:16:55.782631 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:56.783933 kubelet[1848]: E0213 07:16:56.783853 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:57.785136 kubelet[1848]: E0213 07:16:57.785012 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:58.785741 kubelet[1848]: E0213 07:16:58.785622 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:16:59.786252 kubelet[1848]: E0213 07:16:59.786180 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:00.786757 kubelet[1848]: E0213 07:17:00.786677 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:01.787186 kubelet[1848]: E0213 07:17:01.787110 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:02.788099 kubelet[1848]: E0213 07:17:02.788030 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:03.788611 kubelet[1848]: E0213 07:17:03.788519 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:04.763803 kubelet[1848]: E0213 07:17:04.763730 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:04.789258 kubelet[1848]: E0213 07:17:04.789183 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:05.789973 kubelet[1848]: E0213 07:17:05.789846 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:06.790814 kubelet[1848]: E0213 07:17:06.790698 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:07.791415 kubelet[1848]: E0213 07:17:07.791293 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:08.792592 kubelet[1848]: E0213 07:17:08.792460 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:09.793260 kubelet[1848]: E0213 07:17:09.793140 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:10.793985 kubelet[1848]: E0213 07:17:10.793872 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:11.794943 kubelet[1848]: E0213 07:17:11.794834 1848 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:12.796037 kubelet[1848]: E0213 07:17:12.795923 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:13.797294 kubelet[1848]: E0213 07:17:13.797174 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:14.798536 kubelet[1848]: E0213 07:17:14.798425 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:15.798753 kubelet[1848]: E0213 07:17:15.798647 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:16.799350 kubelet[1848]: E0213 07:17:16.799245 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:17.799838 kubelet[1848]: E0213 07:17:17.799726 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:18.800974 kubelet[1848]: E0213 07:17:18.800873 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:19.801684 kubelet[1848]: E0213 07:17:19.801615 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:20.802549 kubelet[1848]: E0213 07:17:20.802423 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:21.802652 kubelet[1848]: E0213 07:17:21.802580 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:22.803596 kubelet[1848]: E0213 07:17:22.803469 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:23.803974 kubelet[1848]: E0213 07:17:23.803854 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:24.763727 kubelet[1848]: E0213 07:17:24.763657 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:24.804659 kubelet[1848]: E0213 07:17:24.804537 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:25.804960 kubelet[1848]: E0213 07:17:25.804829 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:26.805345 kubelet[1848]: E0213 07:17:26.805290 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:27.806514 kubelet[1848]: E0213 07:17:27.806397 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:28.807346 kubelet[1848]: E0213 07:17:28.807163 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:29.807542 kubelet[1848]: E0213 07:17:29.807463 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:30.808533 kubelet[1848]: E0213 
07:17:30.808407 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:31.809741 kubelet[1848]: E0213 07:17:31.809620 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:32.810374 kubelet[1848]: E0213 07:17:32.810262 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:33.811319 kubelet[1848]: E0213 07:17:33.811213 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:34.811491 kubelet[1848]: E0213 07:17:34.811380 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:35.812087 kubelet[1848]: E0213 07:17:35.811978 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:36.812458 kubelet[1848]: E0213 07:17:36.812378 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:37.813651 kubelet[1848]: E0213 07:17:37.813571 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:38.814293 kubelet[1848]: E0213 07:17:38.814180 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:39.814776 kubelet[1848]: E0213 07:17:39.814666 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:40.815270 kubelet[1848]: E0213 07:17:40.815163 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:41.815435 kubelet[1848]: E0213 07:17:41.815320 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:42.816291 kubelet[1848]: E0213 07:17:42.816181 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:43.817544 kubelet[1848]: E0213 07:17:43.817420 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:44.763510 kubelet[1848]: E0213 07:17:44.763400 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:44.818779 kubelet[1848]: E0213 07:17:44.818668 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:45.819768 kubelet[1848]: E0213 07:17:45.819662 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:46.820610 kubelet[1848]: E0213 07:17:46.820491 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:47.821346 kubelet[1848]: E0213 07:17:47.821228 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:48.822345 kubelet[1848]: E0213 07:17:48.822230 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 07:17:49.822827 kubelet[1848]: E0213 07:17:49.822717 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:50.823609 kubelet[1848]: E0213 07:17:50.823501 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:51.824981 kubelet[1848]: E0213 07:17:51.824732 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:52.825821 kubelet[1848]: E0213 07:17:52.825698 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:53.826610 kubelet[1848]: E0213 07:17:53.826503 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:54.826913 kubelet[1848]: E0213 07:17:54.826805 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:55.827823 kubelet[1848]: E0213 07:17:55.827699 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:56.828904 kubelet[1848]: E0213 07:17:56.828783 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:57.829853 kubelet[1848]: E0213 07:17:57.829734 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:58.830732 kubelet[1848]: E0213 07:17:58.830661 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:17:59.831013 kubelet[1848]: E0213 07:17:59.830902 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:00.831379 kubelet[1848]: E0213 07:18:00.831310 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:01.831816 kubelet[1848]: E0213 07:18:01.831710 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:02.831936 kubelet[1848]: E0213 07:18:02.831855 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:03.832310 kubelet[1848]: E0213 07:18:03.832204 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:04.763329 kubelet[1848]: E0213 07:18:04.763263 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:04.832788 kubelet[1848]: E0213 07:18:04.832707 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:05.833937 kubelet[1848]: E0213 07:18:05.833853 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:06.834639 kubelet[1848]: E0213 07:18:06.834541 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:07.834890 kubelet[1848]: E0213 07:18:07.834780 1848 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:08.835512 kubelet[1848]: E0213 07:18:08.835400 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:09.835875 kubelet[1848]: E0213 07:18:09.835817 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:09.979185 systemd[1]: Started sshd@9-139.178.90.101:22-1.117.181.161:37710.service. Feb 13 07:18:10.836962 kubelet[1848]: E0213 07:18:10.836837 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:11.486912 sshd[3113]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=1.117.181.161 user=root Feb 13 07:18:11.837262 kubelet[1848]: E0213 07:18:11.837020 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:12.837579 kubelet[1848]: E0213 07:18:12.837503 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:13.230772 systemd[1]: Started sshd@10-139.178.90.101:22-43.156.7.94:51892.service. Feb 13 07:18:13.838361 kubelet[1848]: E0213 07:18:13.838250 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:14.086055 sshd[3113]: Failed password for root from 1.117.181.161 port 37710 ssh2 Feb 13 07:18:14.245607 sshd[3116]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.156.7.94 user=root Feb 13 07:18:14.838418 kubelet[1848]: E0213 07:18:14.838360 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:14.863740 sshd[3113]: Received disconnect from 1.117.181.161 port 37710:11: Bye Bye [preauth] Feb 13 07:18:14.863740 sshd[3113]: Disconnected from authenticating user root 1.117.181.161 port 37710 [preauth] Feb 13 07:18:14.866293 systemd[1]: sshd@9-139.178.90.101:22-1.117.181.161:37710.service: Deactivated successfully. Feb 13 07:18:15.590368 sshd[3116]: Failed password for root from 43.156.7.94 port 51892 ssh2 Feb 13 07:18:15.839603 kubelet[1848]: E0213 07:18:15.839477 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:16.038685 sshd[3116]: Received disconnect from 43.156.7.94 port 51892:11: Bye Bye [preauth] Feb 13 07:18:16.038685 sshd[3116]: Disconnected from authenticating user root 43.156.7.94 port 51892 [preauth] Feb 13 07:18:16.041247 systemd[1]: sshd@10-139.178.90.101:22-43.156.7.94:51892.service: Deactivated successfully. 
Feb 13 07:18:16.839831 kubelet[1848]: E0213 07:18:16.839709 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:17.840701 kubelet[1848]: E0213 07:18:17.840644 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:18.841344 kubelet[1848]: E0213 07:18:18.841226 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:19.841702 kubelet[1848]: E0213 07:18:19.841597 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:20.841988 kubelet[1848]: E0213 07:18:20.841863 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:21.842970 kubelet[1848]: E0213 07:18:21.842866 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:22.843589 kubelet[1848]: E0213 07:18:22.843479 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:23.844560 kubelet[1848]: E0213 07:18:23.844508 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:24.763597 kubelet[1848]: E0213 07:18:24.763512 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:24.844818 kubelet[1848]: E0213 07:18:24.844710 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:25.845289 kubelet[1848]: E0213 07:18:25.845176 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:26.846168 kubelet[1848]: E0213 07:18:26.846059 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:27.847193 kubelet[1848]: E0213 07:18:27.847071 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:28.847673 kubelet[1848]: E0213 07:18:28.847545 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:29.848213 kubelet[1848]: E0213 07:18:29.848105 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:30.848801 kubelet[1848]: E0213 07:18:30.848673 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:31.849240 kubelet[1848]: E0213 07:18:31.849125 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:32.849785 kubelet[1848]: E0213 07:18:32.849666 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:33.850945 kubelet[1848]: E0213 07:18:33.850823 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:34.852138 kubelet[1848]: E0213 07:18:34.852034 1848 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:35.852749 kubelet[1848]: E0213 07:18:35.852676 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:36.853261 kubelet[1848]: E0213 07:18:36.853191 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:37.854260 kubelet[1848]: E0213 07:18:37.854141 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:38.855317 kubelet[1848]: E0213 07:18:38.855197 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:39.855901 kubelet[1848]: E0213 07:18:39.855816 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:40.856581 kubelet[1848]: E0213 07:18:40.856494 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:41.856778 kubelet[1848]: E0213 07:18:41.856698 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:42.857659 kubelet[1848]: E0213 07:18:42.857585 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:43.858406 kubelet[1848]: E0213 07:18:43.858321 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:44.763898 kubelet[1848]: E0213 07:18:44.763781 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:44.859630 kubelet[1848]: E0213 07:18:44.859544 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:45.859930 kubelet[1848]: E0213 07:18:45.859809 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:46.860739 kubelet[1848]: E0213 07:18:46.860615 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:47.861945 kubelet[1848]: E0213 07:18:47.861814 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:48.863046 kubelet[1848]: E0213 07:18:48.862920 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:49.863228 kubelet[1848]: E0213 07:18:49.863118 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:50.863863 kubelet[1848]: E0213 07:18:50.863753 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:51.864962 kubelet[1848]: E0213 07:18:51.864839 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:52.865544 kubelet[1848]: E0213 07:18:52.865422 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:53.866240 kubelet[1848]: E0213 
07:18:53.866130 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:54.866740 kubelet[1848]: E0213 07:18:54.866663 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:55.867298 kubelet[1848]: E0213 07:18:55.867221 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:56.868479 kubelet[1848]: E0213 07:18:56.868353 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:57.868814 kubelet[1848]: E0213 07:18:57.868693 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:58.869698 kubelet[1848]: E0213 07:18:58.869395 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:18:59.870307 kubelet[1848]: E0213 07:18:59.870232 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:00.871482 kubelet[1848]: E0213 07:19:00.871356 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:01.871895 kubelet[1848]: E0213 07:19:01.871792 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:02.262667 systemd[1]: Started sshd@11-139.178.90.101:22-141.98.11.169:60324.service. Feb 13 07:19:02.872778 kubelet[1848]: E0213 07:19:02.872665 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:03.189958 sshd[3127]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:03.873405 kubelet[1848]: E0213 07:19:03.873283 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:04.763578 kubelet[1848]: E0213 07:19:04.763454 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:04.874280 kubelet[1848]: E0213 07:19:04.874163 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:04.925908 sshd[3127]: Failed password for root from 141.98.11.169 port 60324 ssh2 Feb 13 07:19:05.874519 kubelet[1848]: E0213 07:19:05.874393 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:06.591214 sshd[3127]: Connection closed by authenticating user root 141.98.11.169 port 60324 [preauth] Feb 13 07:19:06.593754 systemd[1]: sshd@11-139.178.90.101:22-141.98.11.169:60324.service: Deactivated successfully. Feb 13 07:19:06.766834 systemd[1]: Started sshd@12-139.178.90.101:22-141.98.11.169:41882.service. 
Feb 13 07:19:06.875710 kubelet[1848]: E0213 07:19:06.875505 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:07.706403 sshd[3131]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:07.876790 kubelet[1848]: E0213 07:19:07.876672 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:08.877381 kubelet[1848]: E0213 07:19:08.877264 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:09.878665 kubelet[1848]: E0213 07:19:09.878461 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:10.325981 sshd[3131]: Failed password for root from 141.98.11.169 port 41882 ssh2 Feb 13 07:19:10.879443 kubelet[1848]: E0213 07:19:10.879322 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:11.129242 sshd[3131]: Connection closed by authenticating user root 141.98.11.169 port 41882 [preauth] Feb 13 07:19:11.130257 systemd[1]: sshd@12-139.178.90.101:22-141.98.11.169:41882.service: Deactivated successfully. Feb 13 07:19:11.290612 systemd[1]: Started sshd@13-139.178.90.101:22-141.98.11.169:39432.service. Feb 13 07:19:11.879980 kubelet[1848]: E0213 07:19:11.879860 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:12.314542 sshd[3135]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:12.314816 sshd[3135]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 13 07:19:12.880941 kubelet[1848]: E0213 07:19:12.880842 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:13.881197 kubelet[1848]: E0213 07:19:13.881082 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:14.286723 sshd[3135]: Failed password for root from 141.98.11.169 port 39432 ssh2 Feb 13 07:19:14.881992 kubelet[1848]: E0213 07:19:14.881891 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:15.742526 sshd[3135]: Connection closed by authenticating user root 141.98.11.169 port 39432 [preauth] Feb 13 07:19:15.745096 systemd[1]: sshd@13-139.178.90.101:22-141.98.11.169:39432.service: Deactivated successfully. Feb 13 07:19:15.882195 kubelet[1848]: E0213 07:19:15.882084 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:15.943945 systemd[1]: Started sshd@14-139.178.90.101:22-141.98.11.169:43954.service. 
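Interleaved with the kubelet noise, sshd records a steady stream of failed root logins from 1.117.181.161, 43.156.7.94 and, above all, 141.98.11.169, each following the same pattern: pam_unix authentication failure, "Failed password", then a preauth disconnect and the per-connection systemd unit being deactivated. To quantify this kind of activity from the journal itself, a small parser along the following lines is enough; the regular expression simply matches the "Failed password for root from <ip> port <port>" form seen above, and reading the dump from stdin is just an example.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Count failed SSH password attempts per source IP in a journal dump.
// Usage (illustrative): go run main.go < journal.txt
func main() {
	re := regexp.MustCompile(`Failed password for (?:invalid user )?(\S+) from (\S+) port (\d+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]]++ // key on the source address
		}
	}
	for ip, n := range counts {
		fmt.Printf("%s\t%d failed attempts\n", ip, n)
	}
}
```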
Feb 13 07:19:16.883326 kubelet[1848]: E0213 07:19:16.883205 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:16.966527 sshd[3139]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:17.884110 kubelet[1848]: E0213 07:19:17.883996 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:18.487166 sshd[3139]: Failed password for root from 141.98.11.169 port 43954 ssh2 Feb 13 07:19:18.755150 sshd[3139]: Connection closed by authenticating user root 141.98.11.169 port 43954 [preauth] Feb 13 07:19:18.757591 systemd[1]: sshd@14-139.178.90.101:22-141.98.11.169:43954.service: Deactivated successfully. Feb 13 07:19:18.885253 kubelet[1848]: E0213 07:19:18.885185 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:18.943410 systemd[1]: Started sshd@15-139.178.90.101:22-141.98.11.169:51174.service. Feb 13 07:19:19.872085 sshd[3143]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:19.885891 kubelet[1848]: E0213 07:19:19.885849 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:20.351404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410853482.mount: Deactivated successfully. Feb 13 07:19:20.874180 env[1473]: time="2024-02-13T07:19:20.874155709Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:20.874794 env[1473]: time="2024-02-13T07:19:20.874757805Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:20.875673 env[1473]: time="2024-02-13T07:19:20.875639934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:20.876421 env[1473]: time="2024-02-13T07:19:20.876389008Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:20.876833 env[1473]: time="2024-02-13T07:19:20.876793949Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 13 07:19:20.877898 env[1473]: time="2024-02-13T07:19:20.877855995Z" level=info msg="CreateContainer within sandbox \"cb934c36778c60581435b2391edc0146e02b88084a433a2d1908f3b1813d1310\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 07:19:20.881937 env[1473]: time="2024-02-13T07:19:20.881922363Z" level=info msg="CreateContainer within sandbox \"cb934c36778c60581435b2391edc0146e02b88084a433a2d1908f3b1813d1310\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"08cef448e68bbf288e90388520a1149519f56eafc1f61d11818b41b4f54cfc2d\"" Feb 13 07:19:20.882232 env[1473]: time="2024-02-13T07:19:20.882145614Z" level=info msg="StartContainer for 
\"08cef448e68bbf288e90388520a1149519f56eafc1f61d11818b41b4f54cfc2d\"" Feb 13 07:19:20.886882 kubelet[1848]: E0213 07:19:20.886839 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:20.890219 systemd[1]: Started cri-containerd-08cef448e68bbf288e90388520a1149519f56eafc1f61d11818b41b4f54cfc2d.scope. Feb 13 07:19:20.901394 env[1473]: time="2024-02-13T07:19:20.901370762Z" level=info msg="StartContainer for \"08cef448e68bbf288e90388520a1149519f56eafc1f61d11818b41b4f54cfc2d\" returns successfully" Feb 13 07:19:21.432646 kubelet[1848]: I0213 07:19:21.432585 1848 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-7q5t6" podStartSLOduration=-9.223371877422295e+09 pod.CreationTimestamp="2024-02-13 07:16:42 +0000 UTC" firstStartedPulling="2024-02-13 07:16:45.543212706 +0000 UTC m=+20.990327057" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:19:21.431843524 +0000 UTC m=+176.878957937" watchObservedRunningTime="2024-02-13 07:19:21.432481738 +0000 UTC m=+176.879596131" Feb 13 07:19:21.888039 kubelet[1848]: E0213 07:19:21.887822 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:22.471340 sshd[3143]: Failed password for root from 141.98.11.169 port 51174 ssh2 Feb 13 07:19:22.888451 kubelet[1848]: E0213 07:19:22.888237 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:23.303522 sshd[3143]: Connection closed by authenticating user root 141.98.11.169 port 51174 [preauth] Feb 13 07:19:23.306056 systemd[1]: sshd@15-139.178.90.101:22-141.98.11.169:51174.service: Deactivated successfully. Feb 13 07:19:23.505751 systemd[1]: Started sshd@16-139.178.90.101:22-141.98.11.169:48222.service. Feb 13 07:19:23.888569 kubelet[1848]: E0213 07:19:23.888519 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:24.051351 kubelet[1848]: I0213 07:19:24.051297 1848 topology_manager.go:210] "Topology Admit Handler" Feb 13 07:19:24.061024 systemd[1]: Created slice kubepods-besteffort-podf938aea8_8a63_4bcc_b222_87d38ced1ea4.slice. 
Feb 13 07:19:24.158794 kubelet[1848]: I0213 07:19:24.158635 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f938aea8-8a63-4bcc-b222-87d38ced1ea4-data\") pod \"nfs-server-provisioner-0\" (UID: \"f938aea8-8a63-4bcc-b222-87d38ced1ea4\") " pod="default/nfs-server-provisioner-0" Feb 13 07:19:24.158794 kubelet[1848]: I0213 07:19:24.158734 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx6c5\" (UniqueName: \"kubernetes.io/projected/f938aea8-8a63-4bcc-b222-87d38ced1ea4-kube-api-access-jx6c5\") pod \"nfs-server-provisioner-0\" (UID: \"f938aea8-8a63-4bcc-b222-87d38ced1ea4\") " pod="default/nfs-server-provisioner-0" Feb 13 07:19:24.364112 env[1473]: time="2024-02-13T07:19:24.364010957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f938aea8-8a63-4bcc-b222-87d38ced1ea4,Namespace:default,Attempt:0,}" Feb 13 07:19:24.417610 systemd-networkd[1318]: lxcdc6f97ce53e9: Link UP Feb 13 07:19:24.438859 sshd[3215]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:24.441642 kernel: eth0: renamed from tmp2cf62 Feb 13 07:19:24.473462 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 07:19:24.473544 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdc6f97ce53e9: link becomes ready Feb 13 07:19:24.473634 systemd-networkd[1318]: lxcdc6f97ce53e9: Gained carrier Feb 13 07:19:24.763577 kubelet[1848]: E0213 07:19:24.763371 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:24.787994 env[1473]: time="2024-02-13T07:19:24.787769632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:19:24.787994 env[1473]: time="2024-02-13T07:19:24.787871054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:19:24.787994 env[1473]: time="2024-02-13T07:19:24.787907004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:19:24.788417 env[1473]: time="2024-02-13T07:19:24.788312744Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cf62829581fae2bf01d62bc82cbce04a4c08d66ac5a6d6d413889cf92f16277 pid=3332 runtime=io.containerd.runc.v2 Feb 13 07:19:24.812427 systemd[1]: Started cri-containerd-2cf62829581fae2bf01d62bc82cbce04a4c08d66ac5a6d6d413889cf92f16277.scope. 
Feb 13 07:19:24.869673 env[1473]: time="2024-02-13T07:19:24.869638557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f938aea8-8a63-4bcc-b222-87d38ced1ea4,Namespace:default,Attempt:0,} returns sandbox id \"2cf62829581fae2bf01d62bc82cbce04a4c08d66ac5a6d6d413889cf92f16277\"" Feb 13 07:19:24.870440 env[1473]: time="2024-02-13T07:19:24.870424007Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 07:19:24.889827 kubelet[1848]: E0213 07:19:24.889730 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:25.832731 systemd-networkd[1318]: lxcdc6f97ce53e9: Gained IPv6LL Feb 13 07:19:25.890743 kubelet[1848]: E0213 07:19:25.890678 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:26.746359 sshd[3215]: Failed password for root from 141.98.11.169 port 48222 ssh2 Feb 13 07:19:26.891415 kubelet[1848]: E0213 07:19:26.891395 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:27.460092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862435279.mount: Deactivated successfully. Feb 13 07:19:27.858487 sshd[3215]: Connection closed by authenticating user root 141.98.11.169 port 48222 [preauth] Feb 13 07:19:27.859158 systemd[1]: sshd@16-139.178.90.101:22-141.98.11.169:48222.service: Deactivated successfully. Feb 13 07:19:27.892439 kubelet[1848]: E0213 07:19:27.892396 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:28.044071 systemd[1]: Started sshd@17-139.178.90.101:22-141.98.11.169:50548.service. 
Feb 13 07:19:28.620297 env[1473]: time="2024-02-13T07:19:28.620270660Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:28.621307 env[1473]: time="2024-02-13T07:19:28.621279945Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:28.622851 env[1473]: time="2024-02-13T07:19:28.622839007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:28.623731 env[1473]: time="2024-02-13T07:19:28.623719148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:28.624673 env[1473]: time="2024-02-13T07:19:28.624659913Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 07:19:28.625920 env[1473]: time="2024-02-13T07:19:28.625894116Z" level=info msg="CreateContainer within sandbox \"2cf62829581fae2bf01d62bc82cbce04a4c08d66ac5a6d6d413889cf92f16277\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 07:19:28.630228 env[1473]: time="2024-02-13T07:19:28.630213052Z" level=info msg="CreateContainer within sandbox \"2cf62829581fae2bf01d62bc82cbce04a4c08d66ac5a6d6d413889cf92f16277\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f859f742b06afd285fce512f0090329ccf11dcb616d8ae7755eea8f468ce3077\"" Feb 13 07:19:28.630371 env[1473]: time="2024-02-13T07:19:28.630360326Z" level=info msg="StartContainer for \"f859f742b06afd285fce512f0090329ccf11dcb616d8ae7755eea8f468ce3077\"" Feb 13 07:19:28.640147 systemd[1]: Started cri-containerd-f859f742b06afd285fce512f0090329ccf11dcb616d8ae7755eea8f468ce3077.scope. 
Feb 13 07:19:28.652176 env[1473]: time="2024-02-13T07:19:28.652125662Z" level=info msg="StartContainer for \"f859f742b06afd285fce512f0090329ccf11dcb616d8ae7755eea8f468ce3077\" returns successfully" Feb 13 07:19:28.893588 kubelet[1848]: E0213 07:19:28.893347 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:29.002363 sshd[3371]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:29.455225 kubelet[1848]: I0213 07:19:29.455156 1848 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372031399706e+09 pod.CreationTimestamp="2024-02-13 07:19:24 +0000 UTC" firstStartedPulling="2024-02-13 07:19:24.870268369 +0000 UTC m=+180.317382713" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:19:29.454926202 +0000 UTC m=+184.902040618" watchObservedRunningTime="2024-02-13 07:19:29.455070043 +0000 UTC m=+184.902184434" Feb 13 07:19:29.893881 kubelet[1848]: E0213 07:19:29.893658 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:30.894125 kubelet[1848]: E0213 07:19:30.894050 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:31.642150 sshd[3371]: Failed password for root from 141.98.11.169 port 50548 ssh2 Feb 13 07:19:31.894999 kubelet[1848]: E0213 07:19:31.894811 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:32.399940 sshd[3371]: Connection closed by authenticating user root 141.98.11.169 port 50548 [preauth] Feb 13 07:19:32.402494 systemd[1]: sshd@17-139.178.90.101:22-141.98.11.169:50548.service: Deactivated successfully. Feb 13 07:19:32.571562 systemd[1]: Started sshd@18-139.178.90.101:22-141.98.11.169:50124.service. Feb 13 07:19:32.895678 kubelet[1848]: E0213 07:19:32.895572 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:33.407300 sshd[3465]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:33.896786 kubelet[1848]: E0213 07:19:33.896666 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:34.897481 kubelet[1848]: E0213 07:19:34.897365 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:35.262828 sshd[3465]: Failed password for root from 141.98.11.169 port 50124 ssh2 Feb 13 07:19:35.898191 kubelet[1848]: E0213 07:19:35.898120 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:36.899491 kubelet[1848]: E0213 07:19:36.899371 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:36.902956 sshd[3465]: Connection closed by authenticating user root 141.98.11.169 port 50124 [preauth] Feb 13 07:19:36.905451 systemd[1]: sshd@18-139.178.90.101:22-141.98.11.169:50124.service: Deactivated successfully. Feb 13 07:19:37.077362 systemd[1]: Started sshd@19-139.178.90.101:22-141.98.11.169:52546.service. 
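The containerd entries above trace the usual CRI sequence for default/nfs-server-provisioner-0: RunPodSandbox returns sandbox 2cf62829…, PullImage resolves registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8 to a sha256 image reference, CreateContainer creates f859f742… inside that sandbox, and StartContainer runs it. The same gRPC API the kubelet is driving here can be exercised directly; the sketch below performs only the pull step against containerd's default CRI socket and is a minimal illustration, not what the kubelet actually runs (the socket path and timeout are assumptions).

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI socket on most distros (assumption for this sketch).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Mirrors the PullImage call logged above; the response carries the
	// resolved reference (the sha256:... digest seen in the journal).
	resp, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8"},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", resp.ImageRef)
}
```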
Feb 13 07:19:37.900051 kubelet[1848]: E0213 07:19:37.899937 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:37.914283 sshd[3480]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:38.718498 kubelet[1848]: I0213 07:19:38.718432 1848 topology_manager.go:210] "Topology Admit Handler" Feb 13 07:19:38.732303 systemd[1]: Created slice kubepods-besteffort-pod3fcb9a5e_cb44_4336_bdd3_6e472a2a33b9.slice. Feb 13 07:19:38.766252 kubelet[1848]: I0213 07:19:38.766180 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqdkn\" (UniqueName: \"kubernetes.io/projected/3fcb9a5e-cb44-4336-bdd3-6e472a2a33b9-kube-api-access-qqdkn\") pod \"test-pod-1\" (UID: \"3fcb9a5e-cb44-4336-bdd3-6e472a2a33b9\") " pod="default/test-pod-1" Feb 13 07:19:38.766642 kubelet[1848]: I0213 07:19:38.766311 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f5e14ad-346f-4535-b067-65374fcbb034\" (UniqueName: \"kubernetes.io/nfs/3fcb9a5e-cb44-4336-bdd3-6e472a2a33b9-pvc-0f5e14ad-346f-4535-b067-65374fcbb034\") pod \"test-pod-1\" (UID: \"3fcb9a5e-cb44-4336-bdd3-6e472a2a33b9\") " pod="default/test-pod-1" Feb 13 07:19:38.897561 kernel: FS-Cache: Loaded Feb 13 07:19:38.900779 kubelet[1848]: E0213 07:19:38.900766 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:38.932801 kernel: RPC: Registered named UNIX socket transport module. Feb 13 07:19:38.932913 kernel: RPC: Registered udp transport module. Feb 13 07:19:38.932929 kernel: RPC: Registered tcp transport module. Feb 13 07:19:38.937749 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 07:19:38.979604 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 13 07:19:39.105353 kernel: NFS: Registering the id_resolver key type Feb 13 07:19:39.105405 kernel: Key type id_resolver registered Feb 13 07:19:39.105420 kernel: Key type id_legacy registered Feb 13 07:19:39.266536 nfsidmap[3500]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-596fb49211' Feb 13 07:19:39.275312 nfsidmap[3501]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-596fb49211' Feb 13 07:19:39.338038 env[1473]: time="2024-02-13T07:19:39.337969488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3fcb9a5e-cb44-4336-bdd3-6e472a2a33b9,Namespace:default,Attempt:0,}" Feb 13 07:19:39.357005 systemd-networkd[1318]: lxc8cfa427cea89: Link UP Feb 13 07:19:39.373751 kernel: eth0: renamed from tmp9ac85 Feb 13 07:19:39.408304 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 07:19:39.408393 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8cfa427cea89: link becomes ready Feb 13 07:19:39.408630 systemd-networkd[1318]: lxc8cfa427cea89: Gained carrier Feb 13 07:19:39.591951 env[1473]: time="2024-02-13T07:19:39.591803334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:19:39.591951 env[1473]: time="2024-02-13T07:19:39.591880277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:19:39.591951 env[1473]: time="2024-02-13T07:19:39.591906939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:19:39.592344 env[1473]: time="2024-02-13T07:19:39.592171962Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ac8521f975d6e0d90f2e8ed3f779225a06f0db73998e7e72a0ad23345755919 pid=3560 runtime=io.containerd.runc.v2 Feb 13 07:19:39.612246 systemd[1]: Started cri-containerd-9ac8521f975d6e0d90f2e8ed3f779225a06f0db73998e7e72a0ad23345755919.scope. Feb 13 07:19:39.636498 env[1473]: time="2024-02-13T07:19:39.636474301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3fcb9a5e-cb44-4336-bdd3-6e472a2a33b9,Namespace:default,Attempt:0,} returns sandbox id \"9ac8521f975d6e0d90f2e8ed3f779225a06f0db73998e7e72a0ad23345755919\"" Feb 13 07:19:39.637212 env[1473]: time="2024-02-13T07:19:39.637175010Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 07:19:39.901583 kubelet[1848]: E0213 07:19:39.901353 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:39.986589 sshd[3480]: Failed password for root from 141.98.11.169 port 52546 ssh2 Feb 13 07:19:40.045538 env[1473]: time="2024-02-13T07:19:40.045418207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:40.048024 env[1473]: time="2024-02-13T07:19:40.047956175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:40.052573 env[1473]: time="2024-02-13T07:19:40.052451699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:40.057357 env[1473]: time="2024-02-13T07:19:40.057251234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:40.059727 env[1473]: time="2024-02-13T07:19:40.059613706Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 13 07:19:40.063900 env[1473]: time="2024-02-13T07:19:40.063795183Z" level=info msg="CreateContainer within sandbox \"9ac8521f975d6e0d90f2e8ed3f779225a06f0db73998e7e72a0ad23345755919\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 07:19:40.078771 env[1473]: time="2024-02-13T07:19:40.078691739Z" level=info msg="CreateContainer within sandbox \"9ac8521f975d6e0d90f2e8ed3f779225a06f0db73998e7e72a0ad23345755919\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"94906afc840b236a98d32922292952f1d40c7823ae9181233e5a0f2b6f256972\"" Feb 13 07:19:40.079791 env[1473]: time="2024-02-13T07:19:40.079669739Z" level=info msg="StartContainer for \"94906afc840b236a98d32922292952f1d40c7823ae9181233e5a0f2b6f256972\"" Feb 13 07:19:40.093915 systemd[1]: Started cri-containerd-94906afc840b236a98d32922292952f1d40c7823ae9181233e5a0f2b6f256972.scope. 
Feb 13 07:19:40.106206 env[1473]: time="2024-02-13T07:19:40.106181271Z" level=info msg="StartContainer for \"94906afc840b236a98d32922292952f1d40c7823ae9181233e5a0f2b6f256972\" returns successfully" Feb 13 07:19:40.902438 kubelet[1848]: E0213 07:19:40.902374 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:41.000874 systemd-networkd[1318]: lxc8cfa427cea89: Gained IPv6LL Feb 13 07:19:41.377050 sshd[3480]: Connection closed by authenticating user root 141.98.11.169 port 52546 [preauth] Feb 13 07:19:41.379518 systemd[1]: sshd@19-139.178.90.101:22-141.98.11.169:52546.service: Deactivated successfully. Feb 13 07:19:41.565478 systemd[1]: Started sshd@20-139.178.90.101:22-141.98.11.169:60210.service. Feb 13 07:19:41.902917 kubelet[1848]: E0213 07:19:41.902876 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:42.401840 sshd[3663]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 07:19:42.896345 update_engine[1464]: I0213 07:19:42.896229 1464 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 07:19:42.896345 update_engine[1464]: I0213 07:19:42.896310 1464 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 07:19:42.903742 kubelet[1848]: E0213 07:19:42.903636 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:42.905370 update_engine[1464]: I0213 07:19:42.905290 1464 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 07:19:42.906291 update_engine[1464]: I0213 07:19:42.906217 1464 omaha_request_params.cc:62] Current group set to lts Feb 13 07:19:42.906576 update_engine[1464]: I0213 07:19:42.906525 1464 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 07:19:42.906576 update_engine[1464]: I0213 07:19:42.906545 1464 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 07:19:42.906809 update_engine[1464]: I0213 07:19:42.906603 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 07:19:42.906809 update_engine[1464]: I0213 07:19:42.906678 1464 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 07:19:42.907252 update_engine[1464]: I0213 07:19:42.906843 1464 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 07:19:42.907252 update_engine[1464]: I0213 07:19:42.906862 1464 omaha_request_action.cc:271] Request: Feb 13 07:19:42.907252 update_engine[1464]: Feb 13 07:19:42.907252 update_engine[1464]: Feb 13 07:19:42.907252 update_engine[1464]: Feb 13 07:19:42.907252 update_engine[1464]: Feb 13 07:19:42.907252 update_engine[1464]: Feb 13 07:19:42.907252 update_engine[1464]: Feb 13 07:19:42.907252 update_engine[1464]: Feb 13 07:19:42.907252 update_engine[1464]: Feb 13 07:19:42.907252 update_engine[1464]: I0213 07:19:42.906872 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 07:19:42.908274 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 07:19:42.910074 update_engine[1464]: I0213 07:19:42.910001 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 07:19:42.910252 update_engine[1464]: E0213 07:19:42.910223 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 07:19:42.910478 update_engine[1464]: I0213 07:19:42.910423 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 07:19:43.904196 kubelet[1848]: E0213 07:19:43.904075 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:44.312420 systemd[1]: Started sshd@21-139.178.90.101:22-43.156.7.94:33222.service. Feb 13 07:19:44.762721 kubelet[1848]: E0213 07:19:44.762672 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:44.825683 sshd[3663]: Failed password for root from 141.98.11.169 port 60210 ssh2 Feb 13 07:19:44.904337 kubelet[1848]: E0213 07:19:44.904233 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:45.326085 sshd[3666]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.156.7.94 user=root Feb 13 07:19:45.905306 kubelet[1848]: E0213 07:19:45.905185 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:46.100946 sshd[3663]: Connection closed by authenticating user root 141.98.11.169 port 60210 [preauth] Feb 13 07:19:46.103505 systemd[1]: sshd@20-139.178.90.101:22-141.98.11.169:60210.service: Deactivated successfully. Feb 13 07:19:46.276398 systemd[1]: Started sshd@22-139.178.90.101:22-141.98.11.169:34496.service. 
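The update_engine burst above is the periodic Omaha update check. On this host the update server appears to be set to the literal string "disabled" (a common way of switching Flatcar's automatic updates off), so libcurl treats it as a hostname, fails with "Could not resolve host: disabled", and update_engine simply schedules a retry; nothing is actually broken. The lookup failure itself is easy to reproduce:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// "disabled" is not a resolvable name, which is exactly the error
	// libcurl reports in the update_engine entries above.
	_, err := net.LookupHost("disabled")
	fmt.Println(err) // e.g. lookup disabled: no such host
}
```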
Feb 13 07:19:46.486427 kubelet[1848]: I0213 07:19:46.486331 1848 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372014368542e+09 pod.CreationTimestamp="2024-02-13 07:19:24 +0000 UTC" firstStartedPulling="2024-02-13 07:19:39.637037327 +0000 UTC m=+195.084151668" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:19:40.486156401 +0000 UTC m=+195.933270853" watchObservedRunningTime="2024-02-13 07:19:46.486233773 +0000 UTC m=+201.933348179" Feb 13 07:19:46.510973 env[1473]: time="2024-02-13T07:19:46.510913252Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 07:19:46.513822 env[1473]: time="2024-02-13T07:19:46.513809337Z" level=info msg="StopContainer for \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\" with timeout 1 (s)" Feb 13 07:19:46.513931 env[1473]: time="2024-02-13T07:19:46.513916188Z" level=info msg="Stop container \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\" with signal terminated" Feb 13 07:19:46.517135 systemd-networkd[1318]: lxc_health: Link DOWN Feb 13 07:19:46.517138 systemd-networkd[1318]: lxc_health: Lost carrier Feb 13 07:19:46.562003 systemd[1]: cri-containerd-b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4.scope: Deactivated successfully. Feb 13 07:19:46.562154 systemd[1]: cri-containerd-b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4.scope: Consumed 5.604s CPU time. Feb 13 07:19:46.572314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4-rootfs.mount: Deactivated successfully. 
Feb 13 07:19:46.905958 kubelet[1848]: E0213 07:19:46.905736 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:47.162617 sshd[3666]: Failed password for root from 43.156.7.94 port 33222 ssh2 Feb 13 07:19:47.518431 env[1473]: time="2024-02-13T07:19:47.517971524Z" level=info msg="Kill container \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\"" Feb 13 07:19:47.519814 env[1473]: time="2024-02-13T07:19:47.519718896Z" level=info msg="shim disconnected" id=b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4 Feb 13 07:19:47.520024 env[1473]: time="2024-02-13T07:19:47.519811749Z" level=warning msg="cleaning up after shim disconnected" id=b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4 namespace=k8s.io Feb 13 07:19:47.520024 env[1473]: time="2024-02-13T07:19:47.519842712Z" level=info msg="cleaning up dead shim" Feb 13 07:19:47.532915 env[1473]: time="2024-02-13T07:19:47.532896166Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:19:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3712 runtime=io.containerd.runc.v2\n" Feb 13 07:19:47.534072 env[1473]: time="2024-02-13T07:19:47.534037778Z" level=info msg="StopContainer for \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\" returns successfully" Feb 13 07:19:47.534496 env[1473]: time="2024-02-13T07:19:47.534484135Z" level=info msg="StopPodSandbox for \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\"" Feb 13 07:19:47.534532 env[1473]: time="2024-02-13T07:19:47.534515102Z" level=info msg="Container to stop \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:19:47.534532 env[1473]: time="2024-02-13T07:19:47.534523912Z" level=info msg="Container to stop \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:19:47.534532 env[1473]: time="2024-02-13T07:19:47.534529919Z" level=info msg="Container to stop \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:19:47.534622 env[1473]: time="2024-02-13T07:19:47.534535701Z" level=info msg="Container to stop \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:19:47.534622 env[1473]: time="2024-02-13T07:19:47.534541149Z" level=info msg="Container to stop \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:19:47.535798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33-shm.mount: Deactivated successfully. Feb 13 07:19:47.537750 systemd[1]: cri-containerd-7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33.scope: Deactivated successfully. Feb 13 07:19:47.548970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33-rootfs.mount: Deactivated successfully. 
Feb 13 07:19:47.567618 env[1473]: time="2024-02-13T07:19:47.567475259Z" level=info msg="shim disconnected" id=7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33 Feb 13 07:19:47.568117 env[1473]: time="2024-02-13T07:19:47.567632089Z" level=warning msg="cleaning up after shim disconnected" id=7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33 namespace=k8s.io Feb 13 07:19:47.568117 env[1473]: time="2024-02-13T07:19:47.567682168Z" level=info msg="cleaning up dead shim" Feb 13 07:19:47.583940 env[1473]: time="2024-02-13T07:19:47.583824882Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:19:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3742 runtime=io.containerd.runc.v2\n" Feb 13 07:19:47.584577 env[1473]: time="2024-02-13T07:19:47.584481962Z" level=info msg="TearDown network for sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" successfully" Feb 13 07:19:47.584752 env[1473]: time="2024-02-13T07:19:47.584537605Z" level=info msg="StopPodSandbox for \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" returns successfully" Feb 13 07:19:47.731035 kubelet[1848]: I0213 07:19:47.730918 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-config-path\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.731367 kubelet[1848]: I0213 07:19:47.731076 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k64l\" (UniqueName: \"kubernetes.io/projected/987bdf09-d1e1-4223-93b7-ba2e9318f38f-kube-api-access-4k64l\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.731367 kubelet[1848]: I0213 07:19:47.731179 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-bpf-maps\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.731367 kubelet[1848]: I0213 07:19:47.731275 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-xtables-lock\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.731367 kubelet[1848]: I0213 07:19:47.731266 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.731905 kubelet[1848]: I0213 07:19:47.731374 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cni-path\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.731905 kubelet[1848]: I0213 07:19:47.731361 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.731905 kubelet[1848]: I0213 07:19:47.731434 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cni-path" (OuterVolumeSpecName: "cni-path") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.731905 kubelet[1848]: I0213 07:19:47.731492 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/987bdf09-d1e1-4223-93b7-ba2e9318f38f-hubble-tls\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.731905 kubelet[1848]: I0213 07:19:47.731627 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/987bdf09-d1e1-4223-93b7-ba2e9318f38f-clustermesh-secrets\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.731905 kubelet[1848]: W0213 07:19:47.731514 1848 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/987bdf09-d1e1-4223-93b7-ba2e9318f38f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 07:19:47.732603 kubelet[1848]: I0213 07:19:47.731737 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-host-proc-sys-kernel\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.732603 kubelet[1848]: I0213 07:19:47.731802 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.732603 kubelet[1848]: I0213 07:19:47.731850 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-etc-cni-netd\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.732603 kubelet[1848]: I0213 07:19:47.731949 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-lib-modules\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.732603 kubelet[1848]: I0213 07:19:47.732048 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-host-proc-sys-net\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.733147 kubelet[1848]: I0213 07:19:47.731944 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.733147 kubelet[1848]: I0213 07:19:47.732051 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.733147 kubelet[1848]: I0213 07:19:47.732155 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-run\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.733147 kubelet[1848]: I0213 07:19:47.732135 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.733147 kubelet[1848]: I0213 07:19:47.732251 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-hostproc\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.733699 kubelet[1848]: I0213 07:19:47.732246 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.733699 kubelet[1848]: I0213 07:19:47.732348 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-cgroup\") pod \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\" (UID: \"987bdf09-d1e1-4223-93b7-ba2e9318f38f\") " Feb 13 07:19:47.733699 kubelet[1848]: I0213 07:19:47.732333 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-hostproc" (OuterVolumeSpecName: "hostproc") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.733699 kubelet[1848]: I0213 07:19:47.732435 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:47.733699 kubelet[1848]: I0213 07:19:47.732475 1848 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-lib-modules\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.733699 kubelet[1848]: I0213 07:19:47.732547 1848 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-host-proc-sys-net\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.734306 kubelet[1848]: I0213 07:19:47.732637 1848 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-run\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.734306 kubelet[1848]: I0213 07:19:47.732694 1848 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cni-path\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.734306 kubelet[1848]: I0213 07:19:47.732748 1848 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-bpf-maps\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.734306 kubelet[1848]: I0213 07:19:47.732807 1848 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-xtables-lock\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.734306 kubelet[1848]: I0213 07:19:47.732862 1848 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-etc-cni-netd\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.734306 kubelet[1848]: I0213 07:19:47.732918 1848 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-host-proc-sys-kernel\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.737042 kubelet[1848]: I0213 07:19:47.736936 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 07:19:47.737660 kubelet[1848]: I0213 07:19:47.737633 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987bdf09-d1e1-4223-93b7-ba2e9318f38f-kube-api-access-4k64l" (OuterVolumeSpecName: "kube-api-access-4k64l") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "kube-api-access-4k64l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:19:47.737660 kubelet[1848]: I0213 07:19:47.737638 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/987bdf09-d1e1-4223-93b7-ba2e9318f38f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:19:47.737740 kubelet[1848]: I0213 07:19:47.737698 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987bdf09-d1e1-4223-93b7-ba2e9318f38f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "987bdf09-d1e1-4223-93b7-ba2e9318f38f" (UID: "987bdf09-d1e1-4223-93b7-ba2e9318f38f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:19:47.738316 systemd[1]: var-lib-kubelet-pods-987bdf09\x2dd1e1\x2d4223\x2d93b7\x2dba2e9318f38f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4k64l.mount: Deactivated successfully. Feb 13 07:19:47.738372 systemd[1]: var-lib-kubelet-pods-987bdf09\x2dd1e1\x2d4223\x2d93b7\x2dba2e9318f38f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 07:19:47.738410 systemd[1]: var-lib-kubelet-pods-987bdf09\x2dd1e1\x2d4223\x2d93b7\x2dba2e9318f38f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 07:19:47.833883 kubelet[1848]: I0213 07:19:47.833656 1848 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-4k64l\" (UniqueName: \"kubernetes.io/projected/987bdf09-d1e1-4223-93b7-ba2e9318f38f-kube-api-access-4k64l\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.833883 kubelet[1848]: I0213 07:19:47.833749 1848 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-config-path\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.833883 kubelet[1848]: I0213 07:19:47.833805 1848 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/987bdf09-d1e1-4223-93b7-ba2e9318f38f-hubble-tls\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.833883 kubelet[1848]: I0213 07:19:47.833858 1848 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/987bdf09-d1e1-4223-93b7-ba2e9318f38f-clustermesh-secrets\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.834519 kubelet[1848]: I0213 07:19:47.833920 1848 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-hostproc\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.834519 kubelet[1848]: I0213 07:19:47.833972 1848 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/987bdf09-d1e1-4223-93b7-ba2e9318f38f-cilium-cgroup\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:47.906788 kubelet[1848]: E0213 07:19:47.906709 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:48.499183 kubelet[1848]: I0213 07:19:48.499121 1848 scope.go:115] "RemoveContainer" containerID="b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4" Feb 13 07:19:48.502244 env[1473]: time="2024-02-13T07:19:48.502127624Z" level=info msg="RemoveContainer for \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\"" Feb 13 07:19:48.504666 env[1473]: time="2024-02-13T07:19:48.504633265Z" level=info msg="RemoveContainer for \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\" returns successfully" Feb 13 07:19:48.504805 kubelet[1848]: I0213 07:19:48.504798 1848 scope.go:115] "RemoveContainer" containerID="20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4" Feb 13 07:19:48.505450 systemd[1]: Removed slice kubepods-burstable-pod987bdf09_d1e1_4223_93b7_ba2e9318f38f.slice. Feb 13 07:19:48.505498 systemd[1]: kubepods-burstable-pod987bdf09_d1e1_4223_93b7_ba2e9318f38f.slice: Consumed 5.652s CPU time. 
Feb 13 07:19:48.505543 env[1473]: time="2024-02-13T07:19:48.505451733Z" level=info msg="RemoveContainer for \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\"" Feb 13 07:19:48.524563 env[1473]: time="2024-02-13T07:19:48.524505690Z" level=info msg="RemoveContainer for \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\" returns successfully" Feb 13 07:19:48.524801 kubelet[1848]: I0213 07:19:48.524691 1848 scope.go:115] "RemoveContainer" containerID="ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98" Feb 13 07:19:48.525432 env[1473]: time="2024-02-13T07:19:48.525415606Z" level=info msg="RemoveContainer for \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\"" Feb 13 07:19:48.526979 env[1473]: time="2024-02-13T07:19:48.526951329Z" level=info msg="RemoveContainer for \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\" returns successfully" Feb 13 07:19:48.527272 kubelet[1848]: I0213 07:19:48.527261 1848 scope.go:115] "RemoveContainer" containerID="059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa" Feb 13 07:19:48.527914 env[1473]: time="2024-02-13T07:19:48.527897614Z" level=info msg="RemoveContainer for \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\"" Feb 13 07:19:48.528995 env[1473]: time="2024-02-13T07:19:48.528979889Z" level=info msg="RemoveContainer for \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\" returns successfully" Feb 13 07:19:48.529057 kubelet[1848]: I0213 07:19:48.529048 1848 scope.go:115] "RemoveContainer" containerID="81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30" Feb 13 07:19:48.529576 env[1473]: time="2024-02-13T07:19:48.529561924Z" level=info msg="RemoveContainer for \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\"" Feb 13 07:19:48.530707 env[1473]: time="2024-02-13T07:19:48.530689079Z" level=info msg="RemoveContainer for \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\" returns successfully" Feb 13 07:19:48.530770 kubelet[1848]: I0213 07:19:48.530759 1848 scope.go:115] "RemoveContainer" containerID="b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4" Feb 13 07:19:48.530956 env[1473]: time="2024-02-13T07:19:48.530905618Z" level=error msg="ContainerStatus for \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\": not found" Feb 13 07:19:48.531022 kubelet[1848]: E0213 07:19:48.531013 1848 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\": not found" containerID="b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4" Feb 13 07:19:48.531057 kubelet[1848]: I0213 07:19:48.531036 1848 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4} err="failed to get container status \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\": not found" Feb 13 07:19:48.531057 kubelet[1848]: I0213 07:19:48.531045 1848 scope.go:115] "RemoveContainer" 
containerID="20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4" Feb 13 07:19:48.531164 env[1473]: time="2024-02-13T07:19:48.531131949Z" level=error msg="ContainerStatus for \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\": not found" Feb 13 07:19:48.531221 kubelet[1848]: E0213 07:19:48.531212 1848 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\": not found" containerID="20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4" Feb 13 07:19:48.531260 kubelet[1848]: I0213 07:19:48.531231 1848 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4} err="failed to get container status \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"20a09df4cce50e8e4505e699ed7672fa81eeca3432db55883527e4bb75b5b4e4\": not found" Feb 13 07:19:48.531260 kubelet[1848]: I0213 07:19:48.531238 1848 scope.go:115] "RemoveContainer" containerID="ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98" Feb 13 07:19:48.531365 env[1473]: time="2024-02-13T07:19:48.531328558Z" level=error msg="ContainerStatus for \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\": not found" Feb 13 07:19:48.531421 kubelet[1848]: E0213 07:19:48.531416 1848 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\": not found" containerID="ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98" Feb 13 07:19:48.531450 kubelet[1848]: I0213 07:19:48.531430 1848 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98} err="failed to get container status \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed2f01373a62785cd45dabdc002080de245795d9c1a5207b571c293962d23d98\": not found" Feb 13 07:19:48.531450 kubelet[1848]: I0213 07:19:48.531436 1848 scope.go:115] "RemoveContainer" containerID="059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa" Feb 13 07:19:48.531569 env[1473]: time="2024-02-13T07:19:48.531530401Z" level=error msg="ContainerStatus for \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\": not found" Feb 13 07:19:48.531626 kubelet[1848]: E0213 07:19:48.531619 1848 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\": not found" 
containerID="059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa" Feb 13 07:19:48.531660 kubelet[1848]: I0213 07:19:48.531639 1848 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa} err="failed to get container status \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\": rpc error: code = NotFound desc = an error occurred when try to find container \"059fe73cd098b89102f79a6b48920e0cd06dd2cb9e454da4f64af1617f765dfa\": not found" Feb 13 07:19:48.531660 kubelet[1848]: I0213 07:19:48.531647 1848 scope.go:115] "RemoveContainer" containerID="81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30" Feb 13 07:19:48.531808 env[1473]: time="2024-02-13T07:19:48.531775432Z" level=error msg="ContainerStatus for \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\": not found" Feb 13 07:19:48.531868 kubelet[1848]: E0213 07:19:48.531862 1848 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\": not found" containerID="81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30" Feb 13 07:19:48.531899 kubelet[1848]: I0213 07:19:48.531878 1848 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30} err="failed to get container status \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\": rpc error: code = NotFound desc = an error occurred when try to find container \"81b0573d628d4fc5b07b8d7c01a582d5aaee2bca13705cc1f90cf5eebf52ce30\": not found" Feb 13 07:19:48.736151 sshd[3666]: Received disconnect from 43.156.7.94 port 33222:11: Bye Bye [preauth] Feb 13 07:19:48.736151 sshd[3666]: Disconnected from authenticating user root 43.156.7.94 port 33222 [preauth] Feb 13 07:19:48.738771 systemd[1]: sshd@21-139.178.90.101:22-43.156.7.94:33222.service: Deactivated successfully. 
Feb 13 07:19:48.907642 kubelet[1848]: E0213 07:19:48.907391 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:48.949795 env[1473]: time="2024-02-13T07:19:48.949693239Z" level=info msg="StopContainer for \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\" with timeout 1 (s)" Feb 13 07:19:48.950068 env[1473]: time="2024-02-13T07:19:48.949828915Z" level=error msg="StopContainer for \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\": not found" Feb 13 07:19:48.950295 kubelet[1848]: E0213 07:19:48.950254 1848 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4\": not found" containerID="b20b02d83e8a40b1b1f408cdd48822dd012bc46cf36e81e23cb1ff3e171b6af4" Feb 13 07:19:48.950803 env[1473]: time="2024-02-13T07:19:48.950737200Z" level=info msg="StopPodSandbox for \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\"" Feb 13 07:19:48.951017 env[1473]: time="2024-02-13T07:19:48.950927884Z" level=info msg="TearDown network for sandbox \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" successfully" Feb 13 07:19:48.951157 env[1473]: time="2024-02-13T07:19:48.951021481Z" level=info msg="StopPodSandbox for \"7d3d36e998c6ca3fce68fdd7495c0d231c66dd5e5362262ed0e983c8725d3a33\" returns successfully" Feb 13 07:19:48.951785 kubelet[1848]: I0213 07:19:48.951743 1848 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=987bdf09-d1e1-4223-93b7-ba2e9318f38f path="/var/lib/kubelet/pods/987bdf09-d1e1-4223-93b7-ba2e9318f38f/volumes" Feb 13 07:19:49.079800 kubelet[1848]: I0213 07:19:49.079732 1848 topology_manager.go:210] "Topology Admit Handler" Feb 13 07:19:49.080065 kubelet[1848]: E0213 07:19:49.079829 1848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="987bdf09-d1e1-4223-93b7-ba2e9318f38f" containerName="mount-cgroup" Feb 13 07:19:49.080065 kubelet[1848]: E0213 07:19:49.079860 1848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="987bdf09-d1e1-4223-93b7-ba2e9318f38f" containerName="apply-sysctl-overwrites" Feb 13 07:19:49.080065 kubelet[1848]: E0213 07:19:49.079879 1848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="987bdf09-d1e1-4223-93b7-ba2e9318f38f" containerName="clean-cilium-state" Feb 13 07:19:49.080065 kubelet[1848]: E0213 07:19:49.079898 1848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="987bdf09-d1e1-4223-93b7-ba2e9318f38f" containerName="cilium-agent" Feb 13 07:19:49.080065 kubelet[1848]: E0213 07:19:49.079917 1848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="987bdf09-d1e1-4223-93b7-ba2e9318f38f" containerName="mount-bpf-fs" Feb 13 07:19:49.080065 kubelet[1848]: I0213 07:19:49.079964 1848 memory_manager.go:346] "RemoveStaleState removing state" podUID="987bdf09-d1e1-4223-93b7-ba2e9318f38f" containerName="cilium-agent" Feb 13 07:19:49.086374 kubelet[1848]: I0213 07:19:49.086290 1848 topology_manager.go:210] "Topology Admit Handler" Feb 13 07:19:49.095577 systemd[1]: Created slice kubepods-besteffort-podbd0dd683_1e9a_4a24_a9b5_2c5cebe35526.slice. 
Feb 13 07:19:49.107224 systemd[1]: Created slice kubepods-burstable-pod73087f10_23c2_48f9_a8ad_6eb4d85ba6c7.slice. Feb 13 07:19:49.244132 kubelet[1848]: I0213 07:19:49.244026 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-run\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.244132 kubelet[1848]: I0213 07:19:49.244127 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-cgroup\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.244524 kubelet[1848]: I0213 07:19:49.244256 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-host-proc-sys-kernel\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.244524 kubelet[1848]: I0213 07:19:49.244360 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-xtables-lock\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.244524 kubelet[1848]: I0213 07:19:49.244463 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjl6\" (UniqueName: \"kubernetes.io/projected/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-kube-api-access-5xjl6\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.244930 kubelet[1848]: I0213 07:19:49.244534 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-hostproc\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.244930 kubelet[1848]: I0213 07:19:49.244617 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cni-path\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.244930 kubelet[1848]: I0213 07:19:49.244677 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-etc-cni-netd\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.244930 kubelet[1848]: I0213 07:19:49.244848 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd0dd683-1e9a-4a24-a9b5-2c5cebe35526-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-bpt88\" (UID: \"bd0dd683-1e9a-4a24-a9b5-2c5cebe35526\") " pod="kube-system/cilium-operator-f59cbd8c6-bpt88" Feb 13 07:19:49.245335 
kubelet[1848]: I0213 07:19:49.244953 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-hubble-tls\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.245335 kubelet[1848]: I0213 07:19:49.245089 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrgs\" (UniqueName: \"kubernetes.io/projected/bd0dd683-1e9a-4a24-a9b5-2c5cebe35526-kube-api-access-rmrgs\") pod \"cilium-operator-f59cbd8c6-bpt88\" (UID: \"bd0dd683-1e9a-4a24-a9b5-2c5cebe35526\") " pod="kube-system/cilium-operator-f59cbd8c6-bpt88" Feb 13 07:19:49.245335 kubelet[1848]: I0213 07:19:49.245182 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-bpf-maps\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.245335 kubelet[1848]: I0213 07:19:49.245274 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-lib-modules\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.245767 kubelet[1848]: I0213 07:19:49.245374 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-clustermesh-secrets\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.245767 kubelet[1848]: I0213 07:19:49.245443 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-config-path\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.245767 kubelet[1848]: I0213 07:19:49.245532 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-ipsec-secrets\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.245767 kubelet[1848]: I0213 07:19:49.245621 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-host-proc-sys-net\") pod \"cilium-x6bq7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " pod="kube-system/cilium-x6bq7" Feb 13 07:19:49.402564 env[1473]: time="2024-02-13T07:19:49.402521355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-bpt88,Uid:bd0dd683-1e9a-4a24-a9b5-2c5cebe35526,Namespace:kube-system,Attempt:0,}" Feb 13 07:19:49.410917 env[1473]: time="2024-02-13T07:19:49.410790014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:19:49.410917 env[1473]: time="2024-02-13T07:19:49.410831673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:19:49.410917 env[1473]: time="2024-02-13T07:19:49.410844333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:19:49.411134 env[1473]: time="2024-02-13T07:19:49.411043415Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b87d2c708593a344f6d494ec4905e5bc1606611d93164a682b11ae10594c23f pid=3770 runtime=io.containerd.runc.v2 Feb 13 07:19:49.423039 systemd[1]: Started cri-containerd-7b87d2c708593a344f6d494ec4905e5bc1606611d93164a682b11ae10594c23f.scope. Feb 13 07:19:49.431419 env[1473]: time="2024-02-13T07:19:49.431374853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6bq7,Uid:73087f10-23c2-48f9-a8ad-6eb4d85ba6c7,Namespace:kube-system,Attempt:0,}" Feb 13 07:19:49.441688 env[1473]: time="2024-02-13T07:19:49.441612305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:19:49.441688 env[1473]: time="2024-02-13T07:19:49.441670572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:19:49.441866 env[1473]: time="2024-02-13T07:19:49.441699319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:19:49.441916 env[1473]: time="2024-02-13T07:19:49.441865126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907 pid=3804 runtime=io.containerd.runc.v2 Feb 13 07:19:49.452845 systemd[1]: Started cri-containerd-36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907.scope. 
Feb 13 07:19:49.465804 env[1473]: time="2024-02-13T07:19:49.465776430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-bpt88,Uid:bd0dd683-1e9a-4a24-a9b5-2c5cebe35526,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b87d2c708593a344f6d494ec4905e5bc1606611d93164a682b11ae10594c23f\"" Feb 13 07:19:49.466480 env[1473]: time="2024-02-13T07:19:49.466448040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6bq7,Uid:73087f10-23c2-48f9-a8ad-6eb4d85ba6c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907\"" Feb 13 07:19:49.466675 env[1473]: time="2024-02-13T07:19:49.466631628Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 07:19:49.467634 env[1473]: time="2024-02-13T07:19:49.467588494Z" level=info msg="CreateContainer within sandbox \"36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:19:49.496062 env[1473]: time="2024-02-13T07:19:49.495840712Z" level=info msg="CreateContainer within sandbox \"36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e\"" Feb 13 07:19:49.496851 env[1473]: time="2024-02-13T07:19:49.496729335Z" level=info msg="StartContainer for \"53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e\"" Feb 13 07:19:49.529710 systemd[1]: Started cri-containerd-53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e.scope. Feb 13 07:19:49.545953 systemd[1]: cri-containerd-53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e.scope: Deactivated successfully. 
Feb 13 07:19:49.561425 env[1473]: time="2024-02-13T07:19:49.561315897Z" level=info msg="shim disconnected" id=53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e Feb 13 07:19:49.561425 env[1473]: time="2024-02-13T07:19:49.561415725Z" level=warning msg="cleaning up after shim disconnected" id=53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e namespace=k8s.io Feb 13 07:19:49.562180 env[1473]: time="2024-02-13T07:19:49.561438451Z" level=info msg="cleaning up dead shim" Feb 13 07:19:49.573420 env[1473]: time="2024-02-13T07:19:49.573319740Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:19:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3867 runtime=io.containerd.runc.v2\ntime=\"2024-02-13T07:19:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 13 07:19:49.573953 env[1473]: time="2024-02-13T07:19:49.573725337Z" level=error msg="copy shim log" error="read /proc/self/fd/87: file already closed" Feb 13 07:19:49.574225 env[1473]: time="2024-02-13T07:19:49.574115110Z" level=error msg="Failed to pipe stdout of container \"53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e\"" error="reading from a closed fifo" Feb 13 07:19:49.574225 env[1473]: time="2024-02-13T07:19:49.574136455Z" level=error msg="Failed to pipe stderr of container \"53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e\"" error="reading from a closed fifo" Feb 13 07:19:49.575470 env[1473]: time="2024-02-13T07:19:49.575346179Z" level=error msg="StartContainer for \"53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 13 07:19:49.575740 kubelet[1848]: E0213 07:19:49.575666 1848 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e" Feb 13 07:19:49.575935 kubelet[1848]: E0213 07:19:49.575848 1848 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 13 07:19:49.575935 kubelet[1848]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 13 07:19:49.575935 kubelet[1848]: rm /hostbin/cilium-mount Feb 13 07:19:49.575935 kubelet[1848]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5xjl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x6bq7_kube-system(73087f10-23c2-48f9-a8ad-6eb4d85ba6c7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 13 07:19:49.576367 kubelet[1848]: E0213 07:19:49.575919 1848 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x6bq7" podUID=73087f10-23c2-48f9-a8ad-6eb4d85ba6c7 Feb 13 07:19:49.863833 kubelet[1848]: E0213 07:19:49.863635 1848 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 07:19:49.908599 kubelet[1848]: E0213 07:19:49.908491 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:19:50.515695 env[1473]: time="2024-02-13T07:19:50.515595855Z" level=info msg="StopPodSandbox for \"36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907\"" Feb 13 07:19:50.516056 env[1473]: time="2024-02-13T07:19:50.515748040Z" level=info msg="Container to stop \"53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:19:50.522408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907-shm.mount: Deactivated successfully. Feb 13 07:19:50.524918 systemd[1]: cri-containerd-36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907.scope: Deactivated successfully. Feb 13 07:19:50.535781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907-rootfs.mount: Deactivated successfully. 
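The StartContainer failure above ("write /proc/self/attr/keycreate: invalid argument") is runc applying the SELinux options from the dumped SecurityContext (type spc_t, level s0) on a host whose kernel or policy rejects that label. Below is a small, hedged probe that performs the same write outside Kubernetes; it is not runc or kubelet code, and the full context string is an assumption (only the type and level appear in the log).

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed full SELinux context; the log only shows Type:spc_t, Level:s0.
	const label = "system_u:system_r:spc_t:s0"

	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Println("open:", err) // e.g. missing file when the kernel lacks SELinux support
		return
	}
	defer f.Close()

	if _, err := f.Write([]byte(label)); err != nil {
		// On a host where SELinux is disabled or the policy does not know the
		// label, this is the same "invalid argument" seen in the StartContainer error.
		fmt.Println("write:", err)
		return
	}
	fmt.Println("keycreate label accepted")
}
```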
Feb 13 07:19:50.553114 env[1473]: time="2024-02-13T07:19:50.553066896Z" level=info msg="shim disconnected" id=36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907 Feb 13 07:19:50.553114 env[1473]: time="2024-02-13T07:19:50.553113536Z" level=warning msg="cleaning up after shim disconnected" id=36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907 namespace=k8s.io Feb 13 07:19:50.553294 env[1473]: time="2024-02-13T07:19:50.553124508Z" level=info msg="cleaning up dead shim" Feb 13 07:19:50.557770 env[1473]: time="2024-02-13T07:19:50.557740660Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:19:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3896 runtime=io.containerd.runc.v2\n" Feb 13 07:19:50.557999 env[1473]: time="2024-02-13T07:19:50.557957395Z" level=info msg="TearDown network for sandbox \"36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907\" successfully" Feb 13 07:19:50.557999 env[1473]: time="2024-02-13T07:19:50.557974288Z" level=info msg="StopPodSandbox for \"36f3d39c9f2dcb1e49bfa6d29e14983eefcff1586bc2f667b108b250c892f907\" returns successfully" Feb 13 07:19:50.657043 kubelet[1848]: I0213 07:19:50.656969 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-config-path\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.657505 kubelet[1848]: I0213 07:19:50.657075 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-etc-cni-netd\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.657505 kubelet[1848]: I0213 07:19:50.657134 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-cgroup\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.657505 kubelet[1848]: I0213 07:19:50.657192 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-xtables-lock\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.657505 kubelet[1848]: I0213 07:19:50.657222 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.657505 kubelet[1848]: I0213 07:19:50.657263 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-hubble-tls\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.657505 kubelet[1848]: I0213 07:19:50.657295 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.658715 kubelet[1848]: I0213 07:19:50.657420 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-clustermesh-secrets\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.658715 kubelet[1848]: I0213 07:19:50.657378 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.658715 kubelet[1848]: I0213 07:19:50.657517 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-ipsec-secrets\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.658715 kubelet[1848]: I0213 07:19:50.657621 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-run\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.658715 kubelet[1848]: W0213 07:19:50.657680 1848 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 07:19:50.658715 kubelet[1848]: I0213 07:19:50.657686 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.659814 kubelet[1848]: I0213 07:19:50.657707 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-host-proc-sys-kernel\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.659814 kubelet[1848]: I0213 07:19:50.657769 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.659814 kubelet[1848]: I0213 07:19:50.657824 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cni-path\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.659814 kubelet[1848]: I0213 07:19:50.657886 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-bpf-maps\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.659814 kubelet[1848]: I0213 07:19:50.657942 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-lib-modules\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.659814 kubelet[1848]: I0213 07:19:50.657999 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-host-proc-sys-net\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.660950 kubelet[1848]: I0213 07:19:50.657942 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.660950 kubelet[1848]: I0213 07:19:50.658071 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xjl6\" (UniqueName: \"kubernetes.io/projected/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-kube-api-access-5xjl6\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.660950 kubelet[1848]: I0213 07:19:50.658070 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.660950 kubelet[1848]: I0213 07:19:50.658014 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.660950 kubelet[1848]: I0213 07:19:50.658126 1848 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-hostproc\") pod \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\" (UID: \"73087f10-23c2-48f9-a8ad-6eb4d85ba6c7\") " Feb 13 07:19:50.661876 kubelet[1848]: I0213 07:19:50.658104 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.661876 kubelet[1848]: I0213 07:19:50.658227 1848 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-etc-cni-netd\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.661876 kubelet[1848]: I0213 07:19:50.658215 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:19:50.661876 kubelet[1848]: I0213 07:19:50.658264 1848 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-run\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.661876 kubelet[1848]: I0213 07:19:50.658295 1848 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-cgroup\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.661876 kubelet[1848]: I0213 07:19:50.658324 1848 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-xtables-lock\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.661876 kubelet[1848]: I0213 07:19:50.658353 1848 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-bpf-maps\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.663088 kubelet[1848]: I0213 07:19:50.658381 1848 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-lib-modules\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.663088 kubelet[1848]: I0213 07:19:50.658412 1848 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-host-proc-sys-net\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.663088 kubelet[1848]: I0213 07:19:50.658445 1848 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-host-proc-sys-kernel\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.663088 kubelet[1848]: I0213 07:19:50.658473 1848 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cni-path\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.663341 kubelet[1848]: I0213 07:19:50.663327 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:19:50.663341 kubelet[1848]: I0213 07:19:50.663329 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:19:50.663428 kubelet[1848]: I0213 07:19:50.663389 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:19:50.663495 kubelet[1848]: I0213 07:19:50.663483 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-kube-api-access-5xjl6" (OuterVolumeSpecName: "kube-api-access-5xjl6") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "kube-api-access-5xjl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:19:50.663644 kubelet[1848]: I0213 07:19:50.663632 1848 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" (UID: "73087f10-23c2-48f9-a8ad-6eb4d85ba6c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 07:19:50.664120 systemd[1]: var-lib-kubelet-pods-73087f10\x2d23c2\x2d48f9\x2da8ad\x2d6eb4d85ba6c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xjl6.mount: Deactivated successfully. Feb 13 07:19:50.664176 systemd[1]: var-lib-kubelet-pods-73087f10\x2d23c2\x2d48f9\x2da8ad\x2d6eb4d85ba6c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 07:19:50.664212 systemd[1]: var-lib-kubelet-pods-73087f10\x2d23c2\x2d48f9\x2da8ad\x2d6eb4d85ba6c7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 13 07:19:50.664244 systemd[1]: var-lib-kubelet-pods-73087f10\x2d23c2\x2d48f9\x2da8ad\x2d6eb4d85ba6c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 07:19:50.759586 kubelet[1848]: I0213 07:19:50.759513 1848 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-5xjl6\" (UniqueName: \"kubernetes.io/projected/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-kube-api-access-5xjl6\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.759586 kubelet[1848]: I0213 07:19:50.759601 1848 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-hostproc\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.760023 kubelet[1848]: I0213 07:19:50.759637 1848 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-config-path\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.760023 kubelet[1848]: I0213 07:19:50.759666 1848 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-hubble-tls\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.760023 kubelet[1848]: I0213 07:19:50.759695 1848 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-clustermesh-secrets\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.760023 kubelet[1848]: I0213 07:19:50.759725 1848 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7-cilium-ipsec-secrets\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:19:50.908997 kubelet[1848]: E0213 07:19:50.908903 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 07:19:50.959413 systemd[1]: Removed slice kubepods-burstable-pod73087f10_23c2_48f9_a8ad_6eb4d85ba6c7.slice. Feb 13 07:19:51.513001 env[1473]: time="2024-02-13T07:19:51.512975154Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:51.513651 env[1473]: time="2024-02-13T07:19:51.513640604Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:51.514309 env[1473]: time="2024-02-13T07:19:51.514297791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:19:51.514922 env[1473]: time="2024-02-13T07:19:51.514906750Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 07:19:51.516038 env[1473]: time="2024-02-13T07:19:51.515999242Z" level=info msg="CreateContainer within sandbox \"7b87d2c708593a344f6d494ec4905e5bc1606611d93164a682b11ae10594c23f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 07:19:51.516571 kubelet[1848]: I0213 07:19:51.516541 1848 scope.go:115] "RemoveContainer" containerID="53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e" Feb 13 07:19:51.517242 env[1473]: time="2024-02-13T07:19:51.517226361Z" level=info msg="RemoveContainer for \"53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e\"" Feb 13 07:19:51.518617 env[1473]: time="2024-02-13T07:19:51.518589941Z" level=info msg="RemoveContainer for \"53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e\" returns successfully" Feb 13 07:19:51.521546 env[1473]: time="2024-02-13T07:19:51.521500981Z" level=info msg="CreateContainer within sandbox \"7b87d2c708593a344f6d494ec4905e5bc1606611d93164a682b11ae10594c23f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c41cc78f1a4f0d7e6e71423d957fa854f8a9d125e7b3f3e3cfc31d08e44d3961\"" Feb 13 07:19:51.521859 env[1473]: time="2024-02-13T07:19:51.521800277Z" level=info msg="StartContainer for \"c41cc78f1a4f0d7e6e71423d957fa854f8a9d125e7b3f3e3cfc31d08e44d3961\"" Feb 13 07:19:51.530368 systemd[1]: Started cri-containerd-c41cc78f1a4f0d7e6e71423d957fa854f8a9d125e7b3f3e3cfc31d08e44d3961.scope. 
Feb 13 07:19:51.540221 kubelet[1848]: I0213 07:19:51.540203 1848 topology_manager.go:210] "Topology Admit Handler"
Feb 13 07:19:51.540300 kubelet[1848]: E0213 07:19:51.540232 1848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" containerName="mount-cgroup"
Feb 13 07:19:51.540300 kubelet[1848]: I0213 07:19:51.540248 1848 memory_manager.go:346] "RemoveStaleState removing state" podUID="73087f10-23c2-48f9-a8ad-6eb4d85ba6c7" containerName="mount-cgroup"
Feb 13 07:19:51.542847 env[1473]: time="2024-02-13T07:19:51.542780775Z" level=info msg="StartContainer for \"c41cc78f1a4f0d7e6e71423d957fa854f8a9d125e7b3f3e3cfc31d08e44d3961\" returns successfully"
Feb 13 07:19:51.543082 systemd[1]: Created slice kubepods-burstable-pod59e46992_5c1c_4fa9_9202_cf33abc35aa7.slice.
Feb 13 07:19:51.665092 kubelet[1848]: I0213 07:19:51.664979 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59e46992-5c1c-4fa9-9202-cf33abc35aa7-cilium-config-path\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.665092 kubelet[1848]: I0213 07:19:51.665087 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-cilium-cgroup\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.665521 kubelet[1848]: I0213 07:19:51.665262 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-etc-cni-netd\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.665521 kubelet[1848]: I0213 07:19:51.665403 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59e46992-5c1c-4fa9-9202-cf33abc35aa7-clustermesh-secrets\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.665806 kubelet[1848]: I0213 07:19:51.665599 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-host-proc-sys-kernel\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.665806 kubelet[1848]: I0213 07:19:51.665709 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-hostproc\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.665806 kubelet[1848]: I0213 07:19:51.665790 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-cni-path\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.666129 kubelet[1848]: I0213 07:19:51.665881 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-xtables-lock\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.666129 kubelet[1848]: I0213 07:19:51.666041 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/59e46992-5c1c-4fa9-9202-cf33abc35aa7-cilium-ipsec-secrets\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.666357 kubelet[1848]: I0213 07:19:51.666144 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59e46992-5c1c-4fa9-9202-cf33abc35aa7-hubble-tls\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.666357 kubelet[1848]: I0213 07:19:51.666240 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgcvl\" (UniqueName: \"kubernetes.io/projected/59e46992-5c1c-4fa9-9202-cf33abc35aa7-kube-api-access-bgcvl\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.666357 kubelet[1848]: I0213 07:19:51.666335 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-cilium-run\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.666710 kubelet[1848]: I0213 07:19:51.666424 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-bpf-maps\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.666710 kubelet[1848]: I0213 07:19:51.666500 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-lib-modules\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.666710 kubelet[1848]: I0213 07:19:51.666613 1848 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59e46992-5c1c-4fa9-9202-cf33abc35aa7-host-proc-sys-net\") pod \"cilium-shvqv\" (UID: \"59e46992-5c1c-4fa9-9202-cf33abc35aa7\") " pod="kube-system/cilium-shvqv"
Feb 13 07:19:51.857676 env[1473]: time="2024-02-13T07:19:51.857406112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-shvqv,Uid:59e46992-5c1c-4fa9-9202-cf33abc35aa7,Namespace:kube-system,Attempt:0,}"
Feb 13 07:19:51.873424 env[1473]: time="2024-02-13T07:19:51.873366915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 07:19:51.873424 env[1473]: time="2024-02-13T07:19:51.873385505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 07:19:51.873424 env[1473]: time="2024-02-13T07:19:51.873414168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 07:19:51.873512 env[1473]: time="2024-02-13T07:19:51.873468554Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e pid=3974 runtime=io.containerd.runc.v2
Feb 13 07:19:51.879489 systemd[1]: Started cri-containerd-ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e.scope.
Feb 13 07:19:51.890041 env[1473]: time="2024-02-13T07:19:51.890007319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-shvqv,Uid:59e46992-5c1c-4fa9-9202-cf33abc35aa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\""
Feb 13 07:19:51.891320 env[1473]: time="2024-02-13T07:19:51.891301911Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 07:19:51.896721 env[1473]: time="2024-02-13T07:19:51.896674720Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6fedf4b29c534106d099c80180ad6eab6032f203e9ce9cb8b3654e4ebfda9cae\""
Feb 13 07:19:51.896942 env[1473]: time="2024-02-13T07:19:51.896922824Z" level=info msg="StartContainer for \"6fedf4b29c534106d099c80180ad6eab6032f203e9ce9cb8b3654e4ebfda9cae\""
Feb 13 07:19:51.906646 systemd[1]: Started cri-containerd-6fedf4b29c534106d099c80180ad6eab6032f203e9ce9cb8b3654e4ebfda9cae.scope.
Feb 13 07:19:51.909352 kubelet[1848]: E0213 07:19:51.909333 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:19:51.922963 env[1473]: time="2024-02-13T07:19:51.922925103Z" level=info msg="StartContainer for \"6fedf4b29c534106d099c80180ad6eab6032f203e9ce9cb8b3654e4ebfda9cae\" returns successfully"
Feb 13 07:19:51.930744 systemd[1]: cri-containerd-6fedf4b29c534106d099c80180ad6eab6032f203e9ce9cb8b3654e4ebfda9cae.scope: Deactivated successfully.
Feb 13 07:19:52.117476 env[1473]: time="2024-02-13T07:19:52.117249000Z" level=info msg="shim disconnected" id=6fedf4b29c534106d099c80180ad6eab6032f203e9ce9cb8b3654e4ebfda9cae
Feb 13 07:19:52.117476 env[1473]: time="2024-02-13T07:19:52.117354665Z" level=warning msg="cleaning up after shim disconnected" id=6fedf4b29c534106d099c80180ad6eab6032f203e9ce9cb8b3654e4ebfda9cae namespace=k8s.io
Feb 13 07:19:52.117476 env[1473]: time="2024-02-13T07:19:52.117384439Z" level=info msg="cleaning up dead shim"
Feb 13 07:19:52.126064 env[1473]: time="2024-02-13T07:19:52.125993340Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4055 runtime=io.containerd.runc.v2\n"
Feb 13 07:19:52.531148 env[1473]: time="2024-02-13T07:19:52.531058572Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 07:19:52.532232 kubelet[1848]: I0213 07:19:52.532186 1848 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-bpt88" podStartSLOduration=-9.223372033322674e+09 pod.CreationTimestamp="2024-02-13 07:19:49 +0000 UTC" firstStartedPulling="2024-02-13 07:19:49.466429972 +0000 UTC m=+204.913544321" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:19:52.531318947 +0000 UTC m=+207.978433357" watchObservedRunningTime="2024-02-13 07:19:52.532102011 +0000 UTC m=+207.979216406"
Feb 13 07:19:52.548077 env[1473]: time="2024-02-13T07:19:52.547967522Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c\""
Feb 13 07:19:52.548858 env[1473]: time="2024-02-13T07:19:52.548793457Z" level=info msg="StartContainer for \"c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c\""
Feb 13 07:19:52.587135 systemd[1]: Started cri-containerd-c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c.scope.
Feb 13 07:19:52.622385 env[1473]: time="2024-02-13T07:19:52.622321212Z" level=info msg="StartContainer for \"c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c\" returns successfully"
Feb 13 07:19:52.635264 systemd[1]: cri-containerd-c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c.scope: Deactivated successfully.
Feb 13 07:19:52.663840 env[1473]: time="2024-02-13T07:19:52.663762694Z" level=info msg="shim disconnected" id=c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c
Feb 13 07:19:52.664150 env[1473]: time="2024-02-13T07:19:52.663840503Z" level=warning msg="cleaning up after shim disconnected" id=c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c namespace=k8s.io
Feb 13 07:19:52.664150 env[1473]: time="2024-02-13T07:19:52.663871967Z" level=info msg="cleaning up dead shim"
Feb 13 07:19:52.667711 kubelet[1848]: W0213 07:19:52.667659 1848 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73087f10_23c2_48f9_a8ad_6eb4d85ba6c7.slice/cri-containerd-53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e.scope WatchSource:0}: container "53ed3750e075bd8fee8fc2c15a02559b028d69e92b5de26af3c2df8e094d810e" in namespace "k8s.io": not found
Feb 13 07:19:52.676995 env[1473]: time="2024-02-13T07:19:52.676926898Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4116 runtime=io.containerd.runc.v2\n"
Feb 13 07:19:52.893510 update_engine[1464]: I0213 07:19:52.893281 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 07:19:52.894287 update_engine[1464]: I0213 07:19:52.893847 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 07:19:52.894287 update_engine[1464]: E0213 07:19:52.894107 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 07:19:52.894535 update_engine[1464]: I0213 07:19:52.894353 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 07:19:52.910101 kubelet[1848]: E0213 07:19:52.909987 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:19:52.945432 systemd[1]: Started sshd@23-139.178.90.101:22-46.101.146.252:34500.service.
Feb 13 07:19:52.947227 kubelet[1848]: I0213 07:19:52.947217 1848 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=73087f10-23c2-48f9-a8ad-6eb4d85ba6c7 path="/var/lib/kubelet/pods/73087f10-23c2-48f9-a8ad-6eb4d85ba6c7/volumes"
Feb 13 07:19:53.354569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c-rootfs.mount: Deactivated successfully.
Feb 13 07:19:53.539762 env[1473]: time="2024-02-13T07:19:53.539670334Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 07:19:53.576211 env[1473]: time="2024-02-13T07:19:53.576158627Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0\""
Feb 13 07:19:53.576582 env[1473]: time="2024-02-13T07:19:53.576557797Z" level=info msg="StartContainer for \"1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0\""
Feb 13 07:19:53.585960 systemd[1]: Started cri-containerd-1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0.scope.
Feb 13 07:19:53.597991 env[1473]: time="2024-02-13T07:19:53.597937500Z" level=info msg="StartContainer for \"1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0\" returns successfully"
Feb 13 07:19:53.599156 systemd[1]: cri-containerd-1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0.scope: Deactivated successfully.
Feb 13 07:19:53.610543 env[1473]: time="2024-02-13T07:19:53.610485611Z" level=info msg="shim disconnected" id=1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0
Feb 13 07:19:53.610543 env[1473]: time="2024-02-13T07:19:53.610516917Z" level=warning msg="cleaning up after shim disconnected" id=1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0 namespace=k8s.io
Feb 13 07:19:53.610543 env[1473]: time="2024-02-13T07:19:53.610524674Z" level=info msg="cleaning up dead shim"
Feb 13 07:19:53.614560 env[1473]: time="2024-02-13T07:19:53.614512162Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:19:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4175 runtime=io.containerd.runc.v2\n"
Feb 13 07:19:53.821279 sshd[4129]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.101.146.252 user=root
Feb 13 07:19:53.910846 kubelet[1848]: E0213 07:19:53.910648 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:19:54.357014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0-rootfs.mount: Deactivated successfully.
Feb 13 07:19:54.546363 env[1473]: time="2024-02-13T07:19:54.546264425Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 07:19:54.562096 env[1473]: time="2024-02-13T07:19:54.561981338Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255\""
Feb 13 07:19:54.562968 env[1473]: time="2024-02-13T07:19:54.562880518Z" level=info msg="StartContainer for \"1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255\""
Feb 13 07:19:54.588488 systemd[1]: Started cri-containerd-1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255.scope.
Feb 13 07:19:54.599208 env[1473]: time="2024-02-13T07:19:54.599185253Z" level=info msg="StartContainer for \"1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255\" returns successfully"
Feb 13 07:19:54.599449 systemd[1]: cri-containerd-1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255.scope: Deactivated successfully.
Feb 13 07:19:54.609124 env[1473]: time="2024-02-13T07:19:54.609026326Z" level=info msg="shim disconnected" id=1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255
Feb 13 07:19:54.609124 env[1473]: time="2024-02-13T07:19:54.609054015Z" level=warning msg="cleaning up after shim disconnected" id=1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255 namespace=k8s.io
Feb 13 07:19:54.609124 env[1473]: time="2024-02-13T07:19:54.609059951Z" level=info msg="cleaning up dead shim"
Feb 13 07:19:54.612390 env[1473]: time="2024-02-13T07:19:54.612373644Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:19:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4228 runtime=io.containerd.runc.v2\n"
Feb 13 07:19:54.865772 kubelet[1848]: E0213 07:19:54.865546 1848 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 07:19:54.911726 kubelet[1848]: E0213 07:19:54.911615 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:19:55.356854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255-rootfs.mount: Deactivated successfully.
Feb 13 07:19:55.556043 env[1473]: time="2024-02-13T07:19:55.555947567Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 07:19:55.575478 env[1473]: time="2024-02-13T07:19:55.575358474Z" level=info msg="CreateContainer within sandbox \"ff3bdf0746c15f81323d87aa4d293001f0bb89722195e66c8001b8da19b0fb4e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a4b267e43f449e59c58ca52994a51c71f0f565f961a988051bc3346e21b6c0c7\""
Feb 13 07:19:55.576360 env[1473]: time="2024-02-13T07:19:55.576247347Z" level=info msg="StartContainer for \"a4b267e43f449e59c58ca52994a51c71f0f565f961a988051bc3346e21b6c0c7\""
Feb 13 07:19:55.615893 systemd[1]: Started cri-containerd-a4b267e43f449e59c58ca52994a51c71f0f565f961a988051bc3346e21b6c0c7.scope.
Feb 13 07:19:55.647822 env[1473]: time="2024-02-13T07:19:55.647737644Z" level=info msg="StartContainer for \"a4b267e43f449e59c58ca52994a51c71f0f565f961a988051bc3346e21b6c0c7\" returns successfully"
Feb 13 07:19:55.784250 kubelet[1848]: W0213 07:19:55.784225 1848 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59e46992_5c1c_4fa9_9202_cf33abc35aa7.slice/cri-containerd-6fedf4b29c534106d099c80180ad6eab6032f203e9ce9cb8b3654e4ebfda9cae.scope WatchSource:0}: task 6fedf4b29c534106d099c80180ad6eab6032f203e9ce9cb8b3654e4ebfda9cae not found: not found
Feb 13 07:19:55.834561 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 07:19:55.912446 kubelet[1848]: E0213 07:19:55.912391 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:19:55.932234 sshd[3671]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root
Feb 13 07:19:56.420989 sshd[4129]: Failed password for root from 46.101.146.252 port 34500 ssh2
Feb 13 07:19:56.572075 kubelet[1848]: I0213 07:19:56.572008 1848 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-shvqv" podStartSLOduration=5.571928407 pod.CreationTimestamp="2024-02-13 07:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:19:56.57171488 +0000 UTC m=+212.018829295" watchObservedRunningTime="2024-02-13 07:19:56.571928407 +0000 UTC m=+212.019042811"
Feb 13 07:19:56.912802 kubelet[1848]: E0213 07:19:56.912734 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:19:57.202952 sshd[4129]: Received disconnect from 46.101.146.252 port 34500:11: Bye Bye [preauth]
Feb 13 07:19:57.202952 sshd[4129]: Disconnected from authenticating user root 46.101.146.252 port 34500 [preauth]
Feb 13 07:19:57.203662 systemd[1]: sshd@23-139.178.90.101:22-46.101.146.252:34500.service: Deactivated successfully.
Feb 13 07:19:57.807805 sshd[3671]: Failed password for root from 141.98.11.169 port 34496 ssh2
Feb 13 07:19:57.913453 kubelet[1848]: E0213 07:19:57.913343 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:19:58.718837 systemd-networkd[1318]: lxc_health: Link UP
Feb 13 07:19:58.738606 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 13 07:19:58.738628 systemd-networkd[1318]: lxc_health: Gained carrier
Feb 13 07:19:58.890776 kubelet[1848]: W0213 07:19:58.890751 1848 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59e46992_5c1c_4fa9_9202_cf33abc35aa7.slice/cri-containerd-c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c.scope WatchSource:0}: task c129a64fb5bf0d029616a379780034d3a23787ccf7168883b90b1cfb35aed89c not found: not found
Feb 13 07:19:58.914260 kubelet[1848]: E0213 07:19:58.914208 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:19:59.333334 sshd[3671]: Connection closed by authenticating user root 141.98.11.169 port 34496 [preauth]
Feb 13 07:19:59.333963 systemd[1]: sshd@22-139.178.90.101:22-141.98.11.169:34496.service: Deactivated successfully.
Feb 13 07:19:59.498919 systemd[1]: Started sshd@24-139.178.90.101:22-141.98.11.169:34140.service.
Feb 13 07:19:59.915204 kubelet[1848]: E0213 07:19:59.915153 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:20:00.200746 systemd-networkd[1318]: lxc_health: Gained IPv6LL
Feb 13 07:20:00.332721 sshd[4958]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root
Feb 13 07:20:00.915377 kubelet[1848]: E0213 07:20:00.915334 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:20:01.916135 kubelet[1848]: E0213 07:20:01.916010 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:20:01.997370 kubelet[1848]: W0213 07:20:01.997256 1848 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59e46992_5c1c_4fa9_9202_cf33abc35aa7.slice/cri-containerd-1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0.scope WatchSource:0}: task 1ec3d211f6ab86792520ee46c8e1e6e504e32fad7a31aca747ed24a2da4884d0 not found: not found
Feb 13 07:20:02.560799 sshd[4958]: Failed password for root from 141.98.11.169 port 34140 ssh2
Feb 13 07:20:02.900426 update_engine[1464]: I0213 07:20:02.900190 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 07:20:02.901302 update_engine[1464]: I0213 07:20:02.900702 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 07:20:02.901302 update_engine[1464]: E0213 07:20:02.900909 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 07:20:02.901302 update_engine[1464]: I0213 07:20:02.901086 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 07:20:02.916385 kubelet[1848]: E0213 07:20:02.916265 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:20:03.731832 sshd[4958]: Connection closed by authenticating user root 141.98.11.169 port 34140 [preauth]
Feb 13 07:20:03.734570 systemd[1]: sshd@24-139.178.90.101:22-141.98.11.169:34140.service: Deactivated successfully.
Feb 13 07:20:03.899421 systemd[1]: Started sshd@25-139.178.90.101:22-141.98.11.169:53922.service.
Feb 13 07:20:03.917392 kubelet[1848]: E0213 07:20:03.917349 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:20:04.735242 sshd[5065]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root
Feb 13 07:20:04.763365 kubelet[1848]: E0213 07:20:04.763249 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:20:04.918115 kubelet[1848]: E0213 07:20:04.917998 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 07:20:05.109292 kubelet[1848]: W0213 07:20:05.109058 1848 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59e46992_5c1c_4fa9_9202_cf33abc35aa7.slice/cri-containerd-1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255.scope WatchSource:0}: task 1c3351ca3cf23b42ed30f19d2e3d787c82b457f3a256e4c6bc189dd970bf5255 not found: not found