Feb 9 19:55:05.647063 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:55:05.647078 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:55:05.647084 kernel: Disabled fast string operations
Feb 9 19:55:05.647088 kernel: BIOS-provided physical RAM map:
Feb 9 19:55:05.647091 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Feb 9 19:55:05.647095 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Feb 9 19:55:05.647101 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Feb 9 19:55:05.647105 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Feb 9 19:55:05.647108 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Feb 9 19:55:05.647112 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Feb 9 19:55:05.647116 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Feb 9 19:55:05.647120 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Feb 9 19:55:05.647124 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Feb 9 19:55:05.647127 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 19:55:05.647133 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Feb 9 19:55:05.647138 kernel: NX (Execute Disable) protection: active
Feb 9 19:55:05.647142 kernel: SMBIOS 2.7 present.
Feb 9 19:55:05.647146 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Feb 9 19:55:05.647150 kernel: vmware: hypercall mode: 0x00
Feb 9 19:55:05.647155 kernel: Hypervisor detected: VMware
Feb 9 19:55:05.647160 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Feb 9 19:55:05.647164 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Feb 9 19:55:05.647168 kernel: vmware: using clock offset of 4343315612 ns
Feb 9 19:55:05.647172 kernel: tsc: Detected 3408.000 MHz processor
Feb 9 19:55:05.647177 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:55:05.647182 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:55:05.647186 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Feb 9 19:55:05.647190 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:55:05.647194 kernel: total RAM covered: 3072M
Feb 9 19:55:05.647200 kernel: Found optimal setting for mtrr clean up
Feb 9 19:55:05.647204 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Feb 9 19:55:05.647209 kernel: Using GB pages for direct mapping
Feb 9 19:55:05.647213 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:55:05.647217 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Feb 9 19:55:05.647222 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Feb 9 19:55:05.647226 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Feb 9 19:55:05.647230 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Feb 9 19:55:05.647234 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 9 19:55:05.647238 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 9 19:55:05.647244 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Feb 9 19:55:05.647250 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Feb 9 19:55:05.647255 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Feb 9 19:55:05.647259 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Feb 9 19:55:05.647264 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Feb 9 19:55:05.647270 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Feb 9 19:55:05.647274 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Feb 9 19:55:05.647279 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Feb 9 19:55:05.647284 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 9 19:55:05.647288 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 9 19:55:05.647293 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Feb 9 19:55:05.647297 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Feb 9 19:55:05.647302 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Feb 9 19:55:05.647307 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Feb 9 19:55:05.647312 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Feb 9 19:55:05.647317 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Feb 9 19:55:05.647321 kernel: system APIC only can use physical flat
Feb 9 19:55:05.647326 kernel: Setting APIC routing to physical flat.
Feb 9 19:55:05.647330 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:55:05.647335 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 9 19:55:05.647339 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 9 19:55:05.647344 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 9 19:55:05.647348 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 9 19:55:05.647354 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 9 19:55:05.647358 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 9 19:55:05.647363 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 9 19:55:05.647367 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Feb 9 19:55:05.647372 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Feb 9 19:55:05.647377 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Feb 9 19:55:05.647381 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Feb 9 19:55:05.647386 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Feb 9 19:55:05.647390 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Feb 9 19:55:05.647395 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Feb 9 19:55:05.647400 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Feb 9 19:55:05.647405 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Feb 9 19:55:05.647409 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Feb 9 19:55:05.647414 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Feb 9 19:55:05.647418 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Feb 9 19:55:05.647423 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Feb 9 19:55:05.647427 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Feb 9 19:55:05.647432 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Feb 9 19:55:05.647436 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Feb 9 19:55:05.647441 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Feb 9 19:55:05.647446 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Feb 9 19:55:05.647451 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Feb 9 19:55:05.647455 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Feb 9 19:55:05.647460 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Feb 9 19:55:05.647464 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Feb 9 19:55:05.647469 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Feb 9 19:55:05.647476 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Feb 9 19:55:05.647481 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Feb 9 19:55:05.647486 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Feb 9 19:55:05.647490 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Feb 9 19:55:05.647496 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Feb 9 19:55:05.647500 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Feb 9 19:55:05.647505 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Feb 9 19:55:05.647509 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Feb 9 19:55:05.647514 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Feb 9 19:55:05.647518 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Feb 9 19:55:05.647523 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Feb 9 19:55:05.647527 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Feb 9 19:55:05.647532 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Feb 9 19:55:05.647536 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Feb 9 19:55:05.647542 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Feb 9 19:55:05.647546 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Feb 9 19:55:05.647551 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Feb 9 19:55:05.647555 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Feb 9 19:55:05.647560 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Feb 9 19:55:05.647564 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Feb 9 19:55:05.647572 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Feb 9 19:55:05.647584 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Feb 9 19:55:05.647589 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Feb 9 19:55:05.647599 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Feb 9 19:55:05.647606 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Feb 9 19:55:05.647611 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Feb 9 19:55:05.647615 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Feb 9 19:55:05.647620 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Feb 9 19:55:05.647627 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Feb 9 19:55:05.647633 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Feb 9 19:55:05.647640 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Feb 9 19:55:05.647646 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Feb 9 19:55:05.647651 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Feb 9 19:55:05.647656 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Feb 9 19:55:05.647661 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Feb 9 19:55:05.647666 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Feb 9 19:55:05.647671 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Feb 9 19:55:05.647676 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Feb 9 19:55:05.647681 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Feb 9 19:55:05.647686 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Feb 9 19:55:05.647691 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Feb 9 19:55:05.647695 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Feb 9 19:55:05.647701 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Feb 9 19:55:05.647706 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Feb 9 19:55:05.647711 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Feb 9 19:55:05.647716 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Feb 9 19:55:05.647730 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Feb 9 19:55:05.647736 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Feb 9 19:55:05.647741 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Feb 9 19:55:05.647746 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Feb 9 19:55:05.647751 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Feb 9 19:55:05.647755 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Feb 9 19:55:05.647762 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Feb 9 19:55:05.647767 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Feb 9 19:55:05.647772 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Feb 9 19:55:05.647777 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Feb 9 19:55:05.647781 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Feb 9 19:55:05.647786 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Feb 9 19:55:05.647791 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Feb 9 19:55:05.647796 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Feb 9 19:55:05.647801 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Feb 9 19:55:05.647806 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Feb 9 19:55:05.647812 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Feb 9 19:55:05.647816 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Feb 9 19:55:05.647821 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Feb 9 19:55:05.647826 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Feb 9 19:55:05.647831 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Feb 9 19:55:05.647836 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Feb 9 19:55:05.647841 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Feb 9 19:55:05.647845 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Feb 9 19:55:05.647850 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Feb 9 19:55:05.647855 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Feb 9 19:55:05.647861 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Feb 9 19:55:05.647866 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Feb 9 19:55:05.647871 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Feb 9 19:55:05.647875 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Feb 9 19:55:05.647880 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Feb 9 19:55:05.647885 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Feb 9 19:55:05.647890 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Feb 9 19:55:05.647895 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Feb 9 19:55:05.647900 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Feb 9 19:55:05.647905 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Feb 9 19:55:05.647910 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Feb 9 19:55:05.647915 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Feb 9 19:55:05.647920 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Feb 9 19:55:05.647925 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Feb 9 19:55:05.647930 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Feb 9 19:55:05.647934 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Feb 9 19:55:05.647939 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Feb 9 19:55:05.647944 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Feb 9 19:55:05.647949 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Feb 9 19:55:05.647955 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Feb 9 19:55:05.647960 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Feb 9 19:55:05.647964 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Feb 9 19:55:05.647969 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Feb 9 19:55:05.647974 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Feb 9 19:55:05.647979 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Feb 9 19:55:05.647984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 9 19:55:05.647989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 9 19:55:05.647994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Feb 9 19:55:05.647999 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Feb 9 19:55:05.648005 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Feb 9 19:55:05.648010 kernel: Zone ranges:
Feb 9 19:55:05.648015 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:55:05.648020 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Feb 9 19:55:05.648025 kernel: Normal empty
Feb 9 19:55:05.648030 kernel: Movable zone start for each node
Feb 9 19:55:05.648035 kernel: Early memory node ranges
Feb 9 19:55:05.648040 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Feb 9 19:55:05.648045 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Feb 9 19:55:05.648051 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Feb 9 19:55:05.648056 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Feb 9 19:55:05.648061 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:55:05.648065 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Feb 9 19:55:05.648070 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Feb 9 19:55:05.648075 kernel: ACPI: PM-Timer IO Port: 0x1008
Feb 9 19:55:05.648080 kernel: system APIC only can use physical flat
Feb 9 19:55:05.648085 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Feb 9 19:55:05.648090 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 19:55:05.648095 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 19:55:05.648101 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 19:55:05.648105 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 19:55:05.648110 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 19:55:05.648115 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 19:55:05.648120 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 19:55:05.648125 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 19:55:05.648130 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 19:55:05.648135 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 19:55:05.648140 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 19:55:05.648145 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 19:55:05.648150 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 19:55:05.648155 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 19:55:05.648160 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 19:55:05.648172 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 19:55:05.648178 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Feb 9 19:55:05.648183 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Feb 9 19:55:05.648191 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Feb 9 19:55:05.648195 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Feb 9 19:55:05.648200 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Feb 9 19:55:05.648207 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Feb 9 19:55:05.648214 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Feb 9 19:55:05.648219 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Feb 9 19:55:05.648224 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Feb 9 19:55:05.648229 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Feb 9 19:55:05.648234 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Feb 9 19:55:05.648239 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Feb 9 19:55:05.648243 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Feb 9 19:55:05.648248 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Feb 9 19:55:05.648254 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Feb 9 19:55:05.648259 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Feb 9 19:55:05.648264 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Feb 9 19:55:05.648269 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Feb 9 19:55:05.648274 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Feb 9 19:55:05.648279 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Feb 9 19:55:05.648284 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Feb 9 19:55:05.648288 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Feb 9 19:55:05.648293 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Feb 9 19:55:05.648298 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Feb 9 19:55:05.648304 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Feb 9 19:55:05.648309 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Feb 9 19:55:05.648314 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Feb 9 19:55:05.648319 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Feb 9 19:55:05.648324 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Feb 9 19:55:05.648328 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Feb 9 19:55:05.648333 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Feb 9 19:55:05.648338 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Feb 9 19:55:05.648343 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Feb 9 19:55:05.648349 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Feb 9 19:55:05.648354 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Feb 9 19:55:05.648359 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Feb 9 19:55:05.648364 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Feb 9 19:55:05.648368 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Feb 9 19:55:05.648373 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Feb 9 19:55:05.648378 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Feb 9 19:55:05.648383 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Feb 9 19:55:05.648388 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Feb 9 19:55:05.648393 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Feb 9 19:55:05.648399 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Feb 9 19:55:05.648404 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Feb 9 19:55:05.648409 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Feb 9 19:55:05.648414 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Feb 9 19:55:05.648418 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Feb 9 19:55:05.648423 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Feb 9 19:55:05.648428 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Feb 9 19:55:05.648433 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Feb 9 19:55:05.648438 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Feb 9 19:55:05.648444 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Feb 9 19:55:05.648449 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Feb 9 19:55:05.648454 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Feb 9 19:55:05.648459 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Feb 9 19:55:05.648463 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Feb 9 19:55:05.648468 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Feb 9 19:55:05.648474 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Feb 9 19:55:05.648483 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Feb 9 19:55:05.648490 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Feb 9 19:55:05.648495 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Feb 9 19:55:05.648501 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Feb 9 19:55:05.648506 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Feb 9 19:55:05.648511 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Feb 9 19:55:05.648516 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Feb 9 19:55:05.648521 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Feb 9 19:55:05.648526 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Feb 9 19:55:05.648531 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Feb 9 19:55:05.648535 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Feb 9 19:55:05.648540 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Feb 9 19:55:05.648546 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Feb 9 19:55:05.648551 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Feb 9 19:55:05.648556 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Feb 9 19:55:05.648561 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Feb 9 19:55:05.648566 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Feb 9 19:55:05.648571 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Feb 9 19:55:05.648576 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Feb 9 19:55:05.648581 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Feb 9 19:55:05.648586 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Feb 9 19:55:05.648590 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Feb 9 19:55:05.648596 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Feb 9 19:55:05.648601 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Feb 9 19:55:05.648606 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Feb 9 19:55:05.648611 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Feb 9 19:55:05.648616 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Feb 9 19:55:05.648621 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Feb 9 19:55:05.648626 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Feb 9 19:55:05.648631 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Feb 9 19:55:05.648636 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Feb 9 19:55:05.648642 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Feb 9 19:55:05.648647 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Feb 9 19:55:05.648651 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Feb 9 19:55:05.648656 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Feb 9 19:55:05.648661 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Feb 9 19:55:05.648666 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Feb 9 19:55:05.648671 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Feb 9 19:55:05.648676 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Feb 9 19:55:05.648681 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Feb 9 19:55:05.648686 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Feb 9 19:55:05.648691 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Feb 9 19:55:05.648696 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Feb 9 19:55:05.648701 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Feb 9 19:55:05.648707 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Feb 9 19:55:05.648711 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Feb 9 19:55:05.648716 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Feb 9 19:55:05.650058 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Feb 9 19:55:05.650065 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Feb 9 19:55:05.650070 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Feb 9 19:55:05.650077 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Feb 9 19:55:05.650082 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Feb 9 19:55:05.650087 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:55:05.650092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Feb 9 19:55:05.650097 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:55:05.650102 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Feb 9 19:55:05.650107 kernel: TSC deadline timer available
Feb 9 19:55:05.650113 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Feb 9 19:55:05.650118 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Feb 9 19:55:05.650124 kernel: Booting paravirtualized kernel on VMware hypervisor
Feb 9 19:55:05.650131 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:55:05.650137 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Feb 9 19:55:05.650143 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 19:55:05.650149 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 19:55:05.650155 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Feb 9 19:55:05.650161 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Feb 9 19:55:05.650166 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Feb 9 19:55:05.650172 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Feb 9 19:55:05.650179 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Feb 9 19:55:05.650184 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Feb 9 19:55:05.650189 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Feb 9 19:55:05.650199 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Feb 9 19:55:05.650206 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Feb 9 19:55:05.650212 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Feb 9 19:55:05.650217 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Feb 9 19:55:05.650222 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Feb 9 19:55:05.650227 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Feb 9 19:55:05.650233 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Feb 9 19:55:05.650239 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Feb 9 19:55:05.650244 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Feb 9 19:55:05.650249 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Feb 9 19:55:05.650254 kernel: Policy zone: DMA32
Feb 9 19:55:05.650260 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:55:05.650266 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:55:05.650271 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Feb 9 19:55:05.650278 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Feb 9 19:55:05.650283 kernel: printk: log_buf_len min size: 262144 bytes
Feb 9 19:55:05.650288 kernel: printk: log_buf_len: 1048576 bytes
Feb 9 19:55:05.650294 kernel: printk: early log buf free: 239728(91%)
Feb 9 19:55:05.650299 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:55:05.650305 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:55:05.650310 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:55:05.650315 kernel: Memory: 1942952K/2096628K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 153416K reserved, 0K cma-reserved)
Feb 9 19:55:05.650321 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Feb 9 19:55:05.650327 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:55:05.650333 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:55:05.650339 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:55:05.650345 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:55:05.650350 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Feb 9 19:55:05.650356 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:55:05.650362 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:55:05.650368 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:55:05.650373 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Feb 9 19:55:05.650378 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Feb 9 19:55:05.650384 kernel: random: crng init done
Feb 9 19:55:05.650389 kernel: Console: colour VGA+ 80x25
Feb 9 19:55:05.650394 kernel: printk: console [tty0] enabled
Feb 9 19:55:05.650399 kernel: printk: console [ttyS0] enabled
Feb 9 19:55:05.650405 kernel: ACPI: Core revision 20210730
Feb 9 19:55:05.650411 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Feb 9 19:55:05.650417 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:55:05.650422 kernel: x2apic enabled
Feb 9 19:55:05.650427 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:55:05.650433 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 19:55:05.650438 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Feb 9 19:55:05.650444 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Feb 9 19:55:05.650449 kernel: Disabled fast string operations
Feb 9 19:55:05.650455 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:55:05.650461 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:55:05.650466 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:55:05.650472 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 19:55:05.650477 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 19:55:05.650483 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:55:05.650488 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 19:55:05.650494 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 19:55:05.650499 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 19:55:05.650504 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 19:55:05.650511 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:55:05.650516 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 9 19:55:05.650521 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:55:05.650527 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:55:05.650532 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:55:05.650538 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:55:05.650543 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:55:05.650548 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 9 19:55:05.650554 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:55:05.650560 kernel: pid_max: default: 131072 minimum: 1024
Feb 9 19:55:05.650565 kernel: LSM: Security Framework initializing
Feb 9 19:55:05.650571 kernel: SELinux: Initializing.
Feb 9 19:55:05.650577 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:55:05.650582 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:55:05.650588 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 19:55:05.650593 kernel: Performance Events: Skylake events, core PMU driver.
Feb 9 19:55:05.650599 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Feb 9 19:55:05.650605 kernel: core: CPUID marked event: 'instructions' unavailable
Feb 9 19:55:05.650610 kernel: core: CPUID marked event: 'bus cycles' unavailable
Feb 9 19:55:05.650615 kernel: core: CPUID marked event: 'cache references' unavailable
Feb 9 19:55:05.650620 kernel: core: CPUID marked event: 'cache misses' unavailable
Feb 9 19:55:05.650625 kernel: core: CPUID marked event: 'branch instructions' unavailable
Feb 9 19:55:05.650631 kernel: core: CPUID marked event: 'branch misses' unavailable
Feb 9 19:55:05.650636 kernel: ... version: 1
Feb 9 19:55:05.650641 kernel: ... bit width: 48
Feb 9 19:55:05.650646 kernel: ... generic registers: 4
Feb 9 19:55:05.650653 kernel: ... value mask: 0000ffffffffffff
Feb 9 19:55:05.650658 kernel: ... max period: 000000007fffffff
Feb 9 19:55:05.650663 kernel: ... fixed-purpose events: 0
Feb 9 19:55:05.650669 kernel: ... event mask: 000000000000000f
Feb 9 19:55:05.650674 kernel: signal: max sigframe size: 1776
Feb 9 19:55:05.650679 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:55:05.650685 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:55:05.650690 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:55:05.650696 kernel: x86: Booting SMP configuration:
Feb 9 19:55:05.650702 kernel: .... node #0, CPUs: #1
Feb 9 19:55:05.650707 kernel: Disabled fast string operations
Feb 9 19:55:05.650712 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Feb 9 19:55:05.650734 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 9 19:55:05.650741 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:55:05.650746 kernel: smpboot: Max logical packages: 128
Feb 9 19:55:05.650751 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Feb 9 19:55:05.650757 kernel: devtmpfs: initialized
Feb 9 19:55:05.650762 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:55:05.650767 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Feb 9 19:55:05.650774 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:55:05.650779 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Feb 9 19:55:05.650785 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:55:05.650790 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:55:05.650796 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:55:05.650801 kernel: audit: type=2000 audit(1707508503.057:1): state=initialized audit_enabled=0 res=1
Feb 9 19:55:05.650806 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:55:05.650812 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:55:05.650817 kernel: cpuidle: using governor menu
Feb 9 19:55:05.650823 kernel: Simple Boot Flag at 0x36 set to 0x80
Feb 9 19:55:05.650829 kernel: ACPI: bus type PCI registered
Feb 9 19:55:05.650834 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:55:05.650840 kernel: dca service started, version 1.12.1
Feb 9 19:55:05.650845 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Feb 9 19:55:05.650850 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820
Feb 9 19:55:05.650856 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:55:05.650861 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:55:05.650866 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:55:05.650873 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:55:05.650878 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:55:05.650884 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:55:05.650889 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:55:05.650894 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:55:05.650900 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:55:05.650905 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:55:05.650910 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:55:05.650916 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:55:05.650922 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Feb 9 19:55:05.650928 kernel: ACPI: Interpreter enabled
Feb 9 19:55:05.650933 kernel: ACPI: PM: (supports S0 S1 S5)
Feb 9 19:55:05.650938 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:55:05.650944 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:55:05.650949 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Feb 9 19:55:05.650954 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Feb 9 19:55:05.651029 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:55:05.651078 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Feb 9 19:55:05.651122 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Feb 9 19:55:05.651129 kernel: PCI host bridge to bus 0000:00
Feb 9 19:55:05.651174 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:55:05.651214 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window]
Feb 9 19:55:05.651253 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window]
Feb 9 19:55:05.651291 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window]
Feb 9 19:55:05.651331 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window]
Feb 9 19:55:05.651369 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 9 19:55:05.651408 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:55:05.651446 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Feb 9 19:55:05.651491 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Feb 9 19:55:05.651542 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Feb 9 19:55:05.651592 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Feb 9 19:55:05.651645 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Feb 9 19:55:05.651692 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Feb 9 19:55:05.651746 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Feb 9 19:55:05.651792 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 19:55:05.651836 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 19:55:05.651880 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 19:55:05.651927 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 19:55:05.651976 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Feb 9 19:55:05.652020 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Feb 9 19:55:05.652064 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Feb 9 19:55:05.652112 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Feb 9 19:55:05.652157 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Feb 9 19:55:05.652204 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Feb 9 19:55:05.652252 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Feb 9 19:55:05.652297 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Feb 9 19:55:05.652341 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Feb 9 19:55:05.652384 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Feb 9 19:55:05.652427 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Feb 9 19:55:05.652470 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 19:55:05.652522 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Feb 9 19:55:05.652574 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.652619 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.652668 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.652713 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.664107 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.664160 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.664214 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.664261 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.664311 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.664356 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.664405 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.664449 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.664536 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.664582 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.664630 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.664675 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.664734 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.664783 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.664834 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.664879 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.664926 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.664970 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.665018 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.665064 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.665112 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.665158 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.665208 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.665266 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.665460 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.665518 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.665575 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.665625 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.665677 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.665775 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.665833 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.665883 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.665939 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.665989 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.666041 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.666091 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.666144 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.666206 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.666266 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.666316 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.666368 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.666418 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.666470 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.666540 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.666612 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.666661 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.666714 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.666775 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.666828 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.666877 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.666930 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.666982 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.667035 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.667084 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.667138 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.667189 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.667242 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.667293 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.667345 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Feb 9 19:55:05.667395 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.667448 kernel: pci_bus 0000:01: extended config space not accessible
Feb 9 19:55:05.667502 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 19:55:05.667554 kernel: pci_bus 0000:02: extended config space not accessible
Feb 9 19:55:05.667564 kernel: acpiphp: Slot [32] registered
Feb 9 19:55:05.667570 kernel: acpiphp: Slot [33] registered
Feb 9 19:55:05.667576 kernel: acpiphp: Slot [34] registered
Feb 9 19:55:05.667581 kernel: acpiphp: Slot [35] registered
Feb 9 19:55:05.667587 kernel: acpiphp: Slot [36] registered
Feb 9 19:55:05.667592 kernel: acpiphp: Slot [37] registered
Feb 9 19:55:05.667598 kernel: acpiphp: Slot [38] registered
Feb 9 19:55:05.667603 kernel: acpiphp: Slot [39] registered
Feb 9 19:55:05.667609 kernel: acpiphp: Slot [40] registered
Feb 9 19:55:05.667615 kernel: acpiphp: Slot [41] registered
Feb 9 19:55:05.667621 kernel: acpiphp: Slot [42] registered
Feb 9 19:55:05.667626 kernel: acpiphp: Slot [43] registered
Feb 9 19:55:05.667631 kernel: acpiphp: Slot [44] registered
Feb 9 19:55:05.667637 kernel: acpiphp: Slot [45] registered
Feb 9 19:55:05.667642 kernel: acpiphp: Slot [46] registered
Feb 9 19:55:05.667648 kernel: acpiphp: Slot [47] registered
Feb 9 19:55:05.667653 kernel: acpiphp: Slot [48] registered
Feb 9 19:55:05.667658 kernel: acpiphp: Slot [49] registered
Feb 9 19:55:05.667664 kernel: acpiphp: Slot [50] registered
Feb 9 19:55:05.667670 kernel: acpiphp: Slot [51] registered
Feb 9 19:55:05.667676 kernel: acpiphp: Slot [52] registered
Feb 9 19:55:05.667681 kernel: acpiphp: Slot [53] registered
Feb 9 19:55:05.667686 kernel: acpiphp: Slot [54] registered
Feb 9 19:55:05.667692 kernel: acpiphp: Slot [55] registered
Feb 9 19:55:05.667697 kernel: acpiphp: Slot [56] registered
Feb 9 19:55:05.667702 kernel: acpiphp: Slot [57] registered
Feb 9 19:55:05.667708 kernel: acpiphp: Slot [58] registered
Feb 9 19:55:05.667713 kernel: acpiphp: Slot [59] registered
Feb 9 19:55:05.667727 kernel: acpiphp: Slot [60] registered
Feb 9 19:55:05.667734 kernel: acpiphp: Slot [61] registered
Feb 9 19:55:05.667739 kernel: acpiphp: Slot [62] registered
Feb 9 19:55:05.667744 kernel: acpiphp: Slot [63] registered
Feb 9 19:55:05.667818 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Feb 9 19:55:05.667884 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Feb 9 19:55:05.667932 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Feb 9 19:55:05.667981 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Feb 9 19:55:05.668030 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Feb 9 19:55:05.668082 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode)
Feb 9 19:55:05.668131 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode)
Feb 9 19:55:05.668179 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode)
Feb 9 19:55:05.668228 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode)
Feb 9 19:55:05.668276 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Feb 9 19:55:05.668324 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Feb 9 19:55:05.668372 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Feb 9 19:55:05.668430 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Feb 9 19:55:05.668503 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Feb 9 19:55:05.668557 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Feb 9 19:55:05.668608 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Feb 9 19:55:05.668657 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Feb 9 19:55:05.668707 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Feb 9 19:55:05.670815 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Feb 9 19:55:05.670877 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Feb 9 19:55:05.670929 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Feb 9 19:55:05.670980 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Feb 9 19:55:05.671030 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Feb 9 19:55:05.671079 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Feb 9 19:55:05.671128 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Feb 9 19:55:05.671177 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Feb 9 19:55:05.671225 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Feb 9 19:55:05.671275 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Feb 9 19:55:05.671322 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Feb 9 19:55:05.671371 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Feb 9 19:55:05.671419 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Feb 9 19:55:05.671467 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Feb 9 19:55:05.671524 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Feb 9 19:55:05.671573 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Feb 9 19:55:05.671620 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Feb 9 19:55:05.671668 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Feb 9 19:55:05.671715 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Feb 9 19:55:05.671778 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Feb 9 19:55:05.671827 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Feb 9 19:55:05.671878 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Feb 9 19:55:05.678777 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Feb 9 19:55:05.678834 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Feb 9 19:55:05.678881 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Feb 9 19:55:05.678927 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Feb 9 19:55:05.678980 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Feb 9 19:55:05.679028 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Feb 9 19:55:05.679076 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Feb 9 19:55:05.679121 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Feb 9 19:55:05.679167 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Feb 9 19:55:05.679212 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Feb 9 19:55:05.679258 kernel: pci 0000:0b:00.0: supports D1 D2
Feb 9 19:55:05.679304 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:55:05.679350 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Feb 9 19:55:05.679396 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Feb 9 19:55:05.679442 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Feb 9 19:55:05.679495 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Feb 9 19:55:05.679542 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Feb 9 19:55:05.679587 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Feb 9 19:55:05.679631 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Feb 9 19:55:05.679676 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Feb 9 19:55:05.679734 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Feb 9 19:55:05.679785 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Feb 9 19:55:05.679834 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Feb 9 19:55:05.679879 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Feb 9 19:55:05.679926 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Feb 9 19:55:05.679970 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Feb 9 19:55:05.680015 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Feb 9 19:55:05.680061 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Feb 9 19:55:05.680106 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Feb 9 19:55:05.680150 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Feb 9 19:55:05.680198 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Feb 9 19:55:05.680243 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Feb 9 19:55:05.680286 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Feb 9 19:55:05.680331 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Feb 9 19:55:05.680376 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Feb 9 19:55:05.680419 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Feb 9 19:55:05.680464 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Feb 9 19:55:05.680546 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Feb 9 19:55:05.680593 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Feb 9 19:55:05.680637 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Feb 9 19:55:05.680681 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Feb 9 19:55:05.680735 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Feb 9 19:55:05.680790 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Feb 9 19:55:05.680836 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Feb 9 19:55:05.680881 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Feb 9 19:55:05.680928 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Feb 9 19:55:05.680972 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Feb 9 19:55:05.681017 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Feb 9 19:55:05.681061 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Feb 9 19:55:05.681105 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Feb 9 19:55:05.681149 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Feb 9 19:55:05.681194 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Feb 9 19:55:05.681238 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Feb 9 19:55:05.681285 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Feb 9 19:55:05.681330 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Feb 9 19:55:05.681374 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Feb 9 19:55:05.681417 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Feb 9 19:55:05.681462 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Feb 9 19:55:05.681505 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Feb 9 19:55:05.681550 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Feb 9 19:55:05.681595 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Feb 9 19:55:05.681641 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Feb 9 19:55:05.681685 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Feb 9 19:55:05.681743 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Feb 9 19:55:05.681790 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Feb 9 19:55:05.681834 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Feb 9 19:55:05.681880 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Feb 9 19:55:05.681924 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Feb 9 19:55:05.681968 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Feb 9 19:55:05.682015 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Feb 9 19:55:05.682060 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Feb 9 19:55:05.682104 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Feb 9 19:55:05.682149 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Feb 9 19:55:05.682192 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Feb 9 19:55:05.682238 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Feb 9 19:55:05.682282 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Feb 9 19:55:05.682328 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Feb 9 19:55:05.682373 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Feb 9 19:55:05.682417 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Feb 9 19:55:05.682462 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Feb 9 19:55:05.682512 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Feb 9 19:55:05.682556 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Feb 9 19:55:05.682600 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Feb 9 19:55:05.682645 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Feb 9 19:55:05.682691 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Feb 9 19:55:05.682742 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Feb 9 19:55:05.682788 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Feb 9 19:55:05.682831 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Feb 9 19:55:05.682875 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Feb 9 19:55:05.682918 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Feb 9 19:55:05.682963 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Feb 9 19:55:05.683006 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Feb 9 19:55:05.683013 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Feb 9 19:55:05.683021 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Feb 9 19:55:05.683026 kernel: ACPI: PCI: Interrupt link LNKB disabled
Feb 9 19:55:05.683032 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:55:05.683037 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Feb 9 19:55:05.683043 kernel: iommu: Default domain type: Translated
Feb 9 19:55:05.683049 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:55:05.683093 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Feb 9 19:55:05.683136 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 19:55:05.683182 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Feb 9 19:55:05.683190 kernel: vgaarb: loaded
Feb 9 19:55:05.683196 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:55:05.683201 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:55:05.683207 kernel: PTP clock support registered
Feb 9 19:55:05.683212 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:55:05.683218 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:55:05.683223 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Feb 9 19:55:05.683228 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Feb 9 19:55:05.683236 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Feb 9 19:55:05.683241 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Feb 9 19:55:05.683247 kernel: clocksource: Switched to clocksource tsc-early
Feb 9 19:55:05.683252 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:55:05.683258 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:55:05.683263 kernel: pnp: PnP ACPI init
Feb 9 19:55:05.683311 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Feb 9 19:55:05.683352 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Feb 9 19:55:05.683394 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Feb 9 19:55:05.683436 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Feb 9 19:55:05.683485 kernel: pnp 00:06: [dma 2]
Feb 9 19:55:05.683530 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Feb 9 19:55:05.683571 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Feb 9 19:55:05.683611 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Feb 9 19:55:05.683618 kernel: pnp: PnP ACPI: found 8 devices
Feb 9 19:55:05.683625 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:55:05.683631 kernel: NET: Registered PF_INET protocol family
Feb 9 19:55:05.683637 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:55:05.683643 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 9 19:55:05.683648 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:55:05.683653 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:55:05.683659 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 19:55:05.683664 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 9 19:55:05.683670 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:55:05.683677 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:55:05.683683 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:55:05.683688 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:55:05.683745 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Feb 9 19:55:05.683794 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 9 19:55:05.683840 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 9 19:55:05.683885 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 9 19:55:05.683933 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 9 19:55:05.683978 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Feb 9 19:55:05.684023 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Feb 9 19:55:05.684068 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Feb 9 19:55:05.684113 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Feb 9 19:55:05.684157 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Feb 9 19:55:05.684203 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Feb 9 19:55:05.684248 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Feb 9 19:55:05.684292 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Feb 9 19:55:05.684337 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Feb 9 19:55:05.684382 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Feb 9 19:55:05.684426 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Feb 9 19:55:05.684476 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Feb 9 19:55:05.684556 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Feb 9 19:55:05.684600 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Feb 9 19:55:05.684645 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Feb 9 19:55:05.684689 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Feb 9 19:55:05.684746 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Feb 9 19:55:05.684793 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Feb 9 19:55:05.684837 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Feb 9 19:55:05.684882 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Feb 9 19:55:05.684925 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Feb 9 19:55:05.684970 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Feb 9 19:55:05.685014 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Feb 9 19:55:05.685061 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Feb 9 19:55:05.685105 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Feb 9 19:55:05.685150 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Feb 9
19:55:05.685193 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.685238 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.685282 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.685326 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.685370 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.685416 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.685461 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.685505 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.685549 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.685593 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.685637 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.685680 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.685735 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.685783 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.685830 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.685875 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.685918 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.685962 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.686006 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.686049 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.686093 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.686137 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.686183 kernel: pci 0000:00:17.7: BAR 13: no space for 
[io size 0x1000] Feb 9 19:55:05.686228 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.686272 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.686316 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.686359 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.686404 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.686448 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.686520 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.686581 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.686625 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.686669 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.686712 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.691361 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.691416 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.691464 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.691509 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.691558 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.691602 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.691646 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.691691 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.691752 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.691799 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.691843 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.691887 kernel: pci 
0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.691954 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.692251 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.692306 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.692353 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.692398 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.692443 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.692505 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.692563 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.692608 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.692652 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.692696 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.692751 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.692796 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.692840 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.692884 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.692927 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.692970 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.693015 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.693059 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.693102 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.693149 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.693193 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 9 
19:55:05.693237 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.693282 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.693325 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.693370 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.693414 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.693458 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.693506 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.693551 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.693597 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:05.693641 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:05.693686 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 19:55:05.693742 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Feb 9 19:55:05.693790 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 9 19:55:05.693834 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 9 19:55:05.693878 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 9 19:55:05.693926 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Feb 9 19:55:05.693973 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 9 19:55:05.694018 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 9 19:55:05.694062 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 9 19:55:05.694107 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Feb 9 19:55:05.694152 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 9 19:55:05.694197 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 9 19:55:05.694241 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 9 19:55:05.694286 kernel: 
pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 9 19:55:05.694331 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 9 19:55:05.694377 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 9 19:55:05.694420 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 9 19:55:05.694464 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 9 19:55:05.694520 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 9 19:55:05.694565 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 9 19:55:05.694609 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 9 19:55:05.694655 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 9 19:55:05.694699 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 9 19:55:05.694750 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 9 19:55:05.694794 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 9 19:55:05.694839 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 9 19:55:05.694883 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 9 19:55:05.694927 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 9 19:55:05.694971 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 9 19:55:05.695015 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 9 19:55:05.695058 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 9 19:55:05.695105 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 9 19:55:05.695184 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 9 19:55:05.695268 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Feb 9 19:55:05.695315 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 9 19:55:05.695393 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 9 19:55:05.695443 
kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 9 19:55:05.695488 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Feb 9 19:55:05.695533 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 9 19:55:05.695577 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 9 19:55:05.695624 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 9 19:55:05.695670 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 9 19:55:05.695714 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 9 19:55:05.695775 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 9 19:55:05.695820 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 9 19:55:05.695865 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 9 19:55:05.695909 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 9 19:55:05.696170 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 9 19:55:05.696222 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 9 19:55:05.696271 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 9 19:55:05.696321 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 9 19:55:05.696367 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 9 19:55:05.696413 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 9 19:55:05.696458 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 9 19:55:05.696525 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 9 19:55:05.696584 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 9 19:55:05.696628 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 9 19:55:05.696673 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 9 19:55:05.696717 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 9 19:55:05.696788 
kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 9 19:55:05.696832 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 9 19:55:05.696876 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 9 19:55:05.696920 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 9 19:55:05.696963 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 9 19:55:05.697007 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 9 19:55:05.697260 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 9 19:55:05.697310 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 9 19:55:05.697356 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 9 19:55:05.697407 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 9 19:55:05.697453 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 9 19:55:05.697536 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 9 19:55:05.697581 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 9 19:55:05.697626 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 9 19:55:05.697671 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 9 19:55:05.697716 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 9 19:55:05.698037 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 9 19:55:05.698287 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 9 19:55:05.698338 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 9 19:55:05.698387 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 9 19:55:05.698438 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 9 19:55:05.698484 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 9 19:55:05.698528 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 9 
19:55:05.698573 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 9 19:55:05.698618 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 9 19:55:05.698662 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 9 19:55:05.698706 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 9 19:55:05.698767 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 9 19:55:05.698816 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 9 19:55:05.698862 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 9 19:55:05.698907 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 9 19:55:05.698951 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 9 19:55:05.698995 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 9 19:55:05.699040 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 9 19:55:05.699084 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 9 19:55:05.699129 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 9 19:55:05.699173 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 9 19:55:05.699218 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 9 19:55:05.699265 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 9 19:55:05.699309 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 9 19:55:05.699353 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 9 19:55:05.699397 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 9 19:55:05.699441 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 9 19:55:05.699485 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 9 19:55:05.699529 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 9 19:55:05.699573 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 9 
19:55:05.699617 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 9 19:55:05.699663 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 9 19:55:05.699708 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 9 19:55:05.699982 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 9 19:55:05.700031 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 9 19:55:05.700077 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 9 19:55:05.700379 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 9 19:55:05.700431 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 9 19:55:05.700503 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 9 19:55:05.700828 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Feb 9 19:55:05.700875 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window] Feb 9 19:55:05.700915 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window] Feb 9 19:55:05.700954 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window] Feb 9 19:55:05.700993 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window] Feb 9 19:55:05.701031 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window] Feb 9 19:55:05.701070 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window] Feb 9 19:55:05.701108 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window] Feb 9 19:55:05.701153 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Feb 9 19:55:05.701194 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Feb 9 19:55:05.701234 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 9 19:55:05.701275 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Feb 9 19:55:05.701314 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window] Feb 9 19:55:05.701355 kernel: pci_bus 
0000:02: resource 6 [mem 0x000d0000-0x000d3fff window] Feb 9 19:55:05.701395 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window] Feb 9 19:55:05.701437 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window] Feb 9 19:55:05.701477 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window] Feb 9 19:55:05.701517 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window] Feb 9 19:55:05.701796 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window] Feb 9 19:55:05.701846 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Feb 9 19:55:05.701889 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Feb 9 19:55:05.701952 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Feb 9 19:55:05.702197 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Feb 9 19:55:05.702246 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Feb 9 19:55:05.702288 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Feb 9 19:55:05.702339 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Feb 9 19:55:05.702380 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Feb 9 19:55:05.702421 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Feb 9 19:55:05.702465 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Feb 9 19:55:05.702508 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Feb 9 19:55:05.702552 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Feb 9 19:55:05.702592 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 9 19:55:05.702639 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Feb 9 19:55:05.702687 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Feb 9 19:55:05.702741 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Feb 9 19:55:05.702786 kernel: pci_bus 0000:09: resource 2 [mem 
0xe6400000-0xe64fffff 64bit pref] Feb 9 19:55:05.702831 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Feb 9 19:55:05.702872 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Feb 9 19:55:05.702918 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Feb 9 19:55:05.702960 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Feb 9 19:55:05.703000 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Feb 9 19:55:05.703045 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Feb 9 19:55:05.703088 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Feb 9 19:55:05.703128 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Feb 9 19:55:05.703172 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Feb 9 19:55:05.703212 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Feb 9 19:55:05.703253 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Feb 9 19:55:05.703297 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Feb 9 19:55:05.703340 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 9 19:55:05.703385 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Feb 9 19:55:05.703426 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 9 19:55:05.703470 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Feb 9 19:55:05.703523 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Feb 9 19:55:05.703569 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Feb 9 19:55:05.703614 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Feb 9 19:55:05.703658 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Feb 9 19:55:05.703700 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 9 19:55:05.703756 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Feb 9 
19:55:05.703799 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Feb 9 19:55:05.703840 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 9 19:55:05.703884 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Feb 9 19:55:05.703928 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Feb 9 19:55:05.703969 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Feb 9 19:55:05.704013 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Feb 9 19:55:05.704055 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Feb 9 19:55:05.704096 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Feb 9 19:55:05.704140 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Feb 9 19:55:05.704181 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 9 19:55:05.704228 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Feb 9 19:55:05.704269 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 9 19:55:05.704316 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Feb 9 19:55:05.704357 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Feb 9 19:55:05.704403 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Feb 9 19:55:05.704445 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Feb 9 19:55:05.704532 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Feb 9 19:55:05.704589 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 9 19:55:05.704634 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Feb 9 19:55:05.704676 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Feb 9 19:55:05.704744 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Feb 9 19:55:05.704827 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Feb 9 19:55:05.705095 kernel: pci_bus 0000:1c: 
resource 1 [mem 0xfce00000-0xfcefffff] Feb 9 19:55:05.705147 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Feb 9 19:55:05.705387 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Feb 9 19:55:05.705433 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Feb 9 19:55:05.705481 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Feb 9 19:55:05.705524 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 9 19:55:05.705572 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Feb 9 19:55:05.705614 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Feb 9 19:55:05.705659 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Feb 9 19:55:05.705700 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Feb 9 19:55:05.705755 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Feb 9 19:55:05.705798 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Feb 9 19:55:05.705845 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Feb 9 19:55:05.705888 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 9 19:55:05.705936 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:55:05.705945 kernel: PCI: CLS 32 bytes, default 64 Feb 9 19:55:05.705951 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 19:55:05.705957 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 9 19:55:05.705963 kernel: clocksource: Switched to clocksource tsc Feb 9 19:55:05.705969 kernel: Initialise system trusted keyrings Feb 9 19:55:05.705977 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 9 19:55:05.705983 kernel: Key type asymmetric registered Feb 9 19:55:05.705989 kernel: Asymmetric key parser 'x509' registered Feb 9 19:55:05.705994 kernel: 
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:55:05.706000 kernel: io scheduler mq-deadline registered Feb 9 19:55:05.706006 kernel: io scheduler kyber registered Feb 9 19:55:05.706012 kernel: io scheduler bfq registered Feb 9 19:55:05.706057 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Feb 9 19:55:05.706103 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.706151 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Feb 9 19:55:05.706197 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.706243 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Feb 9 19:55:05.706288 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.706333 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Feb 9 19:55:05.706378 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.706425 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Feb 9 19:55:05.706469 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.706555 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Feb 9 19:55:05.706601 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.706648 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Feb 9 19:55:05.706692 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- 
LLActRep+ Feb 9 19:55:05.707013 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Feb 9 19:55:05.707066 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.707113 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Feb 9 19:55:05.707159 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.707204 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Feb 9 19:55:05.707248 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.707296 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Feb 9 19:55:05.707341 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.707385 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Feb 9 19:55:05.707429 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.707478 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Feb 9 19:55:05.707632 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.707685 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Feb 9 19:55:05.707743 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.708054 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Feb 9 19:55:05.708105 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ 
IbPresDis- LLActRep+ Feb 9 19:55:05.708151 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Feb 9 19:55:05.708200 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.708245 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Feb 9 19:55:05.708289 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.708333 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Feb 9 19:55:05.708378 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.708423 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Feb 9 19:55:05.708467 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.708514 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Feb 9 19:55:05.708559 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.708604 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Feb 9 19:55:05.708648 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.708692 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Feb 9 19:55:05.708774 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.708821 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Feb 9 19:55:05.708866 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- 
NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.708911 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Feb 9 19:55:05.709093 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.709337 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Feb 9 19:55:05.709391 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.709706 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Feb 9 19:55:05.709821 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.709870 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Feb 9 19:55:05.710017 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.710070 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Feb 9 19:55:05.710119 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.710438 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Feb 9 19:55:05.710492 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.710538 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Feb 9 19:55:05.710582 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.710990 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Feb 9 19:55:05.711042 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.711088 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Feb 9 19:55:05.711136 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:05.711145 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:55:05.711151 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:55:05.711159 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:55:05.711165 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Feb 9 19:55:05.711171 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:55:05.711177 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:55:05.711222 kernel: rtc_cmos 00:01: registered as rtc0 Feb 9 19:55:05.711527 kernel: rtc_cmos 00:01: setting system clock to 2024-02-09T19:55:05 UTC (1707508505) Feb 9 19:55:05.711575 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Feb 9 19:55:05.711584 kernel: fail to initialize ptp_kvm Feb 9 19:55:05.711592 kernel: intel_pstate: CPU model not supported Feb 9 19:55:05.711598 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:55:05.711604 kernel: Segment Routing with IPv6 Feb 9 19:55:05.711610 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:55:05.711616 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:55:05.711622 kernel: Key type dns_resolver registered Feb 9 19:55:05.711627 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:55:05.711633 kernel: IPI shorthand broadcast: enabled Feb 9 19:55:05.711639 kernel: sched_clock: Marking stable (834178379, 219674546)->(1117424324, -63571399) Feb 9 19:55:05.711647 kernel: registered taskstats version 1 Feb 9 19:55:05.711652 kernel: Loading compiled-in X.509 certificates Feb 9 19:55:05.711658 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:55:05.711664 kernel: Key type .fscrypt registered Feb 9 19:55:05.711670 kernel: Key type fscrypt-provisioning registered Feb 9 19:55:05.711676 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 19:55:05.711681 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:55:05.711687 kernel: ima: No architecture policies found Feb 9 19:55:05.711693 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:55:05.711700 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:55:05.711706 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:55:05.711712 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:55:05.711830 kernel: Run /init as init process Feb 9 19:55:05.711842 kernel: with arguments: Feb 9 19:55:05.711849 kernel: /init Feb 9 19:55:05.711855 kernel: with environment: Feb 9 19:55:05.711861 kernel: HOME=/ Feb 9 19:55:05.711866 kernel: TERM=linux Feb 9 19:55:05.711872 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:55:05.711881 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:55:05.711889 systemd[1]: Detected virtualization vmware. Feb 9 19:55:05.711895 systemd[1]: Detected architecture x86-64. Feb 9 19:55:05.711901 systemd[1]: Running in initrd. Feb 9 19:55:05.711907 systemd[1]: No hostname configured, using default hostname. Feb 9 19:55:05.711913 systemd[1]: Hostname set to <localhost>. Feb 9 19:55:05.711919 systemd[1]: Initializing machine ID from random generator. Feb 9 19:55:05.711927 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:55:05.711933 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:55:05.711940 systemd[1]: Reached target cryptsetup.target. Feb 9 19:55:05.711946 systemd[1]: Reached target paths.target. Feb 9 19:55:05.711952 systemd[1]: Reached target slices.target. Feb 9 19:55:05.711958 systemd[1]: Reached target swap.target. Feb 9 19:55:05.712259 systemd[1]: Reached target timers.target. Feb 9 19:55:05.712270 systemd[1]: Listening on iscsid.socket. Feb 9 19:55:05.712279 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:55:05.712285 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:55:05.712291 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:55:05.712297 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:55:05.712303 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:55:05.712310 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:55:05.712316 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:55:05.712322 systemd[1]: Reached target sockets.target. Feb 9 19:55:05.712330 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:55:05.712336 systemd[1]: Finished network-cleanup.service. Feb 9 19:55:05.712342 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:55:05.712348 systemd[1]: Starting systemd-journald.service... Feb 9 19:55:05.712354 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:55:05.712360 systemd[1]: Starting systemd-resolved.service... Feb 9 19:55:05.712366 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:55:05.712372 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:55:05.712379 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:55:05.712386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:55:05.712392 systemd[1]: Finished systemd-vconsole-setup.service. 
Feb 9 19:55:05.712398 kernel: audit: type=1130 audit(1707508505.651:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.712404 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:55:05.712411 kernel: audit: type=1130 audit(1707508505.655:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.712416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:55:05.712423 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:55:05.712429 kernel: Bridge firewalling registered Feb 9 19:55:05.712436 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:55:05.712442 kernel: audit: type=1130 audit(1707508505.672:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.712586 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:55:05.712594 systemd[1]: Started systemd-resolved.service. Feb 9 19:55:05.712601 kernel: audit: type=1130 audit(1707508505.686:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.712607 systemd[1]: Reached target nss-lookup.target. Feb 9 19:55:05.712616 kernel: SCSI subsystem initialized Feb 9 19:55:05.712622 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 9 19:55:05.717743 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:55:05.717752 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:55:05.717759 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:55:05.717766 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:55:05.717773 kernel: audit: type=1130 audit(1707508505.712:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.717784 systemd-journald[217]: Journal started Feb 9 19:55:05.717816 systemd-journald[217]: Runtime Journal (/run/log/journal/5404bd9575d64d0b863075ef9bc3eb35) is 4.8M, max 38.8M, 34.0M free. Feb 9 19:55:05.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.720820 systemd[1]: Started systemd-journald.service. Feb 9 19:55:05.720832 kernel: audit: type=1130 audit(1707508505.716:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:55:05.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.646867 systemd-modules-load[218]: Inserted module 'overlay' Feb 9 19:55:05.673331 systemd-modules-load[218]: Inserted module 'br_netfilter' Feb 9 19:55:05.677208 systemd-resolved[219]: Positive Trust Anchors: Feb 9 19:55:05.677212 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:55:05.677231 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:55:05.724343 kernel: audit: type=1130 audit(1707508505.719:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.686910 systemd-resolved[219]: Defaulting to hostname 'linux'. Feb 9 19:55:05.712666 systemd-modules-load[218]: Inserted module 'dm_multipath' Feb 9 19:55:05.721562 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:55:05.725253 dracut-cmdline[232]: dracut-dracut-053 Feb 9 19:55:05.725253 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 19:55:05.725253 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:55:05.737729 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:55:05.744729 kernel: iscsi: registered transport (tcp) Feb 9 19:55:05.758084 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:55:05.758108 kernel: QLogic iSCSI HBA Driver Feb 9 19:55:05.773526 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:55:05.776734 kernel: audit: type=1130 audit(1707508505.771:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:05.774114 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 19:55:05.809761 kernel: raid6: avx2x4 gen() 50263 MB/s Feb 9 19:55:05.826758 kernel: raid6: avx2x4 xor() 22422 MB/s Feb 9 19:55:05.843764 kernel: raid6: avx2x2 gen() 55664 MB/s Feb 9 19:55:05.860762 kernel: raid6: avx2x2 xor() 32819 MB/s Feb 9 19:55:05.877761 kernel: raid6: avx2x1 gen() 46316 MB/s Feb 9 19:55:05.894742 kernel: raid6: avx2x1 xor() 28311 MB/s Feb 9 19:55:05.911739 kernel: raid6: sse2x4 gen() 21011 MB/s Feb 9 19:55:05.928736 kernel: raid6: sse2x4 xor() 11731 MB/s Feb 9 19:55:05.945737 kernel: raid6: sse2x2 gen() 21486 MB/s Feb 9 19:55:05.962768 kernel: raid6: sse2x2 xor() 13365 MB/s Feb 9 19:55:05.979738 kernel: raid6: sse2x1 gen() 17753 MB/s Feb 9 19:55:05.996927 kernel: raid6: sse2x1 xor() 8972 MB/s Feb 9 19:55:05.996949 kernel: raid6: using algorithm avx2x2 gen() 55664 MB/s Feb 9 19:55:05.996960 kernel: raid6: .... xor() 32819 MB/s, rmw enabled Feb 9 19:55:05.998090 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:55:06.006734 kernel: xor: automatically using best checksumming function avx Feb 9 19:55:06.064737 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:55:06.068747 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:55:06.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:06.069387 systemd[1]: Starting systemd-udevd.service... Feb 9 19:55:06.067000 audit: BPF prog-id=7 op=LOAD Feb 9 19:55:06.067000 audit: BPF prog-id=8 op=LOAD Feb 9 19:55:06.075734 kernel: audit: type=1130 audit(1707508506.067:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:06.079368 systemd-udevd[416]: Using default interface naming scheme 'v252'. Feb 9 19:55:06.081919 systemd[1]: Started systemd-udevd.service. 
Feb 9 19:55:06.082359 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:55:06.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:06.090006 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Feb 9 19:55:06.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:06.105156 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:55:06.105648 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:55:06.163733 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:55:06.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:55:06.213734 kernel: VMware PVSCSI driver - version 1.0.7.0-k Feb 9 19:55:06.217690 kernel: vmw_pvscsi: using 64bit dma Feb 9 19:55:06.217711 kernel: vmw_pvscsi: max_id: 16 Feb 9 19:55:06.217727 kernel: vmw_pvscsi: setting ring_pages to 8 Feb 9 19:55:06.234510 kernel: vmw_pvscsi: enabling reqCallThreshold Feb 9 19:55:06.234531 kernel: vmw_pvscsi: driver-based request coalescing enabled Feb 9 19:55:06.234539 kernel: vmw_pvscsi: using MSI-X Feb 9 19:55:06.234547 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Feb 9 19:55:06.235344 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Feb 9 19:55:06.236683 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Feb 9 19:55:06.238208 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Feb 9 19:55:06.238223 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:55:06.240742 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Feb 9 19:55:06.249809 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:55:06.250730 kernel: AES CTR mode by8 optimization enabled Feb 9 19:55:06.253733 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Feb 9 19:55:06.256734 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Feb 9 19:55:06.260251 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Feb 9 19:55:06.260361 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 19:55:06.260426 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Feb 9 19:55:06.260488 kernel: sd 0:0:0:0: [sda] Cache data unavailable Feb 9 19:55:06.260543 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Feb 9 19:55:06.266736 kernel: libata version 3.00 loaded. 
Feb 9 19:55:06.271285 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:55:06.271729 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 19:55:06.276735 kernel: ata_piix 0000:00:07.1: version 2.13 Feb 9 19:55:06.278104 kernel: scsi host1: ata_piix Feb 9 19:55:06.278185 kernel: scsi host2: ata_piix Feb 9 19:55:06.279033 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Feb 9 19:55:06.279051 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Feb 9 19:55:06.296604 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:55:06.297729 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (464) Feb 9 19:55:06.304323 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:55:06.308971 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:55:06.310556 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:55:06.310682 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:55:06.311314 systemd[1]: Starting disk-uuid.service... Feb 9 19:55:06.404749 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:55:06.409738 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:55:06.451736 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Feb 9 19:55:06.456737 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Feb 9 19:55:06.482744 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Feb 9 19:55:06.482878 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:55:06.499738 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 9 19:55:07.412739 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:55:07.412799 disk-uuid[540]: The operation has completed successfully. Feb 9 19:55:07.443441 systemd[1]: disk-uuid.service: Deactivated successfully. 
Feb 9 19:55:07.443739 systemd[1]: Finished disk-uuid.service. Feb 9 19:55:07.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.444470 systemd[1]: Starting verity-setup.service... Feb 9 19:55:07.454734 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:55:07.494187 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:55:07.495403 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:55:07.495812 systemd[1]: Finished verity-setup.service. Feb 9 19:55:07.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.555628 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:55:07.555785 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:55:07.556384 systemd[1]: Starting afterburn-network-kargs.service... Feb 9 19:55:07.557074 systemd[1]: Starting ignition-setup.service... Feb 9 19:55:07.573870 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:55:07.573905 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:55:07.573913 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:55:07.578731 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:55:07.584846 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:55:07.592695 systemd[1]: Finished ignition-setup.service. 
Feb 9 19:55:07.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.593319 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:55:07.639627 systemd[1]: Finished afterburn-network-kargs.service. Feb 9 19:55:07.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.640235 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:55:07.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.689000 audit: BPF prog-id=9 op=LOAD Feb 9 19:55:07.690613 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:55:07.691472 systemd[1]: Starting systemd-networkd.service... Feb 9 19:55:07.709767 systemd-networkd[735]: lo: Link UP Feb 9 19:55:07.709774 systemd-networkd[735]: lo: Gained carrier Feb 9 19:55:07.710208 systemd-networkd[735]: Enumeration completed Feb 9 19:55:07.715017 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 9 19:55:07.715139 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 9 19:55:07.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.710366 systemd[1]: Started systemd-networkd.service. 
Feb 9 19:55:07.710525 systemd[1]: Reached target network.target. Feb 9 19:55:07.710790 systemd-networkd[735]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Feb 9 19:55:07.711041 systemd[1]: Starting iscsiuio.service... Feb 9 19:55:07.714250 systemd[1]: Started iscsiuio.service. Feb 9 19:55:07.714920 systemd[1]: Starting iscsid.service... Feb 9 19:55:07.716792 systemd-networkd[735]: ens192: Link UP Feb 9 19:55:07.717092 iscsid[741]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:55:07.717092 iscsid[741]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:55:07.717092 iscsid[741]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:55:07.717092 iscsid[741]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:55:07.717092 iscsid[741]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:55:07.717092 iscsid[741]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:55:07.716796 systemd-networkd[735]: ens192: Gained carrier Feb 9 19:55:07.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.718398 systemd[1]: Started iscsid.service. Feb 9 19:55:07.718951 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:55:07.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.727067 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:55:07.726586 ignition[607]: Ignition 2.14.0 Feb 9 19:55:07.727239 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:55:07.726594 ignition[607]: Stage: fetch-offline Feb 9 19:55:07.727786 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:55:07.726624 ignition[607]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:55:07.728077 systemd[1]: Reached target remote-fs.target. Feb 9 19:55:07.726640 ignition[607]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 19:55:07.730416 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:55:07.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.736541 systemd[1]: Finished dracut-pre-mount.service. 
Feb 9 19:55:07.736955 ignition[607]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 19:55:07.737943 ignition[607]: parsed url from cmdline: "" Feb 9 19:55:07.737949 ignition[607]: no config URL provided Feb 9 19:55:07.737953 ignition[607]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:55:07.737959 ignition[607]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:55:07.738323 ignition[607]: config successfully fetched Feb 9 19:55:07.738342 ignition[607]: parsing config with SHA512: 773883d6fcba1b7a775b5b9825bb26ee4443ed1d7a14fd2d40c2bf240db7a2b42a70a190c144fce260b5f91c14087608907c996643705fd58b9f3ee500da1b32 Feb 9 19:55:07.751780 unknown[607]: fetched base config from "system" Feb 9 19:55:07.751788 unknown[607]: fetched user config from "vmware" Feb 9 19:55:07.752124 ignition[607]: fetch-offline: fetch-offline passed Feb 9 19:55:07.752167 ignition[607]: Ignition finished successfully Feb 9 19:55:07.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:07.752913 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:55:07.753059 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 19:55:07.753506 systemd[1]: Starting ignition-kargs.service... 
Feb 9 19:55:07.759037 ignition[755]: Ignition 2.14.0
Feb 9 19:55:07.759299 ignition[755]: Stage: kargs
Feb 9 19:55:07.759467 ignition[755]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:55:07.759617 ignition[755]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 9 19:55:07.760915 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 9 19:55:07.762511 ignition[755]: kargs: kargs passed
Feb 9 19:55:07.762655 ignition[755]: Ignition finished successfully
Feb 9 19:55:07.763554 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:55:07.764150 systemd[1]: Starting ignition-disks.service...
Feb 9 19:55:07.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:07.768920 ignition[761]: Ignition 2.14.0
Feb 9 19:55:07.769167 ignition[761]: Stage: disks
Feb 9 19:55:07.769330 ignition[761]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:55:07.769479 ignition[761]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 9 19:55:07.770738 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 9 19:55:07.772257 ignition[761]: disks: disks passed
Feb 9 19:55:07.772296 ignition[761]: Ignition finished successfully
Feb 9 19:55:07.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:07.772932 systemd[1]: Finished ignition-disks.service.
Feb 9 19:55:07.773078 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:55:07.773167 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:55:07.773248 systemd[1]: Reached target local-fs.target.
Feb 9 19:55:07.773326 systemd[1]: Reached target sysinit.target.
Feb 9 19:55:07.773402 systemd[1]: Reached target basic.target.
Feb 9 19:55:07.773936 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:55:07.784017 systemd-fsck[769]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks
Feb 9 19:55:07.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:07.785306 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:55:07.785813 systemd[1]: Mounting sysroot.mount...
Feb 9 19:55:07.800384 systemd[1]: Mounted sysroot.mount.
Feb 9 19:55:07.800668 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:55:07.800816 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:55:07.801652 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:55:07.802182 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 19:55:07.802411 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:55:07.802653 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:55:07.803647 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:55:07.804412 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:55:07.807254 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:55:07.810613 initrd-setup-root[787]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:55:07.812906 initrd-setup-root[795]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:55:07.815180 initrd-setup-root[803]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:55:07.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:07.843391 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:55:07.843960 systemd[1]: Starting ignition-mount.service...
Feb 9 19:55:07.844391 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:55:07.848223 bash[820]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 19:55:07.854253 ignition[821]: INFO : Ignition 2.14.0
Feb 9 19:55:07.854554 ignition[821]: INFO : Stage: mount
Feb 9 19:55:07.855679 ignition[821]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:55:07.855872 ignition[821]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 9 19:55:07.857495 ignition[821]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 9 19:55:07.859155 ignition[821]: INFO : mount: mount passed
Feb 9 19:55:07.859303 ignition[821]: INFO : Ignition finished successfully
Feb 9 19:55:07.859869 systemd[1]: Finished ignition-mount.service.
Feb 9 19:55:07.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:07.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:07.865985 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:55:08.509899 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:55:08.518747 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (830)
Feb 9 19:55:08.521172 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:55:08.521191 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:55:08.521202 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:55:08.525735 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 9 19:55:08.527672 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:55:08.528425 systemd[1]: Starting ignition-files.service...
Feb 9 19:55:08.540270 ignition[850]: INFO : Ignition 2.14.0
Feb 9 19:55:08.540270 ignition[850]: INFO : Stage: files
Feb 9 19:55:08.540689 ignition[850]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:55:08.540689 ignition[850]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 9 19:55:08.542114 ignition[850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 9 19:55:08.544639 ignition[850]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:55:08.545103 ignition[850]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:55:08.545103 ignition[850]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:55:08.549349 ignition[850]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:55:08.549648 ignition[850]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:55:08.550471 unknown[850]: wrote ssh authorized keys file for user: core
Feb 9 19:55:08.550739 ignition[850]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:55:08.551209 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:55:08.551454 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 19:55:09.598913 systemd-networkd[735]: ens192: Gained IPv6LL
Feb 9 19:55:14.050663 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:55:14.161672 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 19:55:14.162024 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:55:14.162267 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:55:14.162481 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:55:14.543087 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:55:14.608624 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 19:55:14.608989 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:55:14.609415 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:55:14.609628 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:55:14.674128 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:55:14.851880 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:55:14.852188 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:55:14.852188 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:55:14.852188 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:55:14.906874 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:55:15.412566 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 19:55:15.412992 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:55:15.412992 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:55:15.412992 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:55:15.412992 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:55:15.412992 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:55:15.420215 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:55:15.420375 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:55:15.424599 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Feb 9 19:55:15.424775 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:55:15.448384 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1708286701"
Feb 9 19:55:15.448384 ignition[850]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1708286701": device or resource busy
Feb 9 19:55:15.448384 ignition[850]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1708286701", trying btrfs: device or resource busy
Feb 9 19:55:15.448384 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1708286701"
Feb 9 19:55:15.450337 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (855)
Feb 9 19:55:15.450351 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1708286701"
Feb 9 19:55:15.454580 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1708286701"
Feb 9 19:55:15.454840 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1708286701"
Feb 9 19:55:15.455036 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Feb 9 19:55:15.455525 systemd[1]: mnt-oem1708286701.mount: Deactivated successfully.
Feb 9 19:55:15.459295 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 9 19:55:15.459580 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 9 19:55:15.459789 ignition[850]: INFO : files: op(f): [started] processing unit "vmtoolsd.service"
Feb 9 19:55:15.459930 ignition[850]: INFO : files: op(f): [finished] processing unit "vmtoolsd.service"
Feb 9 19:55:15.460069 ignition[850]: INFO : files: op(10): [started] processing unit "prepare-critools.service"
Feb 9 19:55:15.460231 ignition[850]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:55:15.460478 ignition[850]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:55:15.460681 ignition[850]: INFO : files: op(10): [finished] processing unit "prepare-critools.service"
Feb 9 19:55:15.460834 ignition[850]: INFO : files: op(12): [started] processing unit "coreos-metadata.service"
Feb 9 19:55:15.460991 ignition[850]: INFO : files: op(12): op(13): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 19:55:15.461225 ignition[850]: INFO : files: op(12): op(13): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 19:55:15.461416 ignition[850]: INFO : files: op(12): [finished] processing unit "coreos-metadata.service"
Feb 9 19:55:15.461559 ignition[850]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:55:15.461717 ignition[850]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:55:15.461969 ignition[850]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:55:15.462166 ignition[850]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:55:15.462313 ignition[850]: INFO : files: op(16): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:55:15.462512 ignition[850]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:55:15.462665 ignition[850]: INFO : files: op(17): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 19:55:15.462829 ignition[850]: INFO : files: op(17): op(18): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 19:55:16.060034 ignition[850]: INFO : files: op(17): op(18): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 19:55:16.060315 ignition[850]: INFO : files: op(17): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 19:55:16.060498 ignition[850]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:55:16.060697 ignition[850]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:55:16.060901 ignition[850]: INFO : files: op(1a): [started] setting preset to enabled for "vmtoolsd.service"
Feb 9 19:55:16.061091 ignition[850]: INFO : files: op(1a): [finished] setting preset to enabled for "vmtoolsd.service"
Feb 9 19:55:16.061348 ignition[850]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:55:16.061618 ignition[850]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:55:16.061824 ignition[850]: INFO : files: files passed
Feb 9 19:55:16.061964 ignition[850]: INFO : Ignition finished successfully
Feb 9 19:55:16.063080 systemd[1]: Finished ignition-files.service.
Feb 9 19:55:16.065984 kernel: kauditd_printk_skb: 24 callbacks suppressed
Feb 9 19:55:16.066012 kernel: audit: type=1130 audit(1707508516.061:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.064230 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:55:16.066619 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:55:16.067080 systemd[1]: Starting ignition-quench.service...
Feb 9 19:55:16.077782 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:55:16.077836 systemd[1]: Finished ignition-quench.service.
Feb 9 19:55:16.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.082935 kernel: audit: type=1130 audit(1707508516.076:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.082962 kernel: audit: type=1131 audit(1707508516.076:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.083760 initrd-setup-root-after-ignition[876]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:55:16.084326 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:55:16.086999 kernel: audit: type=1130 audit(1707508516.082:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.084489 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:55:16.087459 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:55:16.095228 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:55:16.095423 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:55:16.095692 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:55:16.095910 systemd[1]: Reached target initrd.target.
Feb 9 19:55:16.096133 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:55:16.096689 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:55:16.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.101686 kernel: audit: type=1130 audit(1707508516.093:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.101703 kernel: audit: type=1131 audit(1707508516.093:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.102971 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:55:16.103410 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:55:16.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.106739 kernel: audit: type=1130 audit(1707508516.101:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.109245 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:55:16.109400 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:55:16.109575 systemd[1]: Stopped target timers.target.
Feb 9 19:55:16.109800 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:55:16.109865 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:55:16.110101 systemd[1]: Stopped target initrd.target.
Feb 9 19:55:16.112576 kernel: audit: type=1131 audit(1707508516.108:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.112532 systemd[1]: Stopped target basic.target.
Feb 9 19:55:16.112663 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:55:16.112872 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:55:16.113046 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:55:16.113226 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:55:16.113396 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:55:16.113568 systemd[1]: Stopped target sysinit.target.
Feb 9 19:55:16.113751 systemd[1]: Stopped target local-fs.target.
Feb 9 19:55:16.113894 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:55:16.114258 systemd[1]: Stopped target swap.target.
Feb 9 19:55:16.114385 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:55:16.114449 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:55:16.117399 kernel: audit: type=1131 audit(1707508516.112:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.114630 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:55:16.117308 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:55:16.120096 kernel: audit: type=1131 audit(1707508516.115:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.117369 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:55:16.117524 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:55:16.117583 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:55:16.117765 systemd[1]: Stopped target paths.target.
Feb 9 19:55:16.120184 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:55:16.123837 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:55:16.124026 systemd[1]: Stopped target slices.target.
Feb 9 19:55:16.124210 systemd[1]: Stopped target sockets.target.
Feb 9 19:55:16.124393 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:55:16.124462 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:55:16.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.124624 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:55:16.124682 systemd[1]: Stopped ignition-files.service.
Feb 9 19:55:16.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.125282 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:55:16.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.125485 systemd[1]: Stopping iscsid.service...
Feb 9 19:55:16.126948 iscsid[741]: iscsid shutting down.
Feb 9 19:55:16.126000 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:55:16.126090 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:55:16.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.126161 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:55:16.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.126329 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:55:16.135538 ignition[889]: INFO : Ignition 2.14.0
Feb 9 19:55:16.135538 ignition[889]: INFO : Stage: umount
Feb 9 19:55:16.135538 ignition[889]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:55:16.135538 ignition[889]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 9 19:55:16.135538 ignition[889]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 9 19:55:16.126385 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:55:16.127489 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:55:16.127550 systemd[1]: Stopped iscsid.service.
Feb 9 19:55:16.132131 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:55:16.132177 systemd[1]: Closed iscsid.socket.
Feb 9 19:55:16.132352 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:55:16.133272 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:55:16.133316 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:55:16.134044 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:55:16.134086 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:55:16.134229 systemd[1]: Stopped target network.target.
Feb 9 19:55:16.134311 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:55:16.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.134326 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:55:16.141000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:55:16.134440 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:55:16.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.134644 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:55:16.143485 ignition[889]: INFO : umount: umount passed
Feb 9 19:55:16.143485 ignition[889]: INFO : Ignition finished successfully
Feb 9 19:55:16.142000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:55:16.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:16.138250 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:55:16.138294 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:55:16.138444 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:55:16.138461 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:55:16.138891 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:55:16.138983 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:55:16.139012 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:55:16.139138 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Feb 9 19:55:16.139157 systemd[1]: Stopped afterburn-network-kargs.service.
Feb 9 19:55:16.139256 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:55:16.139275 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:55:16.140081 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:55:16.140104 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:55:16.140914 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:55:16.141585 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:55:16.141645 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:55:16.143054 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:55:16.143103 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:55:16.144068 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:55:16.144114 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:55:16.144243 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:55:16.144264 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:55:16.144386 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:55:16.144405 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:55:16.144769 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:55:16.144788 systemd[1]: Stopped ignition-setup.service. Feb 9 19:55:16.144927 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:55:16.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.149244 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:55:16.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.149306 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:55:16.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.149475 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:55:16.149493 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:55:16.149587 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:55:16.149602 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:55:16.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.149686 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:55:16.149704 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:55:16.149954 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:55:16.149972 systemd[1]: Stopped dracut-cmdline.service. 
Feb 9 19:55:16.150093 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:55:16.150112 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:55:16.150610 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:55:16.150784 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:55:16.150809 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:55:16.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.154059 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:55:16.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.154084 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:55:16.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.154183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:55:16.154205 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:55:16.154839 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:55:16.154882 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 19:55:16.155174 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:55:16.155216 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:55:16.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:55:16.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.165012 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:55:16.165059 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:55:16.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.165286 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:55:16.165386 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:55:16.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:16.165408 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:55:16.165914 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:55:16.172340 systemd[1]: Switching root. Feb 9 19:55:16.187862 systemd-journald[217]: Journal stopped Feb 9 19:55:18.941564 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Feb 9 19:55:18.941583 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:55:18.941591 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:55:18.941597 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:55:18.941602 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:55:18.941609 kernel: SELinux: policy capability open_perms=1 Feb 9 19:55:18.941615 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:55:18.941620 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:55:18.941626 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:55:18.941631 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:55:18.941637 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:55:18.941642 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:55:18.941649 systemd[1]: Successfully loaded SELinux policy in 43.249ms. Feb 9 19:55:18.941656 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.606ms. Feb 9 19:55:18.941664 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:55:18.941671 systemd[1]: Detected virtualization vmware. Feb 9 19:55:18.941678 systemd[1]: Detected architecture x86-64. Feb 9 19:55:18.941684 systemd[1]: Detected first boot. Feb 9 19:55:18.941691 systemd[1]: Initializing machine ID from random generator. Feb 9 19:55:18.941697 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:55:18.941703 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:55:18.941710 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 19:55:18.941717 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:55:18.943612 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:55:18.943629 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:55:18.943636 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:55:18.943643 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:55:18.943650 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:55:18.943657 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:55:18.943663 systemd[1]: Created slice system-getty.slice. Feb 9 19:55:18.943669 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:55:18.943682 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:55:18.943689 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:55:18.943695 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:55:18.943701 systemd[1]: Created slice user.slice. Feb 9 19:55:18.943708 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:55:18.943714 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:55:18.943736 systemd[1]: Set up automount boot.automount. Feb 9 19:55:18.943743 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:55:18.943751 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:55:18.943758 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:55:18.943766 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:55:18.943773 systemd[1]: Reached target integritysetup.target. Feb 9 19:55:18.943780 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:55:18.943787 systemd[1]: Reached target remote-fs.target. 
Feb 9 19:55:18.943793 systemd[1]: Reached target slices.target. Feb 9 19:55:18.943800 systemd[1]: Reached target swap.target. Feb 9 19:55:18.943806 systemd[1]: Reached target torcx.target. Feb 9 19:55:18.943814 systemd[1]: Reached target veritysetup.target. Feb 9 19:55:18.943821 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:55:18.943828 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:55:18.943834 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:55:18.943841 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:55:18.943848 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:55:18.943856 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:55:18.943862 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:55:18.943869 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:55:18.943876 systemd[1]: Mounting media.mount... Feb 9 19:55:18.943883 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:55:18.943890 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:55:18.943897 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:55:18.943904 systemd[1]: Mounting tmp.mount... Feb 9 19:55:18.943912 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:55:18.943919 systemd[1]: Starting ignition-delete-config.service... Feb 9 19:55:18.943929 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:55:18.943937 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:55:18.943943 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:55:18.943951 systemd[1]: Starting modprobe@drm.service... Feb 9 19:55:18.943958 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:55:18.943969 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:55:18.943982 systemd[1]: Starting modprobe@loop.service... Feb 9 19:55:18.943991 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 9 19:55:18.943998 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:55:18.944004 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:55:18.944011 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:55:18.944020 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:55:18.944030 systemd[1]: Stopped systemd-journald.service. Feb 9 19:55:18.944042 systemd[1]: Starting systemd-journald.service... Feb 9 19:55:18.944053 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:55:18.944063 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:55:18.944071 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:55:18.944078 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:55:18.944089 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:55:18.944099 systemd[1]: Stopped verity-setup.service. Feb 9 19:55:18.944110 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:55:18.944121 kernel: fuse: init (API version 7.34) Feb 9 19:55:18.944129 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:55:18.944136 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:55:18.944145 systemd[1]: Mounted media.mount. Feb 9 19:55:18.944152 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:55:18.944159 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:55:18.944166 systemd[1]: Mounted tmp.mount. Feb 9 19:55:18.944173 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:55:18.944179 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:55:18.944186 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:55:18.944193 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:55:18.944200 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:55:18.944207 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:55:18.944215 systemd[1]: Finished modprobe@drm.service. 
Feb 9 19:55:18.944221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:55:18.944228 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:55:18.944235 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:55:18.944242 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:55:18.944248 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:55:18.944255 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:55:18.944262 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:55:18.944270 systemd[1]: Reached target network-pre.target. Feb 9 19:55:18.944277 kernel: loop: module loaded Feb 9 19:55:18.944283 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:55:18.944293 systemd-journald[1002]: Journal started Feb 9 19:55:18.944327 systemd-journald[1002]: Runtime Journal (/run/log/journal/fbcac54a02fb47f196a4fffc119fd3e9) is 4.8M, max 38.8M, 34.0M free. Feb 9 19:55:16.286000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:55:16.452000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:55:16.452000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:55:16.452000 audit: BPF prog-id=10 op=LOAD Feb 9 19:55:16.452000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:55:16.452000 audit: BPF prog-id=11 op=LOAD Feb 9 19:55:16.452000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:55:16.687000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:55:16.687000 audit[922]: SYSCALL 
arch=c000003e syscall=188 success=yes exit=0 a0=c0001178e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:16.687000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:55:16.690000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:55:16.690000 audit[922]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179b9 a2=1ed a3=0 items=2 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:16.690000 audit: CWD cwd="/" Feb 9 19:55:16.690000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:16.690000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:16.690000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:55:18.805000 audit: BPF prog-id=12 op=LOAD Feb 9 19:55:18.805000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:55:18.805000 audit: BPF prog-id=13 op=LOAD Feb 9 19:55:18.805000 audit: BPF prog-id=14 op=LOAD Feb 9 19:55:18.805000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:55:18.805000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:55:18.806000 audit: BPF prog-id=15 op=LOAD Feb 9 19:55:18.806000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:55:18.806000 audit: BPF prog-id=16 op=LOAD Feb 9 19:55:18.806000 audit: BPF prog-id=17 op=LOAD Feb 9 19:55:18.806000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:55:18.806000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:55:18.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.811000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:55:18.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:55:18.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.949830 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:55:18.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.884000 audit: BPF prog-id=18 op=LOAD Feb 9 19:55:18.884000 audit: BPF prog-id=19 op=LOAD Feb 9 19:55:18.884000 audit: BPF prog-id=20 op=LOAD Feb 9 19:55:18.885000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:55:18.885000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:55:18.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:55:18.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:55:18.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.937000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:55:18.937000 audit[1002]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fffb84bc8b0 a2=4000 a3=7fffb84bc94c items=0 ppid=1 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:18.937000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:55:18.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.806045 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:55:16.686937 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:55:18.809010 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 19:55:16.687449 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:55:16.687465 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:55:16.687489 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:55:16.687497 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:55:16.687520 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:55:16.687530 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:55:16.687688 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:55:18.952568 jq[989]: true Feb 9 19:55:18.952953 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 9 19:55:16.687716 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:55:16.687735 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:55:16.688608 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:55:16.688634 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:55:16.688648 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:55:16.688659 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:55:16.688671 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:55:16.688682 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:55:18.543155 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:18Z" level=debug 
msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:55:18.543309 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:18Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:55:18.543372 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:18Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:55:18.543466 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:18Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:55:18.953998 jq[1015]: true Feb 9 19:55:18.543497 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:18Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:55:18.543537 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-02-09T19:55:18Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:55:18.958730 systemd[1]: Starting systemd-hwdb-update.service... 
Feb 9 19:55:18.964440 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:55:18.965734 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:55:18.970734 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:55:18.971759 systemd[1]: Started systemd-journald.service. Feb 9 19:55:18.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.974922 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:55:18.975017 systemd[1]: Finished modprobe@loop.service. Feb 9 19:55:18.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.975227 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:55:18.975361 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:55:18.977783 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:55:18.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.978655 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:55:18.979496 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:55:18.979619 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Feb 9 19:55:18.996814 systemd-journald[1002]: Time spent on flushing to /var/log/journal/fbcac54a02fb47f196a4fffc119fd3e9 is 31.648ms for 2026 entries. Feb 9 19:55:18.996814 systemd-journald[1002]: System Journal (/var/log/journal/fbcac54a02fb47f196a4fffc119fd3e9) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:55:19.101946 systemd-journald[1002]: Received client request to flush runtime journal. Feb 9 19:55:18.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:18.998454 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:55:18.999357 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:55:19.102514 udevadm[1047]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:55:19.026075 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:55:19.043436 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:55:19.044359 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:55:19.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.102625 systemd[1]: Finished systemd-journal-flush.service. 
Feb 9 19:55:19.120912 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:55:19.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.121903 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:55:19.235432 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:55:19.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.249365 ignition[1020]: Ignition 2.14.0 Feb 9 19:55:19.249590 ignition[1020]: deleting config from guestinfo properties Feb 9 19:55:19.251830 ignition[1020]: Successfully deleted config Feb 9 19:55:19.252698 systemd[1]: Finished ignition-delete-config.service. Feb 9 19:55:19.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.540176 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:55:19.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.539000 audit: BPF prog-id=21 op=LOAD Feb 9 19:55:19.539000 audit: BPF prog-id=22 op=LOAD Feb 9 19:55:19.539000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:55:19.539000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:55:19.541227 systemd[1]: Starting systemd-udevd.service... Feb 9 19:55:19.551824 systemd-udevd[1056]: Using default interface naming scheme 'v252'. 
Feb 9 19:55:19.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.574000 audit: BPF prog-id=23 op=LOAD Feb 9 19:55:19.574860 systemd[1]: Started systemd-udevd.service. Feb 9 19:55:19.576199 systemd[1]: Starting systemd-networkd.service... Feb 9 19:55:19.582000 audit: BPF prog-id=24 op=LOAD Feb 9 19:55:19.582000 audit: BPF prog-id=25 op=LOAD Feb 9 19:55:19.582000 audit: BPF prog-id=26 op=LOAD Feb 9 19:55:19.584493 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:55:19.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.606565 systemd[1]: Started systemd-userdbd.service. Feb 9 19:55:19.611984 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:55:19.645735 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:55:19.651050 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:55:19.696201 systemd-networkd[1065]: lo: Link UP Feb 9 19:55:19.696206 systemd-networkd[1065]: lo: Gained carrier Feb 9 19:55:19.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.696964 systemd-networkd[1065]: Enumeration completed Feb 9 19:55:19.697021 systemd[1]: Started systemd-networkd.service. Feb 9 19:55:19.697332 systemd-networkd[1065]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Feb 9 19:55:19.700423 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 9 19:55:19.700550 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 9 19:55:19.701791 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Feb 9 19:55:19.702145 systemd-networkd[1065]: ens192: Link UP Feb 9 19:55:19.702266 systemd-networkd[1065]: ens192: Gained carrier Feb 9 19:55:19.702000 audit[1067]: AVC avc: denied { confidentiality } for pid=1067 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:55:19.717743 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1070) Feb 9 19:55:19.702000 audit[1067]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a43fa4c880 a1=32194 a2=7f2678633bc5 a3=5 items=108 ppid=1056 pid=1067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:19.702000 audit: CWD cwd="/" Feb 9 19:55:19.702000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=1 name=(null) inode=24827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=2 name=(null) inode=24827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=3 name=(null) inode=24828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=4 name=(null) inode=24827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=5 name=(null) inode=24829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=6 name=(null) inode=24827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=7 name=(null) inode=24830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=8 name=(null) inode=24830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=9 name=(null) inode=24831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=10 name=(null) inode=24830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=11 name=(null) inode=24832 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=12 name=(null) inode=24830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 
19:55:19.702000 audit: PATH item=13 name=(null) inode=24833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=14 name=(null) inode=24830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=15 name=(null) inode=24834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=16 name=(null) inode=24830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=17 name=(null) inode=24835 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=18 name=(null) inode=24827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=19 name=(null) inode=24836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=20 name=(null) inode=24836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=21 name=(null) inode=24837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=22 name=(null) 
inode=24836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=23 name=(null) inode=24838 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=24 name=(null) inode=24836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=25 name=(null) inode=24839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=26 name=(null) inode=24836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=27 name=(null) inode=24840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=28 name=(null) inode=24836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=29 name=(null) inode=24841 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=30 name=(null) inode=24827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=31 name=(null) inode=24842 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=32 name=(null) inode=24842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=33 name=(null) inode=24843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=34 name=(null) inode=24842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=35 name=(null) inode=24844 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=36 name=(null) inode=24842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=37 name=(null) inode=24845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=38 name=(null) inode=24842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=39 name=(null) inode=24846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=40 name=(null) inode=24842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=41 name=(null) inode=24847 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=42 name=(null) inode=24827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=43 name=(null) inode=24848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=44 name=(null) inode=24848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=45 name=(null) inode=24849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=46 name=(null) inode=24848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=47 name=(null) inode=24850 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=48 name=(null) inode=24848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=49 name=(null) inode=24851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=50 name=(null) inode=24848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=51 name=(null) inode=24852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=52 name=(null) inode=24848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=53 name=(null) inode=24853 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=55 name=(null) inode=24854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=56 name=(null) inode=24854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=57 name=(null) inode=24855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=58 name=(null) inode=24854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: 
PATH item=59 name=(null) inode=24856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=60 name=(null) inode=24854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=61 name=(null) inode=24857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=62 name=(null) inode=24857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=63 name=(null) inode=24858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=64 name=(null) inode=24857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=65 name=(null) inode=24859 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=66 name=(null) inode=24857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=67 name=(null) inode=24860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=68 name=(null) inode=24857 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=69 name=(null) inode=24861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=70 name=(null) inode=24857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=71 name=(null) inode=24862 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=72 name=(null) inode=24854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=73 name=(null) inode=24863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=74 name=(null) inode=24863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=75 name=(null) inode=24864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=76 name=(null) inode=24863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=77 name=(null) inode=24865 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=78 name=(null) inode=24863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=79 name=(null) inode=24866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=80 name=(null) inode=24863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=81 name=(null) inode=24867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=82 name=(null) inode=24863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=83 name=(null) inode=24868 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=84 name=(null) inode=24854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=85 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=86 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=87 name=(null) inode=24870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=88 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=89 name=(null) inode=24871 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=90 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=91 name=(null) inode=24872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=92 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=93 name=(null) inode=24873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=94 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=95 name=(null) inode=24874 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=96 name=(null) inode=24854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=97 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=98 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=99 name=(null) inode=24876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=100 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=101 name=(null) inode=24877 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=102 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=103 name=(null) inode=24878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=104 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH 
item=105 name=(null) inode=24879 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=106 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PATH item=107 name=(null) inode=24880 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:19.702000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:55:19.730732 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Feb 9 19:55:19.746743 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Feb 9 19:55:19.746904 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Feb 9 19:55:19.747031 kernel: Guest personality initialized and is active Feb 9 19:55:19.747048 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:55:19.749185 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 9 19:55:19.749214 kernel: Initialized host personality Feb 9 19:55:19.768732 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:55:19.771020 (udev-worker)[1061]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Feb 9 19:55:19.773463 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:55:19.786971 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:55:19.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:19.788059 systemd[1]: Starting lvm2-activation-early.service... 
Feb 9 19:55:19.836768 lvm[1089]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:55:19.862263 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:55:19.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:19.862458 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:55:19.863398 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:55:19.865806 lvm[1090]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:55:19.888243 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:55:19.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:19.888425 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:55:19.888522 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:55:19.888541 systemd[1]: Reached target local-fs.target.
Feb 9 19:55:19.888639 systemd[1]: Reached target machines.target.
Feb 9 19:55:19.889585 systemd[1]: Starting ldconfig.service...
Feb 9 19:55:19.897429 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:55:19.897465 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:55:19.898309 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:55:19.899100 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:55:19.899918 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:55:19.900080 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:55:19.900111 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:55:19.900858 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:55:19.907952 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1092 (bootctl)
Feb 9 19:55:19.908620 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:55:19.918677 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:55:19.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:19.922873 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:55:19.935223 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:55:19.945502 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:55:20.400709 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:55:20.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:20.401899 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:55:20.426368 systemd-fsck[1100]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:55:20.426368 systemd-fsck[1100]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 9 19:55:20.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:20.428951 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:55:20.429956 systemd[1]: Mounting boot.mount...
Feb 9 19:55:20.441310 systemd[1]: Mounted boot.mount.
Feb 9 19:55:20.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:20.451236 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:55:20.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:20.523220 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:55:20.524356 systemd[1]: Starting audit-rules.service...
Feb 9 19:55:20.525312 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:55:20.525000 audit: BPF prog-id=27 op=LOAD
Feb 9 19:55:20.526174 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:55:20.527475 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:55:20.526000 audit: BPF prog-id=28 op=LOAD
Feb 9 19:55:20.528670 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:55:20.529846 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:55:20.540674 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:55:20.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:20.540923 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:55:20.539000 audit[1109]: SYSTEM_BOOT pid=1109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:20.545273 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:55:20.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:20.563078 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:55:20.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:20.568000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:55:20.568000 audit[1123]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc94819d20 a2=420 a3=0 items=0 ppid=1103 pid=1123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:55:20.568000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:55:20.570419 augenrules[1123]: No rules
Feb 9 19:55:20.570805 systemd[1]: Finished audit-rules.service.
Feb 9 19:55:20.591894 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:55:20.592074 systemd[1]: Reached target time-set.target.
Feb 9 19:55:20.596334 systemd-resolved[1106]: Positive Trust Anchors:
Feb 9 19:55:20.596343 systemd-resolved[1106]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:55:20.596362 systemd-resolved[1106]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:55:20.645894 systemd-resolved[1106]: Defaulting to hostname 'linux'.
Feb 9 19:55:20.647233 systemd[1]: Started systemd-resolved.service.
Feb 9 19:55:20.647423 systemd[1]: Reached target network.target.
Feb 9 19:55:20.647536 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:55:20.734827 systemd-networkd[1065]: ens192: Gained IPv6LL
Feb 9 19:56:04.830724 systemd-resolved[1106]: Clock change detected. Flushing caches.
Feb 9 19:56:04.830929 systemd-timesyncd[1108]: Contacted time server 72.46.61.205:123 (0.flatcar.pool.ntp.org).
Feb 9 19:56:04.831027 systemd-timesyncd[1108]: Initial clock synchronization to Fri 2024-02-09 19:56:04.830681 UTC.
Feb 9 19:56:04.852491 ldconfig[1091]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:56:04.867004 systemd[1]: Finished ldconfig.service.
Feb 9 19:56:04.868095 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:56:04.872189 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:56:04.872346 systemd[1]: Reached target sysinit.target.
Feb 9 19:56:04.872489 systemd[1]: Started motdgen.path.
Feb 9 19:56:04.872582 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:56:04.872757 systemd[1]: Started logrotate.timer.
Feb 9 19:56:04.872873 systemd[1]: Started mdadm.timer.
Feb 9 19:56:04.872957 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:56:04.873044 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:56:04.873061 systemd[1]: Reached target paths.target.
Feb 9 19:56:04.873156 systemd[1]: Reached target timers.target.
Feb 9 19:56:04.873390 systemd[1]: Listening on dbus.socket.
Feb 9 19:56:04.874204 systemd[1]: Starting docker.socket...
Feb 9 19:56:04.876411 systemd[1]: Listening on sshd.socket.
Feb 9 19:56:04.876590 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:56:04.876831 systemd[1]: Listening on docker.socket.
Feb 9 19:56:04.876951 systemd[1]: Reached target sockets.target.
Feb 9 19:56:04.877036 systemd[1]: Reached target basic.target.
Feb 9 19:56:04.877143 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:56:04.877155 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:56:04.877950 systemd[1]: Starting containerd.service...
Feb 9 19:56:04.878714 systemd[1]: Starting dbus.service...
Feb 9 19:56:04.879932 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:56:04.882041 jq[1134]: false
Feb 9 19:56:04.882621 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:56:04.882767 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:56:04.883466 systemd[1]: Starting motdgen.service...
Feb 9 19:56:04.884337 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:56:04.885079 systemd[1]: Starting prepare-critools.service...
Feb 9 19:56:04.886068 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:56:04.887211 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:56:04.889853 systemd[1]: Starting systemd-logind.service...
Feb 9 19:56:04.889961 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:56:04.889987 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:56:04.890410 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 19:56:04.890909 systemd[1]: Starting update-engine.service...
Feb 9 19:56:04.892986 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:56:04.893853 systemd[1]: Starting vmtoolsd.service...
Feb 9 19:56:04.897894 jq[1146]: true
Feb 9 19:56:04.894794 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:56:04.894883 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:56:04.903053 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:56:04.903150 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:56:04.907680 jq[1153]: true
Feb 9 19:56:04.910820 systemd[1]: Started vmtoolsd.service.
Feb 9 19:56:04.924971 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:56:04.925099 systemd[1]: Finished motdgen.service.
Feb 9 19:56:04.927935 extend-filesystems[1135]: Found sda
Feb 9 19:56:04.927935 extend-filesystems[1135]: Found sda1
Feb 9 19:56:04.927935 extend-filesystems[1135]: Found sda2
Feb 9 19:56:04.927935 extend-filesystems[1135]: Found sda3
Feb 9 19:56:04.927935 extend-filesystems[1135]: Found usr
Feb 9 19:56:04.927935 extend-filesystems[1135]: Found sda4
Feb 9 19:56:04.927935 extend-filesystems[1135]: Found sda6
Feb 9 19:56:04.927935 extend-filesystems[1135]: Found sda7
Feb 9 19:56:04.927935 extend-filesystems[1135]: Found sda9
Feb 9 19:56:04.927935 extend-filesystems[1135]: Checking size of /dev/sda9
Feb 9 19:56:04.942364 env[1156]: time="2024-02-09T19:56:04.942324368Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:56:04.955983 tar[1149]: ./
Feb 9 19:56:04.955983 tar[1149]: ./macvlan
Feb 9 19:56:04.955870 systemd-logind[1141]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 9 19:56:04.955881 systemd-logind[1141]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:56:04.956014 systemd-logind[1141]: New seat seat0.
Feb 9 19:56:04.960398 tar[1150]: crictl
Feb 9 19:56:04.964414 extend-filesystems[1135]: Old size kept for /dev/sda9
Feb 9 19:56:04.964414 extend-filesystems[1135]: Found sr0
Feb 9 19:56:04.964560 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:56:04.964648 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:56:04.970216 bash[1177]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:56:04.970728 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:56:04.981390 dbus-daemon[1133]: [system] SELinux support is enabled
Feb 9 19:56:04.981669 systemd[1]: Started dbus.service.
Feb 9 19:56:04.982934 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:56:04.982950 systemd[1]: Reached target system-config.target.
Feb 9 19:56:04.983064 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:56:04.983073 systemd[1]: Reached target user-config.target.
Feb 9 19:56:04.986181 systemd[1]: Started systemd-logind.service.
Feb 9 19:56:04.992842 kernel: NET: Registered PF_VSOCK protocol family
Feb 9 19:56:04.992911 update_engine[1144]: I0209 19:56:04.992393  1144 main.cc:92] Flatcar Update Engine starting
Feb 9 19:56:04.998272 systemd[1]: Started update-engine.service.
Feb 9 19:56:05.000031 systemd[1]: Started locksmithd.service.
Feb 9 19:56:05.000768 update_engine[1144]: I0209 19:56:05.000747  1144 update_check_scheduler.cc:74] Next update check in 8m14s
Feb 9 19:56:05.003878 env[1156]: time="2024-02-09T19:56:05.003791017Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:56:05.003930 env[1156]: time="2024-02-09T19:56:05.003894140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:56:05.005865 env[1156]: time="2024-02-09T19:56:05.005845539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:56:05.005903 env[1156]: time="2024-02-09T19:56:05.005862561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:56:05.006150 env[1156]: time="2024-02-09T19:56:05.006122812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:56:05.006150 env[1156]: time="2024-02-09T19:56:05.006147489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:56:05.006195 env[1156]: time="2024-02-09T19:56:05.006156142Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:56:05.006195 env[1156]: time="2024-02-09T19:56:05.006161931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:56:05.006228 env[1156]: time="2024-02-09T19:56:05.006218511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:56:05.006443 env[1156]: time="2024-02-09T19:56:05.006386247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:56:05.006500 env[1156]: time="2024-02-09T19:56:05.006471083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:56:05.006500 env[1156]: time="2024-02-09T19:56:05.006489259Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:56:05.006559 env[1156]: time="2024-02-09T19:56:05.006530157Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:56:05.006559 env[1156]: time="2024-02-09T19:56:05.006538750Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:56:05.010994 env[1156]: time="2024-02-09T19:56:05.010936686Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:56:05.010994 env[1156]: time="2024-02-09T19:56:05.010978117Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:56:05.010994 env[1156]: time="2024-02-09T19:56:05.010990839Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:56:05.011084 env[1156]: time="2024-02-09T19:56:05.011021738Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:56:05.011084 env[1156]: time="2024-02-09T19:56:05.011036192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:56:05.011084 env[1156]: time="2024-02-09T19:56:05.011049272Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:56:05.011084 env[1156]: time="2024-02-09T19:56:05.011056991Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:56:05.011084 env[1156]: time="2024-02-09T19:56:05.011064445Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:56:05.011084 env[1156]: time="2024-02-09T19:56:05.011071287Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:56:05.011084 env[1156]: time="2024-02-09T19:56:05.011078592Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:56:05.011196 env[1156]: time="2024-02-09T19:56:05.011086470Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:56:05.011196 env[1156]: time="2024-02-09T19:56:05.011093619Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:56:05.011196 env[1156]: time="2024-02-09T19:56:05.011151635Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:56:05.011273 env[1156]: time="2024-02-09T19:56:05.011209985Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:56:05.011463 env[1156]: time="2024-02-09T19:56:05.011347100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:56:05.011463 env[1156]: time="2024-02-09T19:56:05.011365426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.011463 env[1156]: time="2024-02-09T19:56:05.011373279Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:56:05.011463 env[1156]: time="2024-02-09T19:56:05.011401209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.011463 env[1156]: time="2024-02-09T19:56:05.011408345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.011463 env[1156]: time="2024-02-09T19:56:05.011415315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.011463 env[1156]: time="2024-02-09T19:56:05.011421492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.015495 env[1156]: time="2024-02-09T19:56:05.015462055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.015495 env[1156]: time="2024-02-09T19:56:05.015484118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.015495 env[1156]: time="2024-02-09T19:56:05.015495020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.015585 env[1156]: time="2024-02-09T19:56:05.015502227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.015585 env[1156]: time="2024-02-09T19:56:05.015513512Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:56:05.015707 env[1156]: time="2024-02-09T19:56:05.015690150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.015707 env[1156]: time="2024-02-09T19:56:05.015704319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.015756 env[1156]: time="2024-02-09T19:56:05.015713319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.015756 env[1156]: time="2024-02-09T19:56:05.015720144Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:56:05.015756 env[1156]: time="2024-02-09T19:56:05.015729409Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:56:05.015756 env[1156]: time="2024-02-09T19:56:05.015735720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:56:05.015756 env[1156]: time="2024-02-09T19:56:05.015746204Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:56:05.015847 env[1156]: time="2024-02-09T19:56:05.015769372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 19:56:05.015927 env[1156]: time="2024-02-09T19:56:05.015894180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 19:56:05.017630 env[1156]: time="2024-02-09T19:56:05.015931066Z" level=info msg="Connect containerd service"
Feb 9 19:56:05.017630 env[1156]: time="2024-02-09T19:56:05.015961451Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 19:56:05.021790 env[1156]: time="2024-02-09T19:56:05.021465453Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:56:05.021790 env[1156]: time="2024-02-09T19:56:05.021659500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 19:56:05.021790 env[1156]: time="2024-02-09T19:56:05.021687102Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 19:56:05.021755 systemd[1]: Started containerd.service.
Feb 9 19:56:05.025129 env[1156]: time="2024-02-09T19:56:05.025108160Z" level=info msg="containerd successfully booted in 0.084441s"
Feb 9 19:56:05.026444 env[1156]: time="2024-02-09T19:56:05.025825259Z" level=info msg="Start subscribing containerd event"
Feb 9 19:56:05.026444 env[1156]: time="2024-02-09T19:56:05.025930407Z" level=info msg="Start recovering state"
Feb 9 19:56:05.026444 env[1156]: time="2024-02-09T19:56:05.025991068Z" level=info msg="Start event monitor"
Feb 9 19:56:05.026444 env[1156]: time="2024-02-09T19:56:05.026004537Z" level=info msg="Start snapshots syncer"
Feb 9 19:56:05.026444 env[1156]: time="2024-02-09T19:56:05.026043760Z" level=info msg="Start cni network conf syncer for default"
Feb 9 19:56:05.026444 env[1156]: time="2024-02-09T19:56:05.026051258Z" level=info msg="Start streaming server"
Feb 9 19:56:05.062940 tar[1149]: ./static
Feb 9 19:56:05.095946 tar[1149]: ./vlan
Feb 9 19:56:05.141879 tar[1149]: ./portmap
Feb 9 19:56:05.189850 tar[1149]: ./host-local
Feb 9 19:56:05.217858 tar[1149]: ./vrf
Feb 9 19:56:05.256367 tar[1149]: ./bridge
Feb 9 19:56:05.298007 tar[1149]: ./tuning
Feb 9 19:56:05.335670 tar[1149]: ./firewall
Feb 9 19:56:05.384808 tar[1149]: ./host-device
Feb 9 19:56:05.428117 tar[1149]: ./sbr
Feb 9 19:56:05.463763 tar[1149]: ./loopback
Feb 9 19:56:05.481644 sshd_keygen[1154]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 19:56:05.496980 tar[1149]: ./dhcp
Feb 9 19:56:05.498416 systemd[1]: Finished sshd-keygen.service.
Feb 9 19:56:05.499626 systemd[1]: Starting issuegen.service...
Feb 9 19:56:05.503035 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 19:56:05.503130 systemd[1]: Finished issuegen.service.
Feb 9 19:56:05.504259 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 19:56:05.517313 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 19:56:05.518404 systemd[1]: Started getty@tty1.service.
Feb 9 19:56:05.519392 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 19:56:05.519599 systemd[1]: Reached target getty.target.
Feb 9 19:56:05.536919 systemd[1]: Finished prepare-critools.service.
Feb 9 19:56:05.574973 tar[1149]: ./ptp
Feb 9 19:56:05.597142 tar[1149]: ./ipvlan
Feb 9 19:56:05.618262 tar[1149]: ./bandwidth
Feb 9 19:56:05.634079 locksmithd[1196]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:56:05.644557 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 19:56:05.644856 systemd[1]: Reached target multi-user.target.
Feb 9 19:56:05.645863 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:56:05.650067 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:56:05.650166 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:56:05.650344 systemd[1]: Startup finished in 871ms (kernel) + 10.687s (initrd) + 5.411s (userspace) = 16.969s.
Feb 9 19:56:05.671219 login[1256]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Feb 9 19:56:05.672906 login[1257]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:56:05.679600 systemd[1]: Created slice user-500.slice.
Feb 9 19:56:05.680408 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:56:05.683180 systemd-logind[1141]: New session 1 of user core.
Feb 9 19:56:05.685835 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:56:05.686804 systemd[1]: Starting user@500.service...
Feb 9 19:56:05.688866 (systemd)[1268]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:56:05.757588 systemd[1268]: Queued start job for default target default.target.
Feb 9 19:56:05.758266 systemd[1268]: Reached target paths.target.
Feb 9 19:56:05.758366 systemd[1268]: Reached target sockets.target.
Feb 9 19:56:05.758464 systemd[1268]: Reached target timers.target.
Feb 9 19:56:05.758545 systemd[1268]: Reached target basic.target.
Feb 9 19:56:05.758648 systemd[1268]: Reached target default.target.
Feb 9 19:56:05.758692 systemd[1]: Started user@500.service.
Feb 9 19:56:05.758776 systemd[1268]: Startup finished in 66ms.
Feb 9 19:56:05.759816 systemd[1]: Started session-1.scope.
Feb 9 19:56:06.671511 login[1256]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:56:06.673863 systemd-logind[1141]: New session 2 of user core.
Feb 9 19:56:06.674573 systemd[1]: Started session-2.scope.
Feb 9 19:56:45.043449 systemd[1]: Created slice system-sshd.slice.
Feb 9 19:56:45.044350 systemd[1]: Started sshd@0-139.178.70.104:22-139.178.89.65:42874.service.
Feb 9 19:56:45.120868 sshd[1289]: Accepted publickey for core from 139.178.89.65 port 42874 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:56:45.121776 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:56:45.125693 systemd[1]: Started session-3.scope.
Feb 9 19:56:45.125995 systemd-logind[1141]: New session 3 of user core.
Feb 9 19:56:45.173653 systemd[1]: Started sshd@1-139.178.70.104:22-139.178.89.65:42876.service.
Feb 9 19:56:45.200322 sshd[1294]: Accepted publickey for core from 139.178.89.65 port 42876 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:56:45.201171 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:56:45.204229 systemd[1]: Started session-4.scope.
Feb 9 19:56:45.204460 systemd-logind[1141]: New session 4 of user core.
Feb 9 19:56:45.253798 sshd[1294]: pam_unix(sshd:session): session closed for user core
Feb 9 19:56:45.255513 systemd[1]: Started sshd@2-139.178.70.104:22-139.178.89.65:42880.service.
Feb 9 19:56:45.257300 systemd-logind[1141]: Session 4 logged out. Waiting for processes to exit.
Feb 9 19:56:45.257441 systemd[1]: sshd@1-139.178.70.104:22-139.178.89.65:42876.service: Deactivated successfully.
Feb 9 19:56:45.257801 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:56:45.258739 systemd-logind[1141]: Removed session 4. Feb 9 19:56:45.283599 sshd[1299]: Accepted publickey for core from 139.178.89.65 port 42880 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:56:45.284515 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:56:45.287224 systemd[1]: Started session-5.scope. Feb 9 19:56:45.287573 systemd-logind[1141]: New session 5 of user core. Feb 9 19:56:45.334552 sshd[1299]: pam_unix(sshd:session): session closed for user core Feb 9 19:56:45.336703 systemd[1]: Started sshd@3-139.178.70.104:22-139.178.89.65:42884.service. Feb 9 19:56:45.338584 systemd[1]: sshd@2-139.178.70.104:22-139.178.89.65:42880.service: Deactivated successfully. Feb 9 19:56:45.338979 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:56:45.339602 systemd-logind[1141]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:56:45.340078 systemd-logind[1141]: Removed session 5. Feb 9 19:56:45.366920 sshd[1305]: Accepted publickey for core from 139.178.89.65 port 42884 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:56:45.367659 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:56:45.370483 systemd-logind[1141]: New session 6 of user core. Feb 9 19:56:45.370564 systemd[1]: Started session-6.scope. Feb 9 19:56:45.419693 sshd[1305]: pam_unix(sshd:session): session closed for user core Feb 9 19:56:45.421550 systemd[1]: sshd@3-139.178.70.104:22-139.178.89.65:42884.service: Deactivated successfully. Feb 9 19:56:45.421854 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:56:45.422307 systemd-logind[1141]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:56:45.422880 systemd[1]: Started sshd@4-139.178.70.104:22-139.178.89.65:42886.service. Feb 9 19:56:45.423423 systemd-logind[1141]: Removed session 6. 
Feb 9 19:56:45.448601 sshd[1312]: Accepted publickey for core from 139.178.89.65 port 42886 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:56:45.449488 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:56:45.451823 systemd-logind[1141]: New session 7 of user core. Feb 9 19:56:45.452256 systemd[1]: Started session-7.scope. Feb 9 19:56:45.630299 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:56:45.631073 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:56:46.174674 systemd[1]: Reloading. Feb 9 19:56:46.208221 /usr/lib/systemd/system-generators/torcx-generator[1344]: time="2024-02-09T19:56:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:56:46.208238 /usr/lib/systemd/system-generators/torcx-generator[1344]: time="2024-02-09T19:56:46Z" level=info msg="torcx already run" Feb 9 19:56:46.270591 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:56:46.270608 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:56:46.285959 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:56:46.334692 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:56:46.349059 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:56:46.349522 systemd[1]: Reached target network-online.target. 
Feb 9 19:56:46.350687 systemd[1]: Started kubelet.service. Feb 9 19:56:46.355552 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Feb 9 19:56:46.358364 systemd[1]: Starting coreos-metadata.service... Feb 9 19:56:46.387004 kubelet[1404]: E0209 19:56:46.386962 1404 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:56:46.388265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:56:46.388338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:56:46.408096 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 19:56:46.408206 systemd[1]: Finished coreos-metadata.service. Feb 9 19:56:47.078743 systemd[1]: Stopped kubelet.service. Feb 9 19:56:47.088889 systemd[1]: Reloading. Feb 9 19:56:47.144779 /usr/lib/systemd/system-generators/torcx-generator[1476]: time="2024-02-09T19:56:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:56:47.144801 /usr/lib/systemd/system-generators/torcx-generator[1476]: time="2024-02-09T19:56:47Z" level=info msg="torcx already run" Feb 9 19:56:47.196320 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:56:47.196550 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 19:56:47.211426 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:56:47.262927 systemd[1]: Started kubelet.service. Feb 9 19:56:47.289209 kubelet[1535]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:56:47.289209 kubelet[1535]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:56:47.289451 kubelet[1535]: I0209 19:56:47.289244 1535 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:56:47.290151 kubelet[1535]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:56:47.290151 kubelet[1535]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:56:47.773777 kubelet[1535]: I0209 19:56:47.773760 1535 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:56:47.773881 kubelet[1535]: I0209 19:56:47.773873 1535 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:56:47.774097 kubelet[1535]: I0209 19:56:47.774089 1535 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:56:47.775447 kubelet[1535]: I0209 19:56:47.775419 1535 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:56:47.777291 kubelet[1535]: I0209 19:56:47.777279 1535 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:56:47.777497 kubelet[1535]: I0209 19:56:47.777489 1535 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:56:47.777580 kubelet[1535]: I0209 19:56:47.777572 1535 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] 
ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:56:47.777672 kubelet[1535]: I0209 19:56:47.777664 1535 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:56:47.777720 kubelet[1535]: I0209 19:56:47.777713 1535 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:56:47.777829 kubelet[1535]: I0209 19:56:47.777821 1535 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:56:47.781154 kubelet[1535]: I0209 19:56:47.781143 1535 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:56:47.781223 kubelet[1535]: I0209 19:56:47.781215 1535 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:56:47.781273 kubelet[1535]: I0209 19:56:47.781266 1535 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:56:47.781325 kubelet[1535]: I0209 19:56:47.781318 1535 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:56:47.781764 kubelet[1535]: E0209 19:56:47.781658 1535 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:56:47.781764 kubelet[1535]: E0209 19:56:47.781722 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:56:47.781885 kubelet[1535]: I0209 19:56:47.781877 1535 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:56:47.782153 kubelet[1535]: W0209 19:56:47.782146 1535 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 19:56:47.782417 kubelet[1535]: I0209 19:56:47.782410 1535 server.go:1186] "Started kubelet" Feb 9 19:56:47.783056 kubelet[1535]: I0209 19:56:47.783036 1535 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:56:47.792946 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:56:47.792991 kubelet[1535]: I0209 19:56:47.783597 1535 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:56:47.793077 kubelet[1535]: I0209 19:56:47.793069 1535 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:56:47.793743 kubelet[1535]: E0209 19:56:47.793718 1535 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:56:47.793743 kubelet[1535]: E0209 19:56:47.793738 1535 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:56:47.794950 kubelet[1535]: E0209 19:56:47.794879 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a0831caa452", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, 
time.February, 9, 19, 56, 47, 782397010, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 782397010, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:56:47.795118 kubelet[1535]: W0209 19:56:47.795104 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:56:47.795147 kubelet[1535]: E0209 19:56:47.795127 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:56:47.795166 kubelet[1535]: W0209 19:56:47.795151 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:56:47.795166 kubelet[1535]: E0209 19:56:47.795159 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:56:47.798398 kubelet[1535]: E0209 19:56:47.798358 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a0832779264", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, 
CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 793730148, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 793730148, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:47.798497 kubelet[1535]: I0209 19:56:47.798470 1535 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:56:47.798523 kubelet[1535]: I0209 19:56:47.798508 1535 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:56:47.798953 kubelet[1535]: W0209 19:56:47.798931 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:56:47.798953 kubelet[1535]: E0209 19:56:47.798944 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:56:47.799279 kubelet[1535]: E0209 19:56:47.799261 1535 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.67.124.136" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:56:47.807371 kubelet[1535]: I0209 19:56:47.807348 1535 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:56:47.807371 kubelet[1535]: I0209 19:56:47.807358 1535 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:56:47.807371 kubelet[1535]: I0209 19:56:47.807366 1535 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:56:47.807622 kubelet[1535]: E0209 19:56:47.807584 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341a6d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.136 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806973657, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806973657, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:47.808124 kubelet[1535]: E0209 19:56:47.808094 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341caba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.136 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806982842, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806982842, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:47.808462 kubelet[1535]: E0209 19:56:47.808426 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341d01c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.136 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806984220, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806984220, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:56:47.827520 kubelet[1535]: I0209 19:56:47.827495 1535 policy_none.go:49] "None policy: Start" Feb 9 19:56:47.828045 kubelet[1535]: I0209 19:56:47.828030 1535 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:56:47.828108 kubelet[1535]: I0209 19:56:47.828071 1535 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:56:47.850367 systemd[1]: Created slice kubepods.slice. Feb 9 19:56:47.855033 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 9 19:56:47.858182 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:56:47.863025 kubelet[1535]: I0209 19:56:47.863011 1535 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:56:47.864854 kubelet[1535]: E0209 19:56:47.864799 1535 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.136\" not found" Feb 9 19:56:47.864932 kubelet[1535]: E0209 19:56:47.864850 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a0836a699f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 863921145, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 863921145, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:47.865101 kubelet[1535]: I0209 19:56:47.865082 1535 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:56:47.899090 kubelet[1535]: I0209 19:56:47.899070 1535 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.136" Feb 9 19:56:47.899935 kubelet[1535]: E0209 19:56:47.899892 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341a6d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.136 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806973657, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 899029347, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341a6d9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:47.900064 kubelet[1535]: E0209 19:56:47.900055 1535 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.136" Feb 9 19:56:47.900383 kubelet[1535]: E0209 19:56:47.900345 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341caba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.136 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806982842, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 899040190, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341caba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:47.900920 kubelet[1535]: E0209 19:56:47.900891 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341d01c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.136 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806984220, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 899043415, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341d01c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:56:48.000952 kubelet[1535]: E0209 19:56:48.000934 1535 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.67.124.136" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:56:48.049198 kubelet[1535]: I0209 19:56:48.049181 1535 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:56:48.069003 kubelet[1535]: I0209 19:56:48.068982 1535 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:56:48.069003 kubelet[1535]: I0209 19:56:48.069006 1535 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:56:48.069100 kubelet[1535]: I0209 19:56:48.069021 1535 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:56:48.069100 kubelet[1535]: E0209 19:56:48.069053 1535 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:56:48.069955 kubelet[1535]: W0209 19:56:48.069945 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:56:48.070018 kubelet[1535]: E0209 19:56:48.070009 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:56:48.100981 kubelet[1535]: I0209 19:56:48.100962 1535 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.136" Feb 9 19:56:48.101861 kubelet[1535]: E0209 19:56:48.101844 1535 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.136" Feb 9 19:56:48.102057 kubelet[1535]: E0209 19:56:48.101955 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341a6d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.136 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806973657, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 48, 100940862, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341a6d9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:48.102871 kubelet[1535]: E0209 19:56:48.102821 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341caba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.136 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806982842, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 48, 100944008, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341caba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:48.184265 kubelet[1535]: E0209 19:56:48.184191 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341d01c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.136 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806984220, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 48, 100945364, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341d01c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:48.402580 kubelet[1535]: E0209 19:56:48.402495 1535 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.67.124.136" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:56:48.503288 kubelet[1535]: I0209 19:56:48.503264 1535 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.136" Feb 9 19:56:48.504226 kubelet[1535]: E0209 19:56:48.504174 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341a6d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.136 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806973657, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 48, 503238592, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341a6d9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:48.504413 kubelet[1535]: E0209 19:56:48.504232 1535 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.136" Feb 9 19:56:48.584613 kubelet[1535]: E0209 19:56:48.584539 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341caba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.136 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806982842, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 48, 503242362, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341caba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:48.750420 kubelet[1535]: W0209 19:56:48.750340 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:56:48.750420 kubelet[1535]: E0209 19:56:48.750369 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:56:48.782869 kubelet[1535]: E0209 19:56:48.782847 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:56:48.783776 kubelet[1535]: E0209 19:56:48.783690 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341d01c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.136 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806984220, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 48, 503245441, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341d01c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:56:48.784297 kubelet[1535]: W0209 19:56:48.784279 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:56:48.784389 kubelet[1535]: E0209 19:56:48.784379 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:56:48.928504 kubelet[1535]: W0209 19:56:48.928474 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:56:48.928504 kubelet[1535]: E0209 19:56:48.928502 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:56:48.989373 kubelet[1535]: W0209 19:56:48.989351 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:56:48.989373 kubelet[1535]: E0209 19:56:48.989372 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:56:49.204079 kubelet[1535]: E0209 19:56:49.204057 1535 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.67.124.136" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:56:49.305173 kubelet[1535]: I0209 19:56:49.305147 1535 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.136" Feb 9 19:56:49.306010 kubelet[1535]: E0209 19:56:49.305995 1535 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.136" Feb 9 19:56:49.306096 kubelet[1535]: E0209 19:56:49.305992 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341a6d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.136 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806973657, time.Local), LastTimestamp:time.Date(2024, 
time.February, 9, 19, 56, 49, 305119096, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341a6d9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:56:49.306682 kubelet[1535]: E0209 19:56:49.306641 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341caba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.136 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806982842, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 49, 305126192, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341caba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:49.383647 kubelet[1535]: E0209 19:56:49.383551 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341d01c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.136 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806984220, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 49, 305128195, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341d01c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:56:49.783601 kubelet[1535]: E0209 19:56:49.783577 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:56:50.683507 update_engine[1144]: I0209 19:56:50.683460 1144 update_attempter.cc:509] Updating boot flags... 
Feb 9 19:56:50.784645 kubelet[1535]: E0209 19:56:50.784620 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:56:50.805160 kubelet[1535]: E0209 19:56:50.805140 1535 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.67.124.136" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:56:50.908458 kubelet[1535]: I0209 19:56:50.908232 1535 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.136" Feb 9 19:56:50.909380 kubelet[1535]: E0209 19:56:50.909174 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341a6d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.136 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806973657, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 50, 908206553, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.67.124.136.17b24a083341a6d9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:56:50.909380 kubelet[1535]: E0209 19:56:50.909367 1535 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.136" Feb 9 19:56:50.909873 kubelet[1535]: E0209 19:56:50.909692 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341caba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.136 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806982842, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 50, 908211550, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341caba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:50.910245 kubelet[1535]: E0209 19:56:50.910204 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341d01c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.136 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806984220, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 50, 908213972, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341d01c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:51.237477 kubelet[1535]: W0209 19:56:51.237444 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:56:51.237477 kubelet[1535]: E0209 19:56:51.237472 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:56:51.246115 kubelet[1535]: W0209 19:56:51.246084 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:56:51.246115 kubelet[1535]: E0209 19:56:51.246103 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:56:51.784788 kubelet[1535]: E0209 19:56:51.784747 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:56:51.814621 kubelet[1535]: W0209 19:56:51.814596 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:56:51.814621 kubelet[1535]: E0209 19:56:51.814617 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster 
scope Feb 9 19:56:52.036916 kubelet[1535]: W0209 19:56:52.036831 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:56:52.036916 kubelet[1535]: E0209 19:56:52.036858 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:56:52.785651 kubelet[1535]: E0209 19:56:52.785618 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:56:53.786002 kubelet[1535]: E0209 19:56:53.785963 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:56:54.006209 kubelet[1535]: E0209 19:56:54.006186 1535 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.67.124.136" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:56:54.110201 kubelet[1535]: I0209 19:56:54.110146 1535 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.136" Feb 9 19:56:54.110961 kubelet[1535]: E0209 19:56:54.110946 1535 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.136" Feb 9 19:56:54.111103 kubelet[1535]: E0209 19:56:54.111038 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341a6d9", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.136 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806973657, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 54, 110120177, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341a6d9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:54.111715 kubelet[1535]: E0209 19:56:54.111669 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341caba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.136 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806982842, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 54, 110123694, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341caba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:54.112444 kubelet[1535]: E0209 19:56:54.112393 1535 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.136.17b24a083341d01c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.136", UID:"10.67.124.136", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.136 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.136"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 56, 47, 806984220, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 56, 54, 110125317, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.136.17b24a083341d01c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:56:54.786574 kubelet[1535]: E0209 19:56:54.786552 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:56:55.076296 kubelet[1535]: W0209 19:56:55.076265 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:56:55.076473 kubelet[1535]: E0209 19:56:55.076464 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:56:55.787200 kubelet[1535]: E0209 19:56:55.787170 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:56:55.831723 kubelet[1535]: W0209 19:56:55.831704 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:56:55.831723 kubelet[1535]: E0209 19:56:55.831726 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:56:56.149434 kubelet[1535]: W0209 19:56:56.149407 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:56:56.149434 kubelet[1535]: E0209 19:56:56.149435 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:56:56.787968 kubelet[1535]: E0209 19:56:56.787894 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:56:56.834519 kubelet[1535]: W0209 19:56:56.834501 1535 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:56:56.834607 kubelet[1535]: E0209 19:56:56.834599 1535 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:56:57.775416 kubelet[1535]: I0209 19:56:57.775384 1535 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 9 19:56:57.788739 kubelet[1535]: E0209 19:56:57.788725 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:56:57.865458 kubelet[1535]: E0209 19:56:57.865416 1535 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.136\" not found"
Feb 9 19:56:58.130801 kubelet[1535]: E0209 19:56:58.130779 1535 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.124.136" not found
Feb 9 19:56:58.789439 kubelet[1535]: E0209 19:56:58.789411 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:56:59.198698 kubelet[1535]: E0209 19:56:59.198679 1535 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.124.136" not found
Feb 9 19:56:59.789730 kubelet[1535]: E0209 19:56:59.789707 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:00.409521 kubelet[1535]: E0209 19:57:00.409494 1535 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.124.136\" not found" node="10.67.124.136"
Feb 9 19:57:00.511607 kubelet[1535]: I0209 19:57:00.511588 1535 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.136"
Feb 9 19:57:00.598672 kubelet[1535]: I0209 19:57:00.598654 1535 kubelet_node_status.go:73] "Successfully registered node" node="10.67.124.136"
Feb 9 19:57:00.602491 kubelet[1535]: E0209 19:57:00.602476 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Feb 9 19:57:00.703154 kubelet[1535]: E0209 19:57:00.703049 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Feb 9 19:57:00.771526 sudo[1315]: pam_unix(sudo:session): session closed for user root
Feb 9 19:57:00.773166 sshd[1312]: pam_unix(sshd:session): session closed for user core
Feb 9 19:57:00.774817 systemd[1]: sshd@4-139.178.70.104:22-139.178.89.65:42886.service: Deactivated successfully.
Feb 9 19:57:00.775326 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 19:57:00.775725 systemd-logind[1141]: Session 7 logged out. Waiting for processes to exit.
Feb 9 19:57:00.776371 systemd-logind[1141]: Removed session 7.
Feb 9 19:57:00.790735 kubelet[1535]: E0209 19:57:00.790712 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:00.804115 kubelet[1535]: E0209 19:57:00.804098 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:00.904849 kubelet[1535]: E0209 19:57:00.904821 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.005906 kubelet[1535]: E0209 19:57:01.005369 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.107009 kubelet[1535]: E0209 19:57:01.106990 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.207580 kubelet[1535]: E0209 19:57:01.207547 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.308321 kubelet[1535]: E0209 19:57:01.308285 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.408814 kubelet[1535]: E0209 19:57:01.408790 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.509337 kubelet[1535]: E0209 19:57:01.509310 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.609877 kubelet[1535]: E0209 19:57:01.609796 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.710336 kubelet[1535]: E0209 19:57:01.710311 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.790971 kubelet[1535]: E0209 19:57:01.790947 
1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:01.811314 kubelet[1535]: E0209 19:57:01.811283 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:01.912044 kubelet[1535]: E0209 19:57:01.911969 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.012527 kubelet[1535]: E0209 19:57:02.012500 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.112694 kubelet[1535]: E0209 19:57:02.112675 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.213501 kubelet[1535]: E0209 19:57:02.213273 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.314324 kubelet[1535]: E0209 19:57:02.314297 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.414767 kubelet[1535]: E0209 19:57:02.414742 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.515564 kubelet[1535]: E0209 19:57:02.515292 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.615771 kubelet[1535]: E0209 19:57:02.615745 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.716555 kubelet[1535]: E0209 19:57:02.716535 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.792597 kubelet[1535]: E0209 19:57:02.792249 1535 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:02.817370 kubelet[1535]: E0209 19:57:02.817345 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:02.918628 kubelet[1535]: E0209 19:57:02.918599 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.019101 kubelet[1535]: E0209 19:57:03.019077 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.119275 kubelet[1535]: E0209 19:57:03.119240 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.219779 kubelet[1535]: E0209 19:57:03.219744 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.320408 kubelet[1535]: E0209 19:57:03.320383 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.421165 kubelet[1535]: E0209 19:57:03.421074 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.521666 kubelet[1535]: E0209 19:57:03.521641 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.622377 kubelet[1535]: E0209 19:57:03.622347 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.723088 kubelet[1535]: E0209 19:57:03.723004 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.792573 kubelet[1535]: E0209 19:57:03.792542 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:57:03.823917 kubelet[1535]: E0209 19:57:03.823891 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:03.924611 kubelet[1535]: E0209 19:57:03.924585 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.025245 kubelet[1535]: E0209 19:57:04.025168 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.125764 kubelet[1535]: E0209 19:57:04.125734 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.226313 kubelet[1535]: E0209 19:57:04.226283 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.327024 kubelet[1535]: E0209 19:57:04.326992 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.427569 kubelet[1535]: E0209 19:57:04.427538 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.528102 kubelet[1535]: E0209 19:57:04.528075 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.628690 kubelet[1535]: E0209 19:57:04.628619 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.729233 kubelet[1535]: E0209 19:57:04.729192 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.793124 kubelet[1535]: E0209 19:57:04.793092 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:04.829511 
kubelet[1535]: E0209 19:57:04.829486 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:04.930644 kubelet[1535]: E0209 19:57:04.930570 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.031114 kubelet[1535]: E0209 19:57:05.031087 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.131544 kubelet[1535]: E0209 19:57:05.131468 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.232296 kubelet[1535]: E0209 19:57:05.232222 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.332885 kubelet[1535]: E0209 19:57:05.332859 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.433415 kubelet[1535]: E0209 19:57:05.433364 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.534006 kubelet[1535]: E0209 19:57:05.533935 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.634578 kubelet[1535]: E0209 19:57:05.634554 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.735174 kubelet[1535]: E0209 19:57:05.735126 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.793762 kubelet[1535]: E0209 19:57:05.793742 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:05.836190 kubelet[1535]: E0209 19:57:05.836164 1535 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:05.936877 kubelet[1535]: E0209 19:57:05.936846 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.037959 kubelet[1535]: E0209 19:57:06.037932 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.138475 kubelet[1535]: E0209 19:57:06.138380 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.238936 kubelet[1535]: E0209 19:57:06.238907 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.339651 kubelet[1535]: E0209 19:57:06.339623 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.440321 kubelet[1535]: E0209 19:57:06.440254 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.540992 kubelet[1535]: E0209 19:57:06.540965 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.641581 kubelet[1535]: E0209 19:57:06.641548 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.742202 kubelet[1535]: E0209 19:57:06.742110 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.794664 kubelet[1535]: E0209 19:57:06.794635 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:06.842984 kubelet[1535]: E0209 19:57:06.842959 1535 kubelet_node_status.go:458] "Error getting the current node 
from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:06.943669 kubelet[1535]: E0209 19:57:06.943643 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:07.044235 kubelet[1535]: E0209 19:57:07.044219 1535 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.124.136\" not found" Feb 9 19:57:07.144911 kubelet[1535]: I0209 19:57:07.144896 1535 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:57:07.145286 env[1156]: time="2024-02-09T19:57:07.145221645Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:57:07.145492 kubelet[1535]: I0209 19:57:07.145369 1535 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:57:07.781904 kubelet[1535]: E0209 19:57:07.781878 1535 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:07.793267 kubelet[1535]: I0209 19:57:07.793237 1535 apiserver.go:52] "Watching apiserver" Feb 9 19:57:07.795032 kubelet[1535]: E0209 19:57:07.795013 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:07.795495 kubelet[1535]: I0209 19:57:07.795475 1535 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:57:07.795559 kubelet[1535]: I0209 19:57:07.795524 1535 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:57:07.800010 kubelet[1535]: I0209 19:57:07.799287 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27adf0dc-4267-4a1d-843e-15d21f61b791-kube-proxy\") pod \"kube-proxy-fkcpl\" (UID: \"27adf0dc-4267-4a1d-843e-15d21f61b791\") " pod="kube-system/kube-proxy-fkcpl" Feb 9 19:57:07.800010 kubelet[1535]: 
I0209 19:57:07.799574 1535 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:57:07.799885 systemd[1]: Created slice kubepods-burstable-pod947522a3_86b0_4997_be54_94d8502b096e.slice. Feb 9 19:57:07.809907 systemd[1]: Created slice kubepods-besteffort-pod27adf0dc_4267_4a1d_843e_15d21f61b791.slice. Feb 9 19:57:07.900311 kubelet[1535]: I0209 19:57:07.900286 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-xtables-lock\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.900652 kubelet[1535]: I0209 19:57:07.900640 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/947522a3-86b0-4997-be54-94d8502b096e-clustermesh-secrets\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.900760 kubelet[1535]: I0209 19:57:07.900750 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-host-proc-sys-net\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.900857 kubelet[1535]: I0209 19:57:07.900848 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mffjp\" (UniqueName: \"kubernetes.io/projected/947522a3-86b0-4997-be54-94d8502b096e-kube-api-access-mffjp\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.900953 kubelet[1535]: I0209 19:57:07.900945 1535 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9fpw\" (UniqueName: \"kubernetes.io/projected/27adf0dc-4267-4a1d-843e-15d21f61b791-kube-api-access-r9fpw\") pod \"kube-proxy-fkcpl\" (UID: \"27adf0dc-4267-4a1d-843e-15d21f61b791\") " pod="kube-system/kube-proxy-fkcpl" Feb 9 19:57:07.901064 kubelet[1535]: I0209 19:57:07.901054 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-hostproc\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.901155 kubelet[1535]: I0209 19:57:07.901146 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-bpf-maps\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.901245 kubelet[1535]: I0209 19:57:07.901237 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/947522a3-86b0-4997-be54-94d8502b096e-cilium-config-path\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.901338 kubelet[1535]: I0209 19:57:07.901329 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-lib-modules\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.901455 kubelet[1535]: I0209 19:57:07.901445 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cilium-run\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.902060 kubelet[1535]: I0209 19:57:07.902049 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cilium-cgroup\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.902163 kubelet[1535]: I0209 19:57:07.902154 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cni-path\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.902258 kubelet[1535]: I0209 19:57:07.902248 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-etc-cni-netd\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.902355 kubelet[1535]: I0209 19:57:07.902346 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-host-proc-sys-kernel\") pod \"cilium-4gkgs\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.902457 kubelet[1535]: I0209 19:57:07.902447 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/947522a3-86b0-4997-be54-94d8502b096e-hubble-tls\") pod \"cilium-4gkgs\" (UID: 
\"947522a3-86b0-4997-be54-94d8502b096e\") " pod="kube-system/cilium-4gkgs" Feb 9 19:57:07.902555 kubelet[1535]: I0209 19:57:07.902545 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27adf0dc-4267-4a1d-843e-15d21f61b791-xtables-lock\") pod \"kube-proxy-fkcpl\" (UID: \"27adf0dc-4267-4a1d-843e-15d21f61b791\") " pod="kube-system/kube-proxy-fkcpl" Feb 9 19:57:07.902641 kubelet[1535]: I0209 19:57:07.902633 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27adf0dc-4267-4a1d-843e-15d21f61b791-lib-modules\") pod \"kube-proxy-fkcpl\" (UID: \"27adf0dc-4267-4a1d-843e-15d21f61b791\") " pod="kube-system/kube-proxy-fkcpl" Feb 9 19:57:07.902727 kubelet[1535]: I0209 19:57:07.902716 1535 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:57:08.110183 env[1156]: time="2024-02-09T19:57:08.109726067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gkgs,Uid:947522a3-86b0-4997-be54-94d8502b096e,Namespace:kube-system,Attempt:0,}" Feb 9 19:57:08.416759 env[1156]: time="2024-02-09T19:57:08.416694728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fkcpl,Uid:27adf0dc-4267-4a1d-843e-15d21f61b791,Namespace:kube-system,Attempt:0,}" Feb 9 19:57:08.795421 kubelet[1535]: E0209 19:57:08.795390 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:08.829953 env[1156]: time="2024-02-09T19:57:08.829901429Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:08.830482 env[1156]: time="2024-02-09T19:57:08.830469608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:57:08.830991 env[1156]: time="2024-02-09T19:57:08.830975708Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:08.832033 env[1156]: time="2024-02-09T19:57:08.832020756Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:08.833310 env[1156]: time="2024-02-09T19:57:08.833294883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:08.834725 env[1156]: time="2024-02-09T19:57:08.834710514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:08.835360 env[1156]: time="2024-02-09T19:57:08.835347387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:08.838141 env[1156]: time="2024-02-09T19:57:08.838086282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:08.855220 env[1156]: time="2024-02-09T19:57:08.845138601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:57:08.855220 env[1156]: time="2024-02-09T19:57:08.845156499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:57:08.855220 env[1156]: time="2024-02-09T19:57:08.845163695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:57:08.855220 env[1156]: time="2024-02-09T19:57:08.845243774Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ce1f1ac0336c7bc65ae6577de998d9ed9541f35ce2cd1eadf1b94342d217150 pid=1652 runtime=io.containerd.runc.v2 Feb 9 19:57:08.855406 env[1156]: time="2024-02-09T19:57:08.844958796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:57:08.855406 env[1156]: time="2024-02-09T19:57:08.844989688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:57:08.855406 env[1156]: time="2024-02-09T19:57:08.844997120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:57:08.855406 env[1156]: time="2024-02-09T19:57:08.845068944Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9 pid=1649 runtime=io.containerd.runc.v2 Feb 9 19:57:08.864174 systemd[1]: Started cri-containerd-4ce1f1ac0336c7bc65ae6577de998d9ed9541f35ce2cd1eadf1b94342d217150.scope. Feb 9 19:57:08.865263 systemd[1]: Started cri-containerd-a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9.scope. 
Feb 9 19:57:08.896355 env[1156]: time="2024-02-09T19:57:08.896323682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fkcpl,Uid:27adf0dc-4267-4a1d-843e-15d21f61b791,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ce1f1ac0336c7bc65ae6577de998d9ed9541f35ce2cd1eadf1b94342d217150\"" Feb 9 19:57:08.897776 env[1156]: time="2024-02-09T19:57:08.897750062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:57:08.901658 env[1156]: time="2024-02-09T19:57:08.901631729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gkgs,Uid:947522a3-86b0-4997-be54-94d8502b096e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\"" Feb 9 19:57:09.009938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692179009.mount: Deactivated successfully. Feb 9 19:57:09.796027 kubelet[1535]: E0209 19:57:09.795990 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:10.031887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302672648.mount: Deactivated successfully. 
Feb 9 19:57:10.469523 env[1156]: time="2024-02-09T19:57:10.469471358Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:10.470171 env[1156]: time="2024-02-09T19:57:10.470154536Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:10.470984 env[1156]: time="2024-02-09T19:57:10.470967208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:10.471949 env[1156]: time="2024-02-09T19:57:10.471937463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:10.472366 env[1156]: time="2024-02-09T19:57:10.472346075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:57:10.473012 env[1156]: time="2024-02-09T19:57:10.472992750Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:57:10.474002 env[1156]: time="2024-02-09T19:57:10.473983201Z" level=info msg="CreateContainer within sandbox \"4ce1f1ac0336c7bc65ae6577de998d9ed9541f35ce2cd1eadf1b94342d217150\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:57:10.482323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571588560.mount: Deactivated successfully. 
Feb 9 19:57:10.485081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2501356216.mount: Deactivated successfully.
Feb 9 19:57:10.487173 env[1156]: time="2024-02-09T19:57:10.487147742Z" level=info msg="CreateContainer within sandbox \"4ce1f1ac0336c7bc65ae6577de998d9ed9541f35ce2cd1eadf1b94342d217150\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef3df5546fbf648a99a0b8742d20c1d4a59e043f064c113f2c7671f62db8f53e\""
Feb 9 19:57:10.487810 env[1156]: time="2024-02-09T19:57:10.487785934Z" level=info msg="StartContainer for \"ef3df5546fbf648a99a0b8742d20c1d4a59e043f064c113f2c7671f62db8f53e\""
Feb 9 19:57:10.498331 systemd[1]: Started cri-containerd-ef3df5546fbf648a99a0b8742d20c1d4a59e043f064c113f2c7671f62db8f53e.scope.
Feb 9 19:57:10.520347 env[1156]: time="2024-02-09T19:57:10.520313932Z" level=info msg="StartContainer for \"ef3df5546fbf648a99a0b8742d20c1d4a59e043f064c113f2c7671f62db8f53e\" returns successfully"
Feb 9 19:57:10.796548 kubelet[1535]: E0209 19:57:10.796521 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:11.100321 kubelet[1535]: I0209 19:57:11.100230 1535 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fkcpl" podStartSLOduration=-9.22337202575458e+09 pod.CreationTimestamp="2024-02-09 19:57:00 +0000 UTC" firstStartedPulling="2024-02-09 19:57:08.897359685 +0000 UTC m=+21.632387993" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:57:11.100112277 +0000 UTC m=+23.835140594" watchObservedRunningTime="2024-02-09 19:57:11.100196961 +0000 UTC m=+23.835225275"
Feb 9 19:57:11.797227 kubelet[1535]: E0209 19:57:11.797179 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:12.797500 kubelet[1535]: E0209 19:57:12.797458 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:13.797773 kubelet[1535]: E0209 19:57:13.797741 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:14.798227 kubelet[1535]: E0209 19:57:14.798204 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:15.398654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819422797.mount: Deactivated successfully.
Feb 9 19:57:15.798310 kubelet[1535]: E0209 19:57:15.798290 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:16.798965 kubelet[1535]: E0209 19:57:16.798929 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:17.799232 kubelet[1535]: E0209 19:57:17.799148 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:17.908321 env[1156]: time="2024-02-09T19:57:17.908278489Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:17.941672 env[1156]: time="2024-02-09T19:57:17.941632499Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:17.956510 env[1156]: time="2024-02-09T19:57:17.956475776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:17.957300 env[1156]: time="2024-02-09T19:57:17.957273125Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 9 19:57:17.959361 env[1156]: time="2024-02-09T19:57:17.959334053Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:57:18.018194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831067407.mount: Deactivated successfully.
Feb 9 19:57:18.021894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401934092.mount: Deactivated successfully.
Feb 9 19:57:18.081489 env[1156]: time="2024-02-09T19:57:18.081058443Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\""
Feb 9 19:57:18.081489 env[1156]: time="2024-02-09T19:57:18.081421516Z" level=info msg="StartContainer for \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\""
Feb 9 19:57:18.093678 systemd[1]: Started cri-containerd-22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a.scope.
Feb 9 19:57:18.138591 env[1156]: time="2024-02-09T19:57:18.138555060Z" level=info msg="StartContainer for \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\" returns successfully"
Feb 9 19:57:18.160425 systemd[1]: cri-containerd-22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a.scope: Deactivated successfully.
Feb 9 19:57:18.799622 kubelet[1535]: E0209 19:57:18.799586 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:18.832814 env[1156]: time="2024-02-09T19:57:18.832770505Z" level=info msg="shim disconnected" id=22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a
Feb 9 19:57:18.832814 env[1156]: time="2024-02-09T19:57:18.832811104Z" level=warning msg="cleaning up after shim disconnected" id=22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a namespace=k8s.io
Feb 9 19:57:18.832961 env[1156]: time="2024-02-09T19:57:18.832822648Z" level=info msg="cleaning up dead shim"
Feb 9 19:57:18.838893 env[1156]: time="2024-02-09T19:57:18.838863786Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:57:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1919 runtime=io.containerd.runc.v2\n"
Feb 9 19:57:19.017204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a-rootfs.mount: Deactivated successfully.
Feb 9 19:57:19.117133 env[1156]: time="2024-02-09T19:57:19.116736239Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:57:19.124197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670584519.mount: Deactivated successfully.
Feb 9 19:57:19.127976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853460011.mount: Deactivated successfully.
Feb 9 19:57:19.130768 env[1156]: time="2024-02-09T19:57:19.130735961Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\""
Feb 9 19:57:19.131355 env[1156]: time="2024-02-09T19:57:19.131337871Z" level=info msg="StartContainer for \"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\""
Feb 9 19:57:19.142710 systemd[1]: Started cri-containerd-d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8.scope.
Feb 9 19:57:19.165719 env[1156]: time="2024-02-09T19:57:19.165687813Z" level=info msg="StartContainer for \"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\" returns successfully"
Feb 9 19:57:19.172889 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:57:19.173128 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:57:19.173343 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 19:57:19.175272 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:57:19.178763 systemd[1]: cri-containerd-d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8.scope: Deactivated successfully.
Feb 9 19:57:19.193104 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:57:19.224530 env[1156]: time="2024-02-09T19:57:19.224495791Z" level=info msg="shim disconnected" id=d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8
Feb 9 19:57:19.224686 env[1156]: time="2024-02-09T19:57:19.224674992Z" level=warning msg="cleaning up after shim disconnected" id=d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8 namespace=k8s.io
Feb 9 19:57:19.224734 env[1156]: time="2024-02-09T19:57:19.224724270Z" level=info msg="cleaning up dead shim"
Feb 9 19:57:19.230548 env[1156]: time="2024-02-09T19:57:19.230516584Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:57:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1985 runtime=io.containerd.runc.v2\n"
Feb 9 19:57:19.800406 kubelet[1535]: E0209 19:57:19.800380 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:20.119649 env[1156]: time="2024-02-09T19:57:20.119347615Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:57:20.183624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222629955.mount: Deactivated successfully.
Feb 9 19:57:20.187806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651901798.mount: Deactivated successfully.
Feb 9 19:57:20.214922 env[1156]: time="2024-02-09T19:57:20.214867855Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\""
Feb 9 19:57:20.215297 env[1156]: time="2024-02-09T19:57:20.215275671Z" level=info msg="StartContainer for \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\""
Feb 9 19:57:20.228580 systemd[1]: Started cri-containerd-53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886.scope.
Feb 9 19:57:20.271241 systemd[1]: cri-containerd-53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886.scope: Deactivated successfully.
Feb 9 19:57:20.274480 env[1156]: time="2024-02-09T19:57:20.274454155Z" level=info msg="StartContainer for \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\" returns successfully"
Feb 9 19:57:20.331130 env[1156]: time="2024-02-09T19:57:20.331103842Z" level=info msg="shim disconnected" id=53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886
Feb 9 19:57:20.331285 env[1156]: time="2024-02-09T19:57:20.331273277Z" level=warning msg="cleaning up after shim disconnected" id=53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886 namespace=k8s.io
Feb 9 19:57:20.331332 env[1156]: time="2024-02-09T19:57:20.331322679Z" level=info msg="cleaning up dead shim"
Feb 9 19:57:20.336017 env[1156]: time="2024-02-09T19:57:20.335995108Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:57:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2043 runtime=io.containerd.runc.v2\n"
Feb 9 19:57:20.800798 kubelet[1535]: E0209 19:57:20.800751 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:21.120895 env[1156]: time="2024-02-09T19:57:21.120808534Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:57:21.143601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829416812.mount: Deactivated successfully.
Feb 9 19:57:21.145717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540076767.mount: Deactivated successfully.
Feb 9 19:57:21.147489 env[1156]: time="2024-02-09T19:57:21.147464363Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\""
Feb 9 19:57:21.147893 env[1156]: time="2024-02-09T19:57:21.147879503Z" level=info msg="StartContainer for \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\""
Feb 9 19:57:21.159195 systemd[1]: Started cri-containerd-88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932.scope.
Feb 9 19:57:21.176766 systemd[1]: cri-containerd-88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932.scope: Deactivated successfully.
Feb 9 19:57:21.177620 env[1156]: time="2024-02-09T19:57:21.177549721Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947522a3_86b0_4997_be54_94d8502b096e.slice/cri-containerd-88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932.scope/memory.events\": no such file or directory"
Feb 9 19:57:21.188859 env[1156]: time="2024-02-09T19:57:21.188829337Z" level=info msg="StartContainer for \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\" returns successfully"
Feb 9 19:57:21.220879 env[1156]: time="2024-02-09T19:57:21.220848771Z" level=info msg="shim disconnected" id=88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932
Feb 9 19:57:21.220879 env[1156]: time="2024-02-09T19:57:21.220877409Z" level=warning msg="cleaning up after shim disconnected" id=88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932 namespace=k8s.io
Feb 9 19:57:21.220879 env[1156]: time="2024-02-09T19:57:21.220883907Z" level=info msg="cleaning up dead shim"
Feb 9 19:57:21.226831 env[1156]: time="2024-02-09T19:57:21.226795512Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:57:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2097 runtime=io.containerd.runc.v2\n"
Feb 9 19:57:21.801294 kubelet[1535]: E0209 19:57:21.801255 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:22.123411 env[1156]: time="2024-02-09T19:57:22.123196378Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:57:22.130322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2633385311.mount: Deactivated successfully.
Feb 9 19:57:22.133231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076281606.mount: Deactivated successfully.
Feb 9 19:57:22.135067 env[1156]: time="2024-02-09T19:57:22.135043691Z" level=info msg="CreateContainer within sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\""
Feb 9 19:57:22.135510 env[1156]: time="2024-02-09T19:57:22.135493395Z" level=info msg="StartContainer for \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\""
Feb 9 19:57:22.144251 systemd[1]: Started cri-containerd-54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0.scope.
Feb 9 19:57:22.166951 env[1156]: time="2024-02-09T19:57:22.166924347Z" level=info msg="StartContainer for \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\" returns successfully"
Feb 9 19:57:22.233447 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 19:57:22.249728 kubelet[1535]: I0209 19:57:22.249180 1535 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 19:57:22.495450 kernel: Initializing XFRM netlink socket
Feb 9 19:57:22.497439 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 19:57:22.801938 kubelet[1535]: E0209 19:57:22.801912 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:23.135180 kubelet[1535]: I0209 19:57:23.135092 1535 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4gkgs" podStartSLOduration=-9.223372013719719e+09 pod.CreationTimestamp="2024-02-09 19:57:00 +0000 UTC" firstStartedPulling="2024-02-09 19:57:08.902559194 +0000 UTC m=+21.637587498" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:57:23.134907412 +0000 UTC m=+35.869935728" watchObservedRunningTime="2024-02-09 19:57:23.135057822 +0000 UTC m=+35.870086131"
Feb 9 19:57:23.802415 kubelet[1535]: E0209 19:57:23.802385 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:24.099276 systemd-networkd[1065]: cilium_host: Link UP
Feb 9 19:57:24.100355 systemd-networkd[1065]: cilium_net: Link UP
Feb 9 19:57:24.102491 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 19:57:24.102526 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 19:57:24.102735 systemd-networkd[1065]: cilium_net: Gained carrier
Feb 9 19:57:24.102835 systemd-networkd[1065]: cilium_host: Gained carrier
Feb 9 19:57:24.210071 systemd-networkd[1065]: cilium_vxlan: Link UP
Feb 9 19:57:24.210075 systemd-networkd[1065]: cilium_vxlan: Gained carrier
Feb 9 19:57:24.261539 systemd-networkd[1065]: cilium_net: Gained IPv6LL
Feb 9 19:57:24.349447 kernel: NET: Registered PF_ALG protocol family
Feb 9 19:57:24.803493 kubelet[1535]: E0209 19:57:24.803468 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:24.810493 systemd-networkd[1065]: lxc_health: Link UP
Feb 9 19:57:24.854688 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:57:24.851901 systemd-networkd[1065]: lxc_health: Gained carrier
Feb 9 19:57:24.933551 systemd-networkd[1065]: cilium_host: Gained IPv6LL
Feb 9 19:57:25.445557 systemd-networkd[1065]: cilium_vxlan: Gained IPv6LL
Feb 9 19:57:25.804209 kubelet[1535]: E0209 19:57:25.804183 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:26.213514 systemd-networkd[1065]: lxc_health: Gained IPv6LL
Feb 9 19:57:26.805040 kubelet[1535]: E0209 19:57:26.805018 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:27.781786 kubelet[1535]: E0209 19:57:27.781754 1535 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:27.805989 kubelet[1535]: E0209 19:57:27.805960 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:28.242227 kubelet[1535]: I0209 19:57:28.242203 1535 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:57:28.245660 systemd[1]: Created slice kubepods-besteffort-poddcf9add7_8751_45ac_9ab7_c5e63f5d1135.slice.
Feb 9 19:57:28.318764 kubelet[1535]: I0209 19:57:28.318735 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sd2q\" (UniqueName: \"kubernetes.io/projected/dcf9add7-8751-45ac-9ab7-c5e63f5d1135-kube-api-access-8sd2q\") pod \"nginx-deployment-8ffc5cf85-pdmm9\" (UID: \"dcf9add7-8751-45ac-9ab7-c5e63f5d1135\") " pod="default/nginx-deployment-8ffc5cf85-pdmm9"
Feb 9 19:57:28.548338 env[1156]: time="2024-02-09T19:57:28.548300232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-pdmm9,Uid:dcf9add7-8751-45ac-9ab7-c5e63f5d1135,Namespace:default,Attempt:0,}"
Feb 9 19:57:28.589040 systemd-networkd[1065]: lxc0a42341a7ebb: Link UP
Feb 9 19:57:28.595453 kernel: eth0: renamed from tmp6827c
Feb 9 19:57:28.598797 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:57:28.598846 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0a42341a7ebb: link becomes ready
Feb 9 19:57:28.598933 systemd-networkd[1065]: lxc0a42341a7ebb: Gained carrier
Feb 9 19:57:28.752539 env[1156]: time="2024-02-09T19:57:28.752484749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:57:28.752689 env[1156]: time="2024-02-09T19:57:28.752670366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:57:28.752789 env[1156]: time="2024-02-09T19:57:28.752763407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:57:28.752983 env[1156]: time="2024-02-09T19:57:28.752964233Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6827c7e10b7eba7167e4388b099da6d5b697cf0bfb44fa6a321bcafe9f5923f3 pid=2637 runtime=io.containerd.runc.v2
Feb 9 19:57:28.761780 systemd[1]: Started cri-containerd-6827c7e10b7eba7167e4388b099da6d5b697cf0bfb44fa6a321bcafe9f5923f3.scope.
Feb 9 19:57:28.774094 systemd-resolved[1106]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 19:57:28.792791 env[1156]: time="2024-02-09T19:57:28.792762711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-pdmm9,Uid:dcf9add7-8751-45ac-9ab7-c5e63f5d1135,Namespace:default,Attempt:0,} returns sandbox id \"6827c7e10b7eba7167e4388b099da6d5b697cf0bfb44fa6a321bcafe9f5923f3\""
Feb 9 19:57:28.793673 env[1156]: time="2024-02-09T19:57:28.793654760Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 19:57:28.806742 kubelet[1535]: E0209 19:57:28.806312 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:29.425620 systemd[1]: run-containerd-runc-k8s.io-6827c7e10b7eba7167e4388b099da6d5b697cf0bfb44fa6a321bcafe9f5923f3-runc.QLLn4f.mount: Deactivated successfully.
Feb 9 19:57:29.806445 kubelet[1535]: E0209 19:57:29.806411 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:30.629575 systemd-networkd[1065]: lxc0a42341a7ebb: Gained IPv6LL
Feb 9 19:57:30.806676 kubelet[1535]: E0209 19:57:30.806643 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:31.806997 kubelet[1535]: E0209 19:57:31.806974 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:32.223804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534022606.mount: Deactivated successfully.
Feb 9 19:57:32.807074 kubelet[1535]: E0209 19:57:32.807039 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:33.081779 env[1156]: time="2024-02-09T19:57:33.081534869Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:33.082542 env[1156]: time="2024-02-09T19:57:33.082524120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:33.083388 env[1156]: time="2024-02-09T19:57:33.083373104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:33.084328 env[1156]: time="2024-02-09T19:57:33.084313145Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:33.084807 env[1156]: time="2024-02-09T19:57:33.084790059Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 9 19:57:33.085883 env[1156]: time="2024-02-09T19:57:33.085860145Z" level=info msg="CreateContainer within sandbox \"6827c7e10b7eba7167e4388b099da6d5b697cf0bfb44fa6a321bcafe9f5923f3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 9 19:57:33.091349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824266617.mount: Deactivated successfully.
Feb 9 19:57:33.093936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989031157.mount: Deactivated successfully.
Feb 9 19:57:33.105383 env[1156]: time="2024-02-09T19:57:33.105351383Z" level=info msg="CreateContainer within sandbox \"6827c7e10b7eba7167e4388b099da6d5b697cf0bfb44fa6a321bcafe9f5923f3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4d0a9bcce8d796dfb1c84ece5cb24710b85f68560ff5d5efb1307bfa24374d04\""
Feb 9 19:57:33.105725 env[1156]: time="2024-02-09T19:57:33.105712257Z" level=info msg="StartContainer for \"4d0a9bcce8d796dfb1c84ece5cb24710b85f68560ff5d5efb1307bfa24374d04\""
Feb 9 19:57:33.117273 systemd[1]: Started cri-containerd-4d0a9bcce8d796dfb1c84ece5cb24710b85f68560ff5d5efb1307bfa24374d04.scope.
Feb 9 19:57:33.137696 env[1156]: time="2024-02-09T19:57:33.137668589Z" level=info msg="StartContainer for \"4d0a9bcce8d796dfb1c84ece5cb24710b85f68560ff5d5efb1307bfa24374d04\" returns successfully"
Feb 9 19:57:33.807324 kubelet[1535]: E0209 19:57:33.807289 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:34.143458 kubelet[1535]: I0209 19:57:34.143190 1535 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-pdmm9" podStartSLOduration=-9.223372030711617e+09 pod.CreationTimestamp="2024-02-09 19:57:28 +0000 UTC" firstStartedPulling="2024-02-09 19:57:28.793362226 +0000 UTC m=+41.528390531" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:57:34.142948866 +0000 UTC m=+46.877977185" watchObservedRunningTime="2024-02-09 19:57:34.14315977 +0000 UTC m=+46.878188086"
Feb 9 19:57:34.808366 kubelet[1535]: E0209 19:57:34.808341 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:35.809684 kubelet[1535]: E0209 19:57:35.809649 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:36.809990 kubelet[1535]: E0209 19:57:36.809967 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:37.811293 kubelet[1535]: E0209 19:57:37.811266 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:38.812072 kubelet[1535]: E0209 19:57:38.812044 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:39.812964 kubelet[1535]: E0209 19:57:39.812938 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:40.755039 kubelet[1535]: I0209 19:57:40.755009 1535 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:57:40.758652 systemd[1]: Created slice kubepods-besteffort-podace50d96_c043_4abc_9acd_606ae7325f35.slice.
Feb 9 19:57:40.784595 kubelet[1535]: I0209 19:57:40.784504 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ace50d96-c043-4abc-9acd-606ae7325f35-data\") pod \"nfs-server-provisioner-0\" (UID: \"ace50d96-c043-4abc-9acd-606ae7325f35\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:57:40.784595 kubelet[1535]: I0209 19:57:40.784541 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvbbc\" (UniqueName: \"kubernetes.io/projected/ace50d96-c043-4abc-9acd-606ae7325f35-kube-api-access-vvbbc\") pod \"nfs-server-provisioner-0\" (UID: \"ace50d96-c043-4abc-9acd-606ae7325f35\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:57:40.813660 kubelet[1535]: E0209 19:57:40.813619 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:41.061351 env[1156]: time="2024-02-09T19:57:41.061326577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ace50d96-c043-4abc-9acd-606ae7325f35,Namespace:default,Attempt:0,}"
Feb 9 19:57:41.185132 systemd-networkd[1065]: lxce7bcf02ac96d: Link UP
Feb 9 19:57:41.190446 kernel: eth0: renamed from tmp3cbdb
Feb 9 19:57:41.197615 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:57:41.197685 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce7bcf02ac96d: link becomes ready
Feb 9 19:57:41.197743 systemd-networkd[1065]: lxce7bcf02ac96d: Gained carrier
Feb 9 19:57:41.347052 env[1156]: time="2024-02-09T19:57:41.346637405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:57:41.347052 env[1156]: time="2024-02-09T19:57:41.346661298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:57:41.347052 env[1156]: time="2024-02-09T19:57:41.346668662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:57:41.347052 env[1156]: time="2024-02-09T19:57:41.346766285Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cbdb1f00b7258170a1b0c6ba8327af8a94dcee3f07a3fab609ed068e04db6eb pid=2821 runtime=io.containerd.runc.v2
Feb 9 19:57:41.359021 systemd[1]: Started cri-containerd-3cbdb1f00b7258170a1b0c6ba8327af8a94dcee3f07a3fab609ed068e04db6eb.scope.
Feb 9 19:57:41.369465 systemd-resolved[1106]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 19:57:41.389036 env[1156]: time="2024-02-09T19:57:41.389002240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ace50d96-c043-4abc-9acd-606ae7325f35,Namespace:default,Attempt:0,} returns sandbox id \"3cbdb1f00b7258170a1b0c6ba8327af8a94dcee3f07a3fab609ed068e04db6eb\""
Feb 9 19:57:41.390002 env[1156]: time="2024-02-09T19:57:41.389988874Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 19:57:41.814417 kubelet[1535]: E0209 19:57:41.814377 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:57:41.891379 systemd[1]: run-containerd-runc-k8s.io-3cbdb1f00b7258170a1b0c6ba8327af8a94dcee3f07a3fab609ed068e04db6eb-runc.2ex2pN.mount: Deactivated successfully.
Feb 9 19:57:42.814932 kubelet[1535]: E0209 19:57:42.814889 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:43.109614 systemd-networkd[1065]: lxce7bcf02ac96d: Gained IPv6LL Feb 9 19:57:43.798238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635208765.mount: Deactivated successfully. Feb 9 19:57:43.815382 kubelet[1535]: E0209 19:57:43.815346 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:44.815573 kubelet[1535]: E0209 19:57:44.815546 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:45.755849 env[1156]: time="2024-02-09T19:57:45.755802459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:45.756810 env[1156]: time="2024-02-09T19:57:45.756784731Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:45.758085 env[1156]: time="2024-02-09T19:57:45.758052351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:45.759270 env[1156]: time="2024-02-09T19:57:45.759247727Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:45.759948 env[1156]: time="2024-02-09T19:57:45.759930489Z" level=info msg="PullImage 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:57:45.761990 env[1156]: time="2024-02-09T19:57:45.761961060Z" level=info msg="CreateContainer within sandbox \"3cbdb1f00b7258170a1b0c6ba8327af8a94dcee3f07a3fab609ed068e04db6eb\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:57:45.767151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245742567.mount: Deactivated successfully. Feb 9 19:57:45.770099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584566578.mount: Deactivated successfully. Feb 9 19:57:45.772779 env[1156]: time="2024-02-09T19:57:45.772750660Z" level=info msg="CreateContainer within sandbox \"3cbdb1f00b7258170a1b0c6ba8327af8a94dcee3f07a3fab609ed068e04db6eb\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"151c53129092d53205b16b1ed9488c57a4fcf20200b44bfad66ca3e36dff9c7a\"" Feb 9 19:57:45.773184 env[1156]: time="2024-02-09T19:57:45.773165496Z" level=info msg="StartContainer for \"151c53129092d53205b16b1ed9488c57a4fcf20200b44bfad66ca3e36dff9c7a\"" Feb 9 19:57:45.789911 systemd[1]: Started cri-containerd-151c53129092d53205b16b1ed9488c57a4fcf20200b44bfad66ca3e36dff9c7a.scope. 
Feb 9 19:57:45.812516 env[1156]: time="2024-02-09T19:57:45.812474276Z" level=info msg="StartContainer for \"151c53129092d53205b16b1ed9488c57a4fcf20200b44bfad66ca3e36dff9c7a\" returns successfully" Feb 9 19:57:45.818673 kubelet[1535]: E0209 19:57:45.818551 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:46.819166 kubelet[1535]: E0209 19:57:46.819136 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:47.781535 kubelet[1535]: E0209 19:57:47.781488 1535 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:47.820359 kubelet[1535]: E0209 19:57:47.820326 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:48.820862 kubelet[1535]: E0209 19:57:48.820817 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:49.821392 kubelet[1535]: E0209 19:57:49.821337 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:50.822043 kubelet[1535]: E0209 19:57:50.822015 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:51.822463 kubelet[1535]: E0209 19:57:51.822425 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:52.823078 kubelet[1535]: E0209 19:57:52.823029 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:53.823609 kubelet[1535]: E0209 19:57:53.823563 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
19:57:54.824109 kubelet[1535]: E0209 19:57:54.824047 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:55.521794 kubelet[1535]: I0209 19:57:55.521715 1535 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372021333122e+09 pod.CreationTimestamp="2024-02-09 19:57:40 +0000 UTC" firstStartedPulling="2024-02-09 19:57:41.389773455 +0000 UTC m=+54.124801759" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:57:46.164470897 +0000 UTC m=+58.899499214" watchObservedRunningTime="2024-02-09 19:57:55.521654477 +0000 UTC m=+68.256682794" Feb 9 19:57:55.521989 kubelet[1535]: I0209 19:57:55.521819 1535 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:57:55.526142 systemd[1]: Created slice kubepods-besteffort-pode7efed1a_c8a9_4f2d_8d41_30727c7335ba.slice. Feb 9 19:57:55.660523 kubelet[1535]: I0209 19:57:55.660493 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgdtc\" (UniqueName: \"kubernetes.io/projected/e7efed1a-c8a9-4f2d-8d41-30727c7335ba-kube-api-access-jgdtc\") pod \"test-pod-1\" (UID: \"e7efed1a-c8a9-4f2d-8d41-30727c7335ba\") " pod="default/test-pod-1" Feb 9 19:57:55.660698 kubelet[1535]: I0209 19:57:55.660526 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d2755722-d307-49ff-aa19-4088130be139\" (UniqueName: \"kubernetes.io/nfs/e7efed1a-c8a9-4f2d-8d41-30727c7335ba-pvc-d2755722-d307-49ff-aa19-4088130be139\") pod \"test-pod-1\" (UID: \"e7efed1a-c8a9-4f2d-8d41-30727c7335ba\") " pod="default/test-pod-1" Feb 9 19:57:55.826356 kubelet[1535]: E0209 19:57:55.826228 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:55.979448 kernel: FS-Cache: Loaded Feb 9 
19:57:56.007456 kernel: RPC: Registered named UNIX socket transport module. Feb 9 19:57:56.007563 kernel: RPC: Registered udp transport module. Feb 9 19:57:56.007591 kernel: RPC: Registered tcp transport module. Feb 9 19:57:56.007616 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 19:57:56.051457 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:57:56.174708 kernel: NFS: Registering the id_resolver key type Feb 9 19:57:56.174837 kernel: Key type id_resolver registered Feb 9 19:57:56.174877 kernel: Key type id_legacy registered Feb 9 19:57:56.203241 nfsidmap[2965]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 19:57:56.204939 nfsidmap[2966]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 19:57:56.428481 env[1156]: time="2024-02-09T19:57:56.428424986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e7efed1a-c8a9-4f2d-8d41-30727c7335ba,Namespace:default,Attempt:0,}" Feb 9 19:57:56.462684 systemd-networkd[1065]: lxc38037b0bec1a: Link UP Feb 9 19:57:56.469502 kernel: eth0: renamed from tmp11f3b Feb 9 19:57:56.475685 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:57:56.475763 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc38037b0bec1a: link becomes ready Feb 9 19:57:56.475743 systemd-networkd[1065]: lxc38037b0bec1a: Gained carrier Feb 9 19:57:56.674226 env[1156]: time="2024-02-09T19:57:56.674050366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:57:56.674226 env[1156]: time="2024-02-09T19:57:56.674081164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:57:56.674226 env[1156]: time="2024-02-09T19:57:56.674091653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:57:56.674508 env[1156]: time="2024-02-09T19:57:56.674468673Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11f3bb1ee2c5838d0c4f08a61fc1ec8c4aa740b4d21217876082daae442c5350 pid=3007 runtime=io.containerd.runc.v2 Feb 9 19:57:56.682693 systemd[1]: Started cri-containerd-11f3bb1ee2c5838d0c4f08a61fc1ec8c4aa740b4d21217876082daae442c5350.scope. Feb 9 19:57:56.694174 systemd-resolved[1106]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:57:56.720472 env[1156]: time="2024-02-09T19:57:56.720071949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e7efed1a-c8a9-4f2d-8d41-30727c7335ba,Namespace:default,Attempt:0,} returns sandbox id \"11f3bb1ee2c5838d0c4f08a61fc1ec8c4aa740b4d21217876082daae442c5350\"" Feb 9 19:57:56.721355 env[1156]: time="2024-02-09T19:57:56.721338350Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:57:56.827220 kubelet[1535]: E0209 19:57:56.827200 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:57.212209 env[1156]: time="2024-02-09T19:57:57.212166987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:57.218481 env[1156]: time="2024-02-09T19:57:57.218460119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:57.222740 env[1156]: 
time="2024-02-09T19:57:57.222720740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:57.224904 env[1156]: time="2024-02-09T19:57:57.224880235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:57.225551 env[1156]: time="2024-02-09T19:57:57.225531344Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:57:57.227940 env[1156]: time="2024-02-09T19:57:57.227913615Z" level=info msg="CreateContainer within sandbox \"11f3bb1ee2c5838d0c4f08a61fc1ec8c4aa740b4d21217876082daae442c5350\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:57:57.240915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1823695530.mount: Deactivated successfully. Feb 9 19:57:57.243421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280672312.mount: Deactivated successfully. Feb 9 19:57:57.245480 env[1156]: time="2024-02-09T19:57:57.245456763Z" level=info msg="CreateContainer within sandbox \"11f3bb1ee2c5838d0c4f08a61fc1ec8c4aa740b4d21217876082daae442c5350\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"68052e1fc09a1bd775ea5280c871385c93f9ee5c95fb409bb60c295edfd2fe68\"" Feb 9 19:57:57.246125 env[1156]: time="2024-02-09T19:57:57.246102178Z" level=info msg="StartContainer for \"68052e1fc09a1bd775ea5280c871385c93f9ee5c95fb409bb60c295edfd2fe68\"" Feb 9 19:57:57.256498 systemd[1]: Started cri-containerd-68052e1fc09a1bd775ea5280c871385c93f9ee5c95fb409bb60c295edfd2fe68.scope. 
Feb 9 19:57:57.278550 env[1156]: time="2024-02-09T19:57:57.278520507Z" level=info msg="StartContainer for \"68052e1fc09a1bd775ea5280c871385c93f9ee5c95fb409bb60c295edfd2fe68\" returns successfully" Feb 9 19:57:57.827598 kubelet[1535]: E0209 19:57:57.827565 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:58.180442 kubelet[1535]: I0209 19:57:58.180229 1535 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372019674568e+09 pod.CreationTimestamp="2024-02-09 19:57:41 +0000 UTC" firstStartedPulling="2024-02-09 19:57:56.721192879 +0000 UTC m=+69.456221188" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:57:58.179951357 +0000 UTC m=+70.914979675" watchObservedRunningTime="2024-02-09 19:57:58.180207268 +0000 UTC m=+70.915235583" Feb 9 19:57:58.277613 systemd-networkd[1065]: lxc38037b0bec1a: Gained IPv6LL Feb 9 19:57:58.828590 kubelet[1535]: E0209 19:57:58.828566 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:57:59.829476 kubelet[1535]: E0209 19:57:59.829450 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:00.830306 kubelet[1535]: E0209 19:58:00.830274 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:01.830538 kubelet[1535]: E0209 19:58:01.830506 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:02.831610 kubelet[1535]: E0209 19:58:02.831586 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:03.770036 systemd[1]: 
run-containerd-runc-k8s.io-54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0-runc.bQ1kgX.mount: Deactivated successfully. Feb 9 19:58:03.800851 env[1156]: time="2024-02-09T19:58:03.800793940Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:58:03.805171 env[1156]: time="2024-02-09T19:58:03.805142115Z" level=info msg="StopContainer for \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\" with timeout 1 (s)" Feb 9 19:58:03.805575 env[1156]: time="2024-02-09T19:58:03.805542720Z" level=info msg="Stop container \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\" with signal terminated" Feb 9 19:58:03.811268 systemd-networkd[1065]: lxc_health: Link DOWN Feb 9 19:58:03.811273 systemd-networkd[1065]: lxc_health: Lost carrier Feb 9 19:58:03.832220 systemd[1]: cri-containerd-54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0.scope: Deactivated successfully. Feb 9 19:58:03.832434 systemd[1]: cri-containerd-54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0.scope: Consumed 4.580s CPU time. Feb 9 19:58:03.832877 kubelet[1535]: E0209 19:58:03.832668 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:03.846347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0-rootfs.mount: Deactivated successfully. 
Feb 9 19:58:04.349507 env[1156]: time="2024-02-09T19:58:04.349475997Z" level=info msg="shim disconnected" id=54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0 Feb 9 19:58:04.349692 env[1156]: time="2024-02-09T19:58:04.349680915Z" level=warning msg="cleaning up after shim disconnected" id=54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0 namespace=k8s.io Feb 9 19:58:04.349758 env[1156]: time="2024-02-09T19:58:04.349746559Z" level=info msg="cleaning up dead shim" Feb 9 19:58:04.355974 env[1156]: time="2024-02-09T19:58:04.355934946Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:58:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3138 runtime=io.containerd.runc.v2\n" Feb 9 19:58:04.357042 env[1156]: time="2024-02-09T19:58:04.357011555Z" level=info msg="StopContainer for \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\" returns successfully" Feb 9 19:58:04.357536 env[1156]: time="2024-02-09T19:58:04.357513964Z" level=info msg="StopPodSandbox for \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\"" Feb 9 19:58:04.357672 env[1156]: time="2024-02-09T19:58:04.357658620Z" level=info msg="Container to stop \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:58:04.357728 env[1156]: time="2024-02-09T19:58:04.357715732Z" level=info msg="Container to stop \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:58:04.357784 env[1156]: time="2024-02-09T19:58:04.357772638Z" level=info msg="Container to stop \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:58:04.357835 env[1156]: time="2024-02-09T19:58:04.357823750Z" level=info msg="Container to stop 
\"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:58:04.357886 env[1156]: time="2024-02-09T19:58:04.357875252Z" level=info msg="Container to stop \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:58:04.359208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9-shm.mount: Deactivated successfully. Feb 9 19:58:04.363921 systemd[1]: cri-containerd-a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9.scope: Deactivated successfully. Feb 9 19:58:04.382368 env[1156]: time="2024-02-09T19:58:04.382319088Z" level=info msg="shim disconnected" id=a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9 Feb 9 19:58:04.382368 env[1156]: time="2024-02-09T19:58:04.382359440Z" level=warning msg="cleaning up after shim disconnected" id=a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9 namespace=k8s.io Feb 9 19:58:04.382368 env[1156]: time="2024-02-09T19:58:04.382367497Z" level=info msg="cleaning up dead shim" Feb 9 19:58:04.389003 env[1156]: time="2024-02-09T19:58:04.388962585Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:58:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3169 runtime=io.containerd.runc.v2\n" Feb 9 19:58:04.391167 env[1156]: time="2024-02-09T19:58:04.391129565Z" level=info msg="TearDown network for sandbox \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" successfully" Feb 9 19:58:04.391167 env[1156]: time="2024-02-09T19:58:04.391161312Z" level=info msg="StopPodSandbox for \"a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9\" returns successfully" Feb 9 19:58:04.509910 kubelet[1535]: I0209 19:58:04.509219 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/947522a3-86b0-4997-be54-94d8502b096e-clustermesh-secrets\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.509910 kubelet[1535]: I0209 19:58:04.509253 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-host-proc-sys-net\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.509910 kubelet[1535]: I0209 19:58:04.509265 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cni-path\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.509910 kubelet[1535]: I0209 19:58:04.509275 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-hostproc\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.509910 kubelet[1535]: I0209 19:58:04.509287 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-lib-modules\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.509910 kubelet[1535]: I0209 19:58:04.509296 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cilium-cgroup\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.510160 kubelet[1535]: I0209 19:58:04.509306 
1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-xtables-lock\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.510160 kubelet[1535]: I0209 19:58:04.509319 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/947522a3-86b0-4997-be54-94d8502b096e-cilium-config-path\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.510160 kubelet[1535]: I0209 19:58:04.509329 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cilium-run\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.510160 kubelet[1535]: I0209 19:58:04.509339 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-etc-cni-netd\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.510160 kubelet[1535]: I0209 19:58:04.509351 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/947522a3-86b0-4997-be54-94d8502b096e-hubble-tls\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.510160 kubelet[1535]: I0209 19:58:04.509362 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mffjp\" (UniqueName: \"kubernetes.io/projected/947522a3-86b0-4997-be54-94d8502b096e-kube-api-access-mffjp\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: 
\"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.510311 kubelet[1535]: I0209 19:58:04.509372 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-bpf-maps\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.510311 kubelet[1535]: I0209 19:58:04.509383 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-host-proc-sys-kernel\") pod \"947522a3-86b0-4997-be54-94d8502b096e\" (UID: \"947522a3-86b0-4997-be54-94d8502b096e\") " Feb 9 19:58:04.510311 kubelet[1535]: I0209 19:58:04.509427 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.510311 kubelet[1535]: I0209 19:58:04.509483 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.510311 kubelet[1535]: W0209 19:58:04.509582 1535 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/947522a3-86b0-4997-be54-94d8502b096e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:58:04.513362 kubelet[1535]: I0209 19:58:04.510454 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.513362 kubelet[1535]: I0209 19:58:04.510475 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cni-path" (OuterVolumeSpecName: "cni-path") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.513362 kubelet[1535]: I0209 19:58:04.510487 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-hostproc" (OuterVolumeSpecName: "hostproc") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.513362 kubelet[1535]: I0209 19:58:04.510501 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.513362 kubelet[1535]: I0209 19:58:04.510517 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.513600 kubelet[1535]: I0209 19:58:04.510774 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947522a3-86b0-4997-be54-94d8502b096e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:58:04.513600 kubelet[1535]: I0209 19:58:04.510904 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.513600 kubelet[1535]: I0209 19:58:04.510919 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.513600 kubelet[1535]: I0209 19:58:04.510930 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:04.513600 kubelet[1535]: I0209 19:58:04.512098 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947522a3-86b0-4997-be54-94d8502b096e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:58:04.513779 kubelet[1535]: I0209 19:58:04.513764 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947522a3-86b0-4997-be54-94d8502b096e-kube-api-access-mffjp" (OuterVolumeSpecName: "kube-api-access-mffjp") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "kube-api-access-mffjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:58:04.514088 kubelet[1535]: I0209 19:58:04.514077 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947522a3-86b0-4997-be54-94d8502b096e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "947522a3-86b0-4997-be54-94d8502b096e" (UID: "947522a3-86b0-4997-be54-94d8502b096e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:58:04.610409 kubelet[1535]: I0209 19:58:04.610325 1535 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/947522a3-86b0-4997-be54-94d8502b096e-clustermesh-secrets\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.610549 kubelet[1535]: I0209 19:58:04.610537 1535 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-host-proc-sys-net\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.610638 kubelet[1535]: I0209 19:58:04.610629 1535 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cni-path\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.610710 kubelet[1535]: I0209 19:58:04.610701 1535 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-hostproc\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.610787 kubelet[1535]: I0209 19:58:04.610773 1535 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-lib-modules\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.610852 kubelet[1535]: I0209 19:58:04.610843 1535 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cilium-cgroup\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.610924 kubelet[1535]: I0209 19:58:04.610916 1535 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/947522a3-86b0-4997-be54-94d8502b096e-hubble-tls\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.611002 kubelet[1535]: I0209 
19:58:04.610994 1535 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-xtables-lock\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.611077 kubelet[1535]: I0209 19:58:04.611069 1535 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/947522a3-86b0-4997-be54-94d8502b096e-cilium-config-path\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.611143 kubelet[1535]: I0209 19:58:04.611134 1535 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-cilium-run\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.611219 kubelet[1535]: I0209 19:58:04.611211 1535 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-etc-cni-netd\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.611290 kubelet[1535]: I0209 19:58:04.611275 1535 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-mffjp\" (UniqueName: \"kubernetes.io/projected/947522a3-86b0-4997-be54-94d8502b096e-kube-api-access-mffjp\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.611364 kubelet[1535]: I0209 19:58:04.611348 1535 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-bpf-maps\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.611468 kubelet[1535]: I0209 19:58:04.611459 1535 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/947522a3-86b0-4997-be54-94d8502b096e-host-proc-sys-kernel\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:04.767178 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-a8317f531737b19479a895ad25825622b30ea5032adbe23a596ed833d422afd9-rootfs.mount: Deactivated successfully. Feb 9 19:58:04.767236 systemd[1]: var-lib-kubelet-pods-947522a3\x2d86b0\x2d4997\x2dbe54\x2d94d8502b096e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmffjp.mount: Deactivated successfully. Feb 9 19:58:04.767274 systemd[1]: var-lib-kubelet-pods-947522a3\x2d86b0\x2d4997\x2dbe54\x2d94d8502b096e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:58:04.767315 systemd[1]: var-lib-kubelet-pods-947522a3\x2d86b0\x2d4997\x2dbe54\x2d94d8502b096e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:58:04.832962 kubelet[1535]: E0209 19:58:04.832906 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:05.185630 kubelet[1535]: I0209 19:58:05.185605 1535 scope.go:115] "RemoveContainer" containerID="54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0" Feb 9 19:58:05.187467 systemd[1]: Removed slice kubepods-burstable-pod947522a3_86b0_4997_be54_94d8502b096e.slice. Feb 9 19:58:05.187539 systemd[1]: kubepods-burstable-pod947522a3_86b0_4997_be54_94d8502b096e.slice: Consumed 4.651s CPU time. 
Feb 9 19:58:05.188376 env[1156]: time="2024-02-09T19:58:05.188355932Z" level=info msg="RemoveContainer for \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\"" Feb 9 19:58:05.203218 env[1156]: time="2024-02-09T19:58:05.203190339Z" level=info msg="RemoveContainer for \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\" returns successfully" Feb 9 19:58:05.203551 kubelet[1535]: I0209 19:58:05.203532 1535 scope.go:115] "RemoveContainer" containerID="88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932" Feb 9 19:58:05.204070 env[1156]: time="2024-02-09T19:58:05.204048154Z" level=info msg="RemoveContainer for \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\"" Feb 9 19:58:05.214756 env[1156]: time="2024-02-09T19:58:05.214719554Z" level=info msg="RemoveContainer for \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\" returns successfully" Feb 9 19:58:05.214878 kubelet[1535]: I0209 19:58:05.214864 1535 scope.go:115] "RemoveContainer" containerID="53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886" Feb 9 19:58:05.215482 env[1156]: time="2024-02-09T19:58:05.215463739Z" level=info msg="RemoveContainer for \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\"" Feb 9 19:58:05.222621 env[1156]: time="2024-02-09T19:58:05.222588599Z" level=info msg="RemoveContainer for \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\" returns successfully" Feb 9 19:58:05.222731 kubelet[1535]: I0209 19:58:05.222716 1535 scope.go:115] "RemoveContainer" containerID="d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8" Feb 9 19:58:05.223324 env[1156]: time="2024-02-09T19:58:05.223305169Z" level=info msg="RemoveContainer for \"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\"" Feb 9 19:58:05.226140 env[1156]: time="2024-02-09T19:58:05.226118897Z" level=info msg="RemoveContainer for 
\"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\" returns successfully" Feb 9 19:58:05.226224 kubelet[1535]: I0209 19:58:05.226209 1535 scope.go:115] "RemoveContainer" containerID="22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a" Feb 9 19:58:05.226710 env[1156]: time="2024-02-09T19:58:05.226693148Z" level=info msg="RemoveContainer for \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\"" Feb 9 19:58:05.228567 env[1156]: time="2024-02-09T19:58:05.228542516Z" level=info msg="RemoveContainer for \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\" returns successfully" Feb 9 19:58:05.228676 kubelet[1535]: I0209 19:58:05.228659 1535 scope.go:115] "RemoveContainer" containerID="54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0" Feb 9 19:58:05.228845 env[1156]: time="2024-02-09T19:58:05.228792309Z" level=error msg="ContainerStatus for \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\": not found" Feb 9 19:58:05.228938 kubelet[1535]: E0209 19:58:05.228922 1535 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\": not found" containerID="54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0" Feb 9 19:58:05.228980 kubelet[1535]: I0209 19:58:05.228959 1535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0} err="failed to get container status \"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"54325f9f745010209a243195f02a5cd2503154716f9f3f71fcaf34310d6ec7b0\": not found" Feb 9 19:58:05.228980 kubelet[1535]: I0209 19:58:05.228970 1535 scope.go:115] "RemoveContainer" containerID="88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932" Feb 9 19:58:05.229094 env[1156]: time="2024-02-09T19:58:05.229062648Z" level=error msg="ContainerStatus for \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\": not found" Feb 9 19:58:05.229139 kubelet[1535]: E0209 19:58:05.229133 1535 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\": not found" containerID="88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932" Feb 9 19:58:05.229171 kubelet[1535]: I0209 19:58:05.229147 1535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932} err="failed to get container status \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\": rpc error: code = NotFound desc = an error occurred when try to find container \"88e34583c4a1e66e2e68908f3328e6ff6ca0173fa7bc00d31112a2b3a24bf932\": not found" Feb 9 19:58:05.229171 kubelet[1535]: I0209 19:58:05.229153 1535 scope.go:115] "RemoveContainer" containerID="53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886" Feb 9 19:58:05.229247 env[1156]: time="2024-02-09T19:58:05.229219852Z" level=error msg="ContainerStatus for \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\": not 
found" Feb 9 19:58:05.229295 kubelet[1535]: E0209 19:58:05.229286 1535 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\": not found" containerID="53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886" Feb 9 19:58:05.229327 kubelet[1535]: I0209 19:58:05.229299 1535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886} err="failed to get container status \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\": rpc error: code = NotFound desc = an error occurred when try to find container \"53380a580a9b5a152599c6c2f3d41476e2ae42b3e28c7178fc5121060eb45886\": not found" Feb 9 19:58:05.229327 kubelet[1535]: I0209 19:58:05.229304 1535 scope.go:115] "RemoveContainer" containerID="d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8" Feb 9 19:58:05.229403 env[1156]: time="2024-02-09T19:58:05.229375556Z" level=error msg="ContainerStatus for \"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\": not found" Feb 9 19:58:05.229465 kubelet[1535]: E0209 19:58:05.229454 1535 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\": not found" containerID="d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8" Feb 9 19:58:05.229500 kubelet[1535]: I0209 19:58:05.229469 1535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8} err="failed to get container status \"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d065b0dac56b5234587b97cf0f276dda92d89ca2b2b13322259b62e343c831f8\": not found" Feb 9 19:58:05.229500 kubelet[1535]: I0209 19:58:05.229475 1535 scope.go:115] "RemoveContainer" containerID="22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a" Feb 9 19:58:05.229566 env[1156]: time="2024-02-09T19:58:05.229539379Z" level=error msg="ContainerStatus for \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\": not found" Feb 9 19:58:05.229618 kubelet[1535]: E0209 19:58:05.229608 1535 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\": not found" containerID="22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a" Feb 9 19:58:05.229661 kubelet[1535]: I0209 19:58:05.229621 1535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a} err="failed to get container status \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"22a7229f87af80439e6fa343f27fdca7252269809bc85d58af2a3dc502f9ae4a\": not found" Feb 9 19:58:05.833375 kubelet[1535]: E0209 19:58:05.833350 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:06.071512 kubelet[1535]: I0209 19:58:06.071494 1535 kubelet_volumes.go:160] 
"Cleaned up orphaned pod volumes dir" podUID=947522a3-86b0-4997-be54-94d8502b096e path="/var/lib/kubelet/pods/947522a3-86b0-4997-be54-94d8502b096e/volumes" Feb 9 19:58:06.293372 kubelet[1535]: I0209 19:58:06.293345 1535 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:58:06.293479 kubelet[1535]: E0209 19:58:06.293385 1535 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="947522a3-86b0-4997-be54-94d8502b096e" containerName="apply-sysctl-overwrites" Feb 9 19:58:06.293479 kubelet[1535]: E0209 19:58:06.293392 1535 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="947522a3-86b0-4997-be54-94d8502b096e" containerName="cilium-agent" Feb 9 19:58:06.293479 kubelet[1535]: E0209 19:58:06.293396 1535 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="947522a3-86b0-4997-be54-94d8502b096e" containerName="mount-cgroup" Feb 9 19:58:06.293479 kubelet[1535]: E0209 19:58:06.293400 1535 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="947522a3-86b0-4997-be54-94d8502b096e" containerName="mount-bpf-fs" Feb 9 19:58:06.293479 kubelet[1535]: E0209 19:58:06.293404 1535 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="947522a3-86b0-4997-be54-94d8502b096e" containerName="clean-cilium-state" Feb 9 19:58:06.293479 kubelet[1535]: I0209 19:58:06.293421 1535 memory_manager.go:346] "RemoveStaleState removing state" podUID="947522a3-86b0-4997-be54-94d8502b096e" containerName="cilium-agent" Feb 9 19:58:06.296256 systemd[1]: Created slice kubepods-burstable-poda8426f76_35a4_40e1_8daf_9d3354d92a73.slice. 
Feb 9 19:58:06.421914 kubelet[1535]: I0209 19:58:06.421890 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-lib-modules\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.422120 kubelet[1535]: I0209 19:58:06.422109 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-config-path\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.422230 kubelet[1535]: I0209 19:58:06.422221 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8426f76-35a4-40e1-8daf-9d3354d92a73-hubble-tls\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.422335 kubelet[1535]: I0209 19:58:06.422326 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8426f76-35a4-40e1-8daf-9d3354d92a73-clustermesh-secrets\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.422462 kubelet[1535]: I0209 19:58:06.422452 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-bpf-maps\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.422573 kubelet[1535]: I0209 19:58:06.422564 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-cgroup\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.422672 kubelet[1535]: I0209 19:58:06.422663 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cni-path\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.422770 kubelet[1535]: I0209 19:58:06.422762 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-host-proc-sys-net\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.422873 kubelet[1535]: I0209 19:58:06.422865 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-run\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.422980 kubelet[1535]: I0209 19:58:06.422970 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-hostproc\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.423098 kubelet[1535]: I0209 19:58:06.423079 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-etc-cni-netd\") pod \"cilium-l5slj\" (UID: 
\"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.423153 kubelet[1535]: I0209 19:58:06.423114 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-xtables-lock\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.423153 kubelet[1535]: I0209 19:58:06.423141 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-ipsec-secrets\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.423208 kubelet[1535]: I0209 19:58:06.423172 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-host-proc-sys-kernel\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.423208 kubelet[1535]: I0209 19:58:06.423201 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvbwf\" (UniqueName: \"kubernetes.io/projected/a8426f76-35a4-40e1-8daf-9d3354d92a73-kube-api-access-wvbwf\") pod \"cilium-l5slj\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " pod="kube-system/cilium-l5slj" Feb 9 19:58:06.641822 kubelet[1535]: I0209 19:58:06.641739 1535 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:58:06.646222 systemd[1]: Created slice kubepods-besteffort-podacd34a32_8b43_40a0_a0cd_1f9ac85224f2.slice. 
Feb 9 19:58:06.725598 kubelet[1535]: I0209 19:58:06.725544 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9f2\" (UniqueName: \"kubernetes.io/projected/acd34a32-8b43-40a0-a0cd-1f9ac85224f2-kube-api-access-8n9f2\") pod \"cilium-operator-f59cbd8c6-g4c5d\" (UID: \"acd34a32-8b43-40a0-a0cd-1f9ac85224f2\") " pod="kube-system/cilium-operator-f59cbd8c6-g4c5d" Feb 9 19:58:06.725598 kubelet[1535]: I0209 19:58:06.725606 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd34a32-8b43-40a0-a0cd-1f9ac85224f2-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-g4c5d\" (UID: \"acd34a32-8b43-40a0-a0cd-1f9ac85224f2\") " pod="kube-system/cilium-operator-f59cbd8c6-g4c5d" Feb 9 19:58:06.836586 kubelet[1535]: E0209 19:58:06.836564 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:06.908771 env[1156]: time="2024-02-09T19:58:06.908690700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l5slj,Uid:a8426f76-35a4-40e1-8daf-9d3354d92a73,Namespace:kube-system,Attempt:0,}" Feb 9 19:58:06.949177 env[1156]: time="2024-02-09T19:58:06.949008639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-g4c5d,Uid:acd34a32-8b43-40a0-a0cd-1f9ac85224f2,Namespace:kube-system,Attempt:0,}" Feb 9 19:58:07.016534 env[1156]: time="2024-02-09T19:58:07.016495098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:58:07.016675 env[1156]: time="2024-02-09T19:58:07.016658515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:58:07.016752 env[1156]: time="2024-02-09T19:58:07.016738598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:58:07.016926 env[1156]: time="2024-02-09T19:58:07.016906390Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3 pid=3197 runtime=io.containerd.runc.v2 Feb 9 19:58:07.029874 systemd[1]: Started cri-containerd-5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3.scope. Feb 9 19:58:07.046009 env[1156]: time="2024-02-09T19:58:07.045983149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l5slj,Uid:a8426f76-35a4-40e1-8daf-9d3354d92a73,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3\"" Feb 9 19:58:07.047674 env[1156]: time="2024-02-09T19:58:07.047653591Z" level=info msg="CreateContainer within sandbox \"5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:58:07.058606 env[1156]: time="2024-02-09T19:58:07.058559284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:58:07.058606 env[1156]: time="2024-02-09T19:58:07.058582898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:58:07.058757 env[1156]: time="2024-02-09T19:58:07.058590059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:58:07.059037 env[1156]: time="2024-02-09T19:58:07.058847754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/459ac24f056e9d9282b923faf90858e990fd88774f0b2da799ca867acb9d3b59 pid=3240 runtime=io.containerd.runc.v2 Feb 9 19:58:07.067032 systemd[1]: Started cri-containerd-459ac24f056e9d9282b923faf90858e990fd88774f0b2da799ca867acb9d3b59.scope. Feb 9 19:58:07.096030 env[1156]: time="2024-02-09T19:58:07.095997550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-g4c5d,Uid:acd34a32-8b43-40a0-a0cd-1f9ac85224f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"459ac24f056e9d9282b923faf90858e990fd88774f0b2da799ca867acb9d3b59\"" Feb 9 19:58:07.096859 env[1156]: time="2024-02-09T19:58:07.096834120Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:58:07.140700 env[1156]: time="2024-02-09T19:58:07.140655940Z" level=info msg="CreateContainer within sandbox \"5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7\"" Feb 9 19:58:07.141171 env[1156]: time="2024-02-09T19:58:07.141157533Z" level=info msg="StartContainer for \"d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7\"" Feb 9 19:58:07.150932 systemd[1]: Started cri-containerd-d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7.scope. Feb 9 19:58:07.157932 systemd[1]: cri-containerd-d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7.scope: Deactivated successfully. Feb 9 19:58:07.158109 systemd[1]: Stopped cri-containerd-d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7.scope. 
Feb 9 19:58:07.197012 env[1156]: time="2024-02-09T19:58:07.196925848Z" level=info msg="shim disconnected" id=d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7 Feb 9 19:58:07.197012 env[1156]: time="2024-02-09T19:58:07.196959045Z" level=warning msg="cleaning up after shim disconnected" id=d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7 namespace=k8s.io Feb 9 19:58:07.197012 env[1156]: time="2024-02-09T19:58:07.196965670Z" level=info msg="cleaning up dead shim" Feb 9 19:58:07.202404 env[1156]: time="2024-02-09T19:58:07.202366882Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:58:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3298 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:58:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:58:07.202609 env[1156]: time="2024-02-09T19:58:07.202542215Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Feb 9 19:58:07.203509 env[1156]: time="2024-02-09T19:58:07.203485509Z" level=error msg="Failed to pipe stdout of container \"d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7\"" error="reading from a closed fifo" Feb 9 19:58:07.203583 env[1156]: time="2024-02-09T19:58:07.203568196Z" level=error msg="Failed to pipe stderr of container \"d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7\"" error="reading from a closed fifo" Feb 9 19:58:07.213471 env[1156]: time="2024-02-09T19:58:07.213407587Z" level=error msg="StartContainer for \"d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:58:07.213601 kubelet[1535]: E0209 19:58:07.213588 1535 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7" Feb 9 19:58:07.213845 kubelet[1535]: E0209 19:58:07.213676 1535 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:58:07.213845 kubelet[1535]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:58:07.213845 kubelet[1535]: rm /hostbin/cilium-mount Feb 9 19:58:07.213845 kubelet[1535]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wvbwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-l5slj_kube-system(a8426f76-35a4-40e1-8daf-9d3354d92a73): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:58:07.214009 kubelet[1535]: E0209 19:58:07.213709 1535 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-l5slj" podUID=a8426f76-35a4-40e1-8daf-9d3354d92a73 Feb 9 19:58:07.781535 kubelet[1535]: E0209 19:58:07.781486 1535 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:07.837098 kubelet[1535]: E0209 19:58:07.837006 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:07.878770 kubelet[1535]: E0209 19:58:07.878747 1535 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:58:08.191369 env[1156]: time="2024-02-09T19:58:08.191337499Z" level=info msg="StopPodSandbox for 
\"5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3\"" Feb 9 19:58:08.191788 env[1156]: time="2024-02-09T19:58:08.191772783Z" level=info msg="Container to stop \"d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:58:08.193009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3-shm.mount: Deactivated successfully. Feb 9 19:58:08.198886 systemd[1]: cri-containerd-5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3.scope: Deactivated successfully. Feb 9 19:58:08.211227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3-rootfs.mount: Deactivated successfully. Feb 9 19:58:08.307954 env[1156]: time="2024-02-09T19:58:08.307912635Z" level=info msg="shim disconnected" id=5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3 Feb 9 19:58:08.308253 env[1156]: time="2024-02-09T19:58:08.308238257Z" level=warning msg="cleaning up after shim disconnected" id=5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3 namespace=k8s.io Feb 9 19:58:08.308324 env[1156]: time="2024-02-09T19:58:08.308314492Z" level=info msg="cleaning up dead shim" Feb 9 19:58:08.313566 env[1156]: time="2024-02-09T19:58:08.313538754Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:58:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3327 runtime=io.containerd.runc.v2\n" Feb 9 19:58:08.313744 env[1156]: time="2024-02-09T19:58:08.313722908Z" level=info msg="TearDown network for sandbox \"5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3\" successfully" Feb 9 19:58:08.313787 env[1156]: time="2024-02-09T19:58:08.313740687Z" level=info msg="StopPodSandbox for \"5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3\" returns successfully" Feb 9 19:58:08.334594 
kubelet[1535]: I0209 19:58:08.334566 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-config-path\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334742 kubelet[1535]: I0209 19:58:08.334677 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-bpf-maps\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334742 kubelet[1535]: I0209 19:58:08.334693 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-cgroup\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334742 kubelet[1535]: I0209 19:58:08.334705 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-host-proc-sys-net\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334742 kubelet[1535]: I0209 19:58:08.334715 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-run\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334742 kubelet[1535]: I0209 19:58:08.334725 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-lib-modules\") pod 
\"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334855 kubelet[1535]: I0209 19:58:08.334760 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-hostproc\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334855 kubelet[1535]: I0209 19:58:08.334778 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvbwf\" (UniqueName: \"kubernetes.io/projected/a8426f76-35a4-40e1-8daf-9d3354d92a73-kube-api-access-wvbwf\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334855 kubelet[1535]: I0209 19:58:08.334796 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8426f76-35a4-40e1-8daf-9d3354d92a73-hubble-tls\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334855 kubelet[1535]: I0209 19:58:08.334808 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8426f76-35a4-40e1-8daf-9d3354d92a73-clustermesh-secrets\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334855 kubelet[1535]: I0209 19:58:08.334827 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-ipsec-secrets\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334855 kubelet[1535]: I0209 19:58:08.334841 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cni-path\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334969 kubelet[1535]: I0209 19:58:08.334851 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-etc-cni-netd\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334969 kubelet[1535]: I0209 19:58:08.334861 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-xtables-lock\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334969 kubelet[1535]: I0209 19:58:08.334870 1535 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-host-proc-sys-kernel\") pod \"a8426f76-35a4-40e1-8daf-9d3354d92a73\" (UID: \"a8426f76-35a4-40e1-8daf-9d3354d92a73\") " Feb 9 19:58:08.334969 kubelet[1535]: I0209 19:58:08.334902 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.334969 kubelet[1535]: I0209 19:58:08.334919 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.335062 kubelet[1535]: I0209 19:58:08.334929 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.335062 kubelet[1535]: I0209 19:58:08.334939 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.335062 kubelet[1535]: I0209 19:58:08.334949 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.335062 kubelet[1535]: I0209 19:58:08.334959 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.335062 kubelet[1535]: I0209 19:58:08.334968 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-hostproc" (OuterVolumeSpecName: "hostproc") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.335313 kubelet[1535]: I0209 19:58:08.335230 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cni-path" (OuterVolumeSpecName: "cni-path") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.335313 kubelet[1535]: I0209 19:58:08.335251 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.335313 kubelet[1535]: I0209 19:58:08.335262 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:58:08.347929 kubelet[1535]: W0209 19:58:08.346865 1535 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a8426f76-35a4-40e1-8daf-9d3354d92a73/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:58:08.347929 kubelet[1535]: I0209 19:58:08.347886 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:58:08.349028 systemd[1]: var-lib-kubelet-pods-a8426f76\x2d35a4\x2d40e1\x2d8daf\x2d9d3354d92a73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwvbwf.mount: Deactivated successfully. Feb 9 19:58:08.349730 kubelet[1535]: I0209 19:58:08.349716 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8426f76-35a4-40e1-8daf-9d3354d92a73-kube-api-access-wvbwf" (OuterVolumeSpecName: "kube-api-access-wvbwf") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "kube-api-access-wvbwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:58:08.351537 systemd[1]: var-lib-kubelet-pods-a8426f76\x2d35a4\x2d40e1\x2d8daf\x2d9d3354d92a73-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:58:08.352252 kubelet[1535]: I0209 19:58:08.352238 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:58:08.352312 kubelet[1535]: I0209 19:58:08.352275 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8426f76-35a4-40e1-8daf-9d3354d92a73-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:58:08.353521 kubelet[1535]: I0209 19:58:08.353506 1535 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8426f76-35a4-40e1-8daf-9d3354d92a73-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a8426f76-35a4-40e1-8daf-9d3354d92a73" (UID: "a8426f76-35a4-40e1-8daf-9d3354d92a73"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:58:08.435219 kubelet[1535]: I0209 19:58:08.435197 1535 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-config-path\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435349 kubelet[1535]: I0209 19:58:08.435341 1535 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-bpf-maps\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435408 kubelet[1535]: I0209 19:58:08.435401 1535 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-cgroup\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435483 kubelet[1535]: I0209 19:58:08.435476 1535 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-host-proc-sys-net\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435536 kubelet[1535]: I0209 19:58:08.435529 1535 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-run\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435584 kubelet[1535]: I0209 19:58:08.435578 1535 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-lib-modules\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435634 kubelet[1535]: I0209 19:58:08.435627 1535 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-hostproc\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435683 kubelet[1535]: I0209 
19:58:08.435677 1535 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-wvbwf\" (UniqueName: \"kubernetes.io/projected/a8426f76-35a4-40e1-8daf-9d3354d92a73-kube-api-access-wvbwf\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435732 kubelet[1535]: I0209 19:58:08.435724 1535 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8426f76-35a4-40e1-8daf-9d3354d92a73-hubble-tls\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435780 kubelet[1535]: I0209 19:58:08.435773 1535 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8426f76-35a4-40e1-8daf-9d3354d92a73-clustermesh-secrets\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435828 kubelet[1535]: I0209 19:58:08.435822 1535 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8426f76-35a4-40e1-8daf-9d3354d92a73-cilium-ipsec-secrets\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435877 kubelet[1535]: I0209 19:58:08.435870 1535 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-cni-path\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435924 kubelet[1535]: I0209 19:58:08.435918 1535 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-etc-cni-netd\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.435996 kubelet[1535]: I0209 19:58:08.435989 1535 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-xtables-lock\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.436044 kubelet[1535]: I0209 19:58:08.436038 1535 reconciler_common.go:295] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8426f76-35a4-40e1-8daf-9d3354d92a73-host-proc-sys-kernel\") on node \"10.67.124.136\" DevicePath \"\"" Feb 9 19:58:08.528458 systemd[1]: var-lib-kubelet-pods-a8426f76\x2d35a4\x2d40e1\x2d8daf\x2d9d3354d92a73-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:58:08.528552 systemd[1]: var-lib-kubelet-pods-a8426f76\x2d35a4\x2d40e1\x2d8daf\x2d9d3354d92a73-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:58:08.837956 kubelet[1535]: E0209 19:58:08.837931 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:09.193067 kubelet[1535]: I0209 19:58:09.192980 1535 scope.go:115] "RemoveContainer" containerID="d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7" Feb 9 19:58:09.195291 env[1156]: time="2024-02-09T19:58:09.194959334Z" level=info msg="RemoveContainer for \"d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7\"" Feb 9 19:58:09.195223 systemd[1]: Removed slice kubepods-burstable-poda8426f76_35a4_40e1_8daf_9d3354d92a73.slice. 
Feb 9 19:58:09.196985 env[1156]: time="2024-02-09T19:58:09.196963053Z" level=info msg="RemoveContainer for \"d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7\" returns successfully" Feb 9 19:58:09.213905 kubelet[1535]: I0209 19:58:09.213883 1535 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:58:09.214069 kubelet[1535]: E0209 19:58:09.214061 1535 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8426f76-35a4-40e1-8daf-9d3354d92a73" containerName="mount-cgroup" Feb 9 19:58:09.214145 kubelet[1535]: I0209 19:58:09.214136 1535 memory_manager.go:346] "RemoveStaleState removing state" podUID="a8426f76-35a4-40e1-8daf-9d3354d92a73" containerName="mount-cgroup" Feb 9 19:58:09.218052 systemd[1]: Created slice kubepods-burstable-pode038386b_dc4d_4e76_a306_e5160870bb0c.slice. Feb 9 19:58:09.240164 kubelet[1535]: I0209 19:58:09.240145 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e038386b-dc4d-4e76-a306-e5160870bb0c-clustermesh-secrets\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240314 kubelet[1535]: I0209 19:58:09.240306 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e038386b-dc4d-4e76-a306-e5160870bb0c-cilium-config-path\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240422 kubelet[1535]: I0209 19:58:09.240414 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-cni-path\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240506 kubelet[1535]: I0209 
19:58:09.240499 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e038386b-dc4d-4e76-a306-e5160870bb0c-hubble-tls\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240578 kubelet[1535]: I0209 19:58:09.240572 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc65h\" (UniqueName: \"kubernetes.io/projected/e038386b-dc4d-4e76-a306-e5160870bb0c-kube-api-access-jc65h\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240649 kubelet[1535]: I0209 19:58:09.240642 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-bpf-maps\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240734 kubelet[1535]: I0209 19:58:09.240719 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-hostproc\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240791 kubelet[1535]: I0209 19:58:09.240743 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-host-proc-sys-net\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240791 kubelet[1535]: I0209 19:58:09.240756 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-etc-cni-netd\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240791 kubelet[1535]: I0209 19:58:09.240767 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-lib-modules\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240791 kubelet[1535]: I0209 19:58:09.240778 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-xtables-lock\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240791 kubelet[1535]: I0209 19:58:09.240789 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-cilium-run\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240901 kubelet[1535]: I0209 19:58:09.240800 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-cilium-cgroup\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240901 kubelet[1535]: I0209 19:58:09.240813 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e038386b-dc4d-4e76-a306-e5160870bb0c-cilium-ipsec-secrets\") pod \"cilium-sw9wm\" (UID: 
\"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.240901 kubelet[1535]: I0209 19:58:09.240824 1535 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e038386b-dc4d-4e76-a306-e5160870bb0c-host-proc-sys-kernel\") pod \"cilium-sw9wm\" (UID: \"e038386b-dc4d-4e76-a306-e5160870bb0c\") " pod="kube-system/cilium-sw9wm" Feb 9 19:58:09.254077 env[1156]: time="2024-02-09T19:58:09.254053742Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:58:09.255026 env[1156]: time="2024-02-09T19:58:09.255005340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:58:09.258522 env[1156]: time="2024-02-09T19:58:09.258498173Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:58:09.258716 env[1156]: time="2024-02-09T19:58:09.258701205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:58:09.260757 env[1156]: time="2024-02-09T19:58:09.260732462Z" level=info msg="CreateContainer within sandbox \"459ac24f056e9d9282b923faf90858e990fd88774f0b2da799ca867acb9d3b59\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:58:09.265977 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount756162611.mount: Deactivated successfully. Feb 9 19:58:09.268358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054536512.mount: Deactivated successfully. Feb 9 19:58:09.281869 env[1156]: time="2024-02-09T19:58:09.281834023Z" level=info msg="CreateContainer within sandbox \"459ac24f056e9d9282b923faf90858e990fd88774f0b2da799ca867acb9d3b59\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d74228953e3f0ee75f1ec1a3ea29efa6c4c8ae2014a48e9464e17a0de3069fc9\"" Feb 9 19:58:09.282400 env[1156]: time="2024-02-09T19:58:09.282375954Z" level=info msg="StartContainer for \"d74228953e3f0ee75f1ec1a3ea29efa6c4c8ae2014a48e9464e17a0de3069fc9\"" Feb 9 19:58:09.292838 systemd[1]: Started cri-containerd-d74228953e3f0ee75f1ec1a3ea29efa6c4c8ae2014a48e9464e17a0de3069fc9.scope. Feb 9 19:58:09.318390 env[1156]: time="2024-02-09T19:58:09.318362401Z" level=info msg="StartContainer for \"d74228953e3f0ee75f1ec1a3ea29efa6c4c8ae2014a48e9464e17a0de3069fc9\" returns successfully" Feb 9 19:58:09.525080 env[1156]: time="2024-02-09T19:58:09.525014555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sw9wm,Uid:e038386b-dc4d-4e76-a306-e5160870bb0c,Namespace:kube-system,Attempt:0,}" Feb 9 19:58:09.538483 env[1156]: time="2024-02-09T19:58:09.538421964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:58:09.538597 env[1156]: time="2024-02-09T19:58:09.538476749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:58:09.538597 env[1156]: time="2024-02-09T19:58:09.538486114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:58:09.538597 env[1156]: time="2024-02-09T19:58:09.538558193Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45 pid=3395 runtime=io.containerd.runc.v2 Feb 9 19:58:09.549708 systemd[1]: Started cri-containerd-28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45.scope. Feb 9 19:58:09.566001 env[1156]: time="2024-02-09T19:58:09.565970072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sw9wm,Uid:e038386b-dc4d-4e76-a306-e5160870bb0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\"" Feb 9 19:58:09.567528 env[1156]: time="2024-02-09T19:58:09.567508751Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:58:09.590279 env[1156]: time="2024-02-09T19:58:09.590247815Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9978787fa2acfe9cc83a9b5b0cf489b1d5edaf51b2cfbdb6f69042ea30f367d7\"" Feb 9 19:58:09.590620 env[1156]: time="2024-02-09T19:58:09.590600290Z" level=info msg="StartContainer for \"9978787fa2acfe9cc83a9b5b0cf489b1d5edaf51b2cfbdb6f69042ea30f367d7\"" Feb 9 19:58:09.600843 systemd[1]: Started cri-containerd-9978787fa2acfe9cc83a9b5b0cf489b1d5edaf51b2cfbdb6f69042ea30f367d7.scope. 
Feb 9 19:58:09.623194 env[1156]: time="2024-02-09T19:58:09.623170328Z" level=info msg="StartContainer for \"9978787fa2acfe9cc83a9b5b0cf489b1d5edaf51b2cfbdb6f69042ea30f367d7\" returns successfully" Feb 9 19:58:09.639670 systemd[1]: cri-containerd-9978787fa2acfe9cc83a9b5b0cf489b1d5edaf51b2cfbdb6f69042ea30f367d7.scope: Deactivated successfully. Feb 9 19:58:09.838351 kubelet[1535]: E0209 19:58:09.838309 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:58:10.053604 env[1156]: time="2024-02-09T19:58:10.053543797Z" level=info msg="shim disconnected" id=9978787fa2acfe9cc83a9b5b0cf489b1d5edaf51b2cfbdb6f69042ea30f367d7 Feb 9 19:58:10.053604 env[1156]: time="2024-02-09T19:58:10.053577051Z" level=warning msg="cleaning up after shim disconnected" id=9978787fa2acfe9cc83a9b5b0cf489b1d5edaf51b2cfbdb6f69042ea30f367d7 namespace=k8s.io Feb 9 19:58:10.053604 env[1156]: time="2024-02-09T19:58:10.053583847Z" level=info msg="cleaning up dead shim" Feb 9 19:58:10.058263 env[1156]: time="2024-02-09T19:58:10.058238297Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:58:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3479 runtime=io.containerd.runc.v2\n" Feb 9 19:58:10.070338 env[1156]: time="2024-02-09T19:58:10.070314964Z" level=info msg="StopPodSandbox for \"5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3\"" Feb 9 19:58:10.070568 env[1156]: time="2024-02-09T19:58:10.070463734Z" level=info msg="TearDown network for sandbox \"5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3\" successfully" Feb 9 19:58:10.070623 env[1156]: time="2024-02-09T19:58:10.070611581Z" level=info msg="StopPodSandbox for \"5b59679fa48a618e9f8c87f1ec96e4b9354412b8472f9b952df3ca137ecb9af3\" returns successfully" Feb 9 19:58:10.070889 kubelet[1535]: I0209 19:58:10.070873 1535 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a8426f76-35a4-40e1-8daf-9d3354d92a73 
path="/var/lib/kubelet/pods/a8426f76-35a4-40e1-8daf-9d3354d92a73/volumes" Feb 9 19:58:10.197192 env[1156]: time="2024-02-09T19:58:10.196820420Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:58:10.226647 kubelet[1535]: I0209 19:58:10.226623 1535 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-g4c5d" podStartSLOduration=-9.223372032628218e+09 pod.CreationTimestamp="2024-02-09 19:58:06 +0000 UTC" firstStartedPulling="2024-02-09 19:58:07.096616717 +0000 UTC m=+79.831645022" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:58:10.22613005 +0000 UTC m=+82.961158366" watchObservedRunningTime="2024-02-09 19:58:10.226557774 +0000 UTC m=+82.961586084" Feb 9 19:58:10.237611 env[1156]: time="2024-02-09T19:58:10.237548027Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e\"" Feb 9 19:58:10.238302 env[1156]: time="2024-02-09T19:58:10.238277192Z" level=info msg="StartContainer for \"e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e\"" Feb 9 19:58:10.250901 systemd[1]: Started cri-containerd-e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e.scope. Feb 9 19:58:10.272037 env[1156]: time="2024-02-09T19:58:10.272000375Z" level=info msg="StartContainer for \"e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e\" returns successfully" Feb 9 19:58:10.284258 systemd[1]: cri-containerd-e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e.scope: Deactivated successfully. 
Feb 9 19:58:10.301975 kubelet[1535]: W0209 19:58:10.301948 1535 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8426f76_35a4_40e1_8daf_9d3354d92a73.slice/cri-containerd-d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7.scope WatchSource:0}: container "d55e6dc2e51753b41146c666fe357490c28f4eb4d8f0f30a66f428a1dcdd6ef7" in namespace "k8s.io": not found
Feb 9 19:58:10.309060 env[1156]: time="2024-02-09T19:58:10.309031660Z" level=info msg="shim disconnected" id=e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e
Feb 9 19:58:10.309173 env[1156]: time="2024-02-09T19:58:10.309161430Z" level=warning msg="cleaning up after shim disconnected" id=e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e namespace=k8s.io
Feb 9 19:58:10.309219 env[1156]: time="2024-02-09T19:58:10.309209976Z" level=info msg="cleaning up dead shim"
Feb 9 19:58:10.309479 env[1156]: time="2024-02-09T19:58:10.309418526Z" level=error msg="collecting metrics for e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e" error="ttrpc: closed: unknown"
Feb 9 19:58:10.314214 env[1156]: time="2024-02-09T19:58:10.314194431Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:58:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3543 runtime=io.containerd.runc.v2\n"
Feb 9 19:58:10.528323 systemd[1]: run-containerd-runc-k8s.io-28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45-runc.1VHc3V.mount: Deactivated successfully.
Feb 9 19:58:10.839212 kubelet[1535]: E0209 19:58:10.839174 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:11.202461 env[1156]: time="2024-02-09T19:58:11.202382303Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:58:11.210279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070969009.mount: Deactivated successfully.
Feb 9 19:58:11.213784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4137604131.mount: Deactivated successfully.
Feb 9 19:58:11.216047 env[1156]: time="2024-02-09T19:58:11.216015316Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eac577ef341a627cdd1c4ac8479a848e3f8b431cc07f36824318fdab7b960df7\""
Feb 9 19:58:11.216523 env[1156]: time="2024-02-09T19:58:11.216509106Z" level=info msg="StartContainer for \"eac577ef341a627cdd1c4ac8479a848e3f8b431cc07f36824318fdab7b960df7\""
Feb 9 19:58:11.227552 systemd[1]: Started cri-containerd-eac577ef341a627cdd1c4ac8479a848e3f8b431cc07f36824318fdab7b960df7.scope.
Feb 9 19:58:11.256470 env[1156]: time="2024-02-09T19:58:11.256440022Z" level=info msg="StartContainer for \"eac577ef341a627cdd1c4ac8479a848e3f8b431cc07f36824318fdab7b960df7\" returns successfully"
Feb 9 19:58:11.276999 systemd[1]: cri-containerd-eac577ef341a627cdd1c4ac8479a848e3f8b431cc07f36824318fdab7b960df7.scope: Deactivated successfully.
Feb 9 19:58:11.310115 env[1156]: time="2024-02-09T19:58:11.310081122Z" level=info msg="shim disconnected" id=eac577ef341a627cdd1c4ac8479a848e3f8b431cc07f36824318fdab7b960df7
Feb 9 19:58:11.310115 env[1156]: time="2024-02-09T19:58:11.310110261Z" level=warning msg="cleaning up after shim disconnected" id=eac577ef341a627cdd1c4ac8479a848e3f8b431cc07f36824318fdab7b960df7 namespace=k8s.io
Feb 9 19:58:11.310115 env[1156]: time="2024-02-09T19:58:11.310116773Z" level=info msg="cleaning up dead shim"
Feb 9 19:58:11.315139 env[1156]: time="2024-02-09T19:58:11.315117797Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:58:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3605 runtime=io.containerd.runc.v2\n"
Feb 9 19:58:11.750455 kubelet[1535]: I0209 19:58:11.750102 1535 setters.go:548] "Node became not ready" node="10.67.124.136" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:58:11.750056297 +0000 UTC m=+84.485084602 LastTransitionTime:2024-02-09 19:58:11.750056297 +0000 UTC m=+84.485084602 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 19:58:11.839301 kubelet[1535]: E0209 19:58:11.839279 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:12.204622 env[1156]: time="2024-02-09T19:58:12.204595486Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:58:12.212154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170937681.mount: Deactivated successfully.
Feb 9 19:58:12.216516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870372292.mount: Deactivated successfully.
Feb 9 19:58:12.218663 env[1156]: time="2024-02-09T19:58:12.218636200Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee5b19cc344b833facd1df7e77a055f221e7988e2aa0ae0e80c0ad70b2e90640\""
Feb 9 19:58:12.219097 env[1156]: time="2024-02-09T19:58:12.219060823Z" level=info msg="StartContainer for \"ee5b19cc344b833facd1df7e77a055f221e7988e2aa0ae0e80c0ad70b2e90640\""
Feb 9 19:58:12.229488 systemd[1]: Started cri-containerd-ee5b19cc344b833facd1df7e77a055f221e7988e2aa0ae0e80c0ad70b2e90640.scope.
Feb 9 19:58:12.248395 systemd[1]: cri-containerd-ee5b19cc344b833facd1df7e77a055f221e7988e2aa0ae0e80c0ad70b2e90640.scope: Deactivated successfully.
Feb 9 19:58:12.249650 env[1156]: time="2024-02-09T19:58:12.249627719Z" level=info msg="StartContainer for \"ee5b19cc344b833facd1df7e77a055f221e7988e2aa0ae0e80c0ad70b2e90640\" returns successfully"
Feb 9 19:58:12.262745 env[1156]: time="2024-02-09T19:58:12.262717473Z" level=info msg="shim disconnected" id=ee5b19cc344b833facd1df7e77a055f221e7988e2aa0ae0e80c0ad70b2e90640
Feb 9 19:58:12.262922 env[1156]: time="2024-02-09T19:58:12.262910433Z" level=warning msg="cleaning up after shim disconnected" id=ee5b19cc344b833facd1df7e77a055f221e7988e2aa0ae0e80c0ad70b2e90640 namespace=k8s.io
Feb 9 19:58:12.262978 env[1156]: time="2024-02-09T19:58:12.262968158Z" level=info msg="cleaning up dead shim"
Feb 9 19:58:12.268937 env[1156]: time="2024-02-09T19:58:12.268908617Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:58:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3661 runtime=io.containerd.runc.v2\n"
Feb 9 19:58:12.839427 kubelet[1535]: E0209 19:58:12.839396 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:12.880043 kubelet[1535]: E0209 19:58:12.880020 1535 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:58:13.208631 env[1156]: time="2024-02-09T19:58:13.208533852Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:58:13.267456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537030660.mount: Deactivated successfully.
Feb 9 19:58:13.270169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2201476360.mount: Deactivated successfully.
Feb 9 19:58:13.291866 env[1156]: time="2024-02-09T19:58:13.291832563Z" level=info msg="CreateContainer within sandbox \"28dad02840e166e9e6ee418428668d1ae1a8f4094419ed054bdaa7d36d627f45\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"376ff8a1d57c804b6d8816723cbd9c0d853b1e73fac9860d97cc74712573d7f6\""
Feb 9 19:58:13.292309 env[1156]: time="2024-02-09T19:58:13.292295118Z" level=info msg="StartContainer for \"376ff8a1d57c804b6d8816723cbd9c0d853b1e73fac9860d97cc74712573d7f6\""
Feb 9 19:58:13.303491 systemd[1]: Started cri-containerd-376ff8a1d57c804b6d8816723cbd9c0d853b1e73fac9860d97cc74712573d7f6.scope.
Feb 9 19:58:13.344419 env[1156]: time="2024-02-09T19:58:13.344380695Z" level=info msg="StartContainer for \"376ff8a1d57c804b6d8816723cbd9c0d853b1e73fac9860d97cc74712573d7f6\" returns successfully"
Feb 9 19:58:13.412767 kubelet[1535]: W0209 19:58:13.411648 1535 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode038386b_dc4d_4e76_a306_e5160870bb0c.slice/cri-containerd-9978787fa2acfe9cc83a9b5b0cf489b1d5edaf51b2cfbdb6f69042ea30f367d7.scope WatchSource:0}: task 9978787fa2acfe9cc83a9b5b0cf489b1d5edaf51b2cfbdb6f69042ea30f367d7 not found: not found
Feb 9 19:58:13.839775 kubelet[1535]: E0209 19:58:13.839738 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:14.218623 kubelet[1535]: I0209 19:58:14.218532 1535 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-sw9wm" podStartSLOduration=5.218486998 pod.CreationTimestamp="2024-02-09 19:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:58:14.21837038 +0000 UTC m=+86.953398697" watchObservedRunningTime="2024-02-09 19:58:14.218486998 +0000 UTC m=+86.953515318"
Feb 9 19:58:14.716450 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 19:58:14.841033 kubelet[1535]: E0209 19:58:14.841001 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:15.069751 systemd[1]: run-containerd-runc-k8s.io-376ff8a1d57c804b6d8816723cbd9c0d853b1e73fac9860d97cc74712573d7f6-runc.sNLhFn.mount: Deactivated successfully.
Feb 9 19:58:15.161176 kubelet[1535]: E0209 19:58:15.161147 1535 upgradeaware.go:426] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35314->127.0.0.1:46745: write tcp 127.0.0.1:35314->127.0.0.1:46745: write: connection reset by peer
Feb 9 19:58:15.842149 kubelet[1535]: E0209 19:58:15.842106 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:16.518650 kubelet[1535]: W0209 19:58:16.518621 1535 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode038386b_dc4d_4e76_a306_e5160870bb0c.slice/cri-containerd-e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e.scope WatchSource:0}: task e849e7e70d5691eb7cbcdb44eebb2edb4357f20dabac32713cb040113b07571e not found: not found
Feb 9 19:58:16.843007 kubelet[1535]: E0209 19:58:16.842983 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:16.997829 systemd-networkd[1065]: lxc_health: Link UP
Feb 9 19:58:17.019518 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:58:17.019956 systemd-networkd[1065]: lxc_health: Gained carrier
Feb 9 19:58:17.843719 kubelet[1535]: E0209 19:58:17.843691 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:18.757580 systemd-networkd[1065]: lxc_health: Gained IPv6LL
Feb 9 19:58:18.843814 kubelet[1535]: E0209 19:58:18.843788 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:19.624232 kubelet[1535]: W0209 19:58:19.624205 1535 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode038386b_dc4d_4e76_a306_e5160870bb0c.slice/cri-containerd-eac577ef341a627cdd1c4ac8479a848e3f8b431cc07f36824318fdab7b960df7.scope WatchSource:0}: task eac577ef341a627cdd1c4ac8479a848e3f8b431cc07f36824318fdab7b960df7 not found: not found
Feb 9 19:58:19.661530 systemd[1]: run-containerd-runc-k8s.io-376ff8a1d57c804b6d8816723cbd9c0d853b1e73fac9860d97cc74712573d7f6-runc.7YjEyf.mount: Deactivated successfully.
Feb 9 19:58:19.845053 kubelet[1535]: E0209 19:58:19.844949 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:20.845580 kubelet[1535]: E0209 19:58:20.845550 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:21.800803 systemd[1]: run-containerd-runc-k8s.io-376ff8a1d57c804b6d8816723cbd9c0d853b1e73fac9860d97cc74712573d7f6-runc.YLCU6N.mount: Deactivated successfully.
Feb 9 19:58:21.831912 kubelet[1535]: E0209 19:58:21.831871 1535 upgradeaware.go:426] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56846->127.0.0.1:46745: write tcp 127.0.0.1:56846->127.0.0.1:46745: write: broken pipe
Feb 9 19:58:21.846156 kubelet[1535]: E0209 19:58:21.846128 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:22.732558 kubelet[1535]: W0209 19:58:22.732525 1535 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode038386b_dc4d_4e76_a306_e5160870bb0c.slice/cri-containerd-ee5b19cc344b833facd1df7e77a055f221e7988e2aa0ae0e80c0ad70b2e90640.scope WatchSource:0}: task ee5b19cc344b833facd1df7e77a055f221e7988e2aa0ae0e80c0ad70b2e90640 not found: not found
Feb 9 19:58:22.847186 kubelet[1535]: E0209 19:58:22.847156 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:23.848212 kubelet[1535]: E0209 19:58:23.848188 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:23.873937 systemd[1]: run-containerd-runc-k8s.io-376ff8a1d57c804b6d8816723cbd9c0d853b1e73fac9860d97cc74712573d7f6-runc.7Xvu41.mount: Deactivated successfully.
Feb 9 19:58:24.848821 kubelet[1535]: E0209 19:58:24.848787 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:58:25.849126 kubelet[1535]: E0209 19:58:25.849091 1535 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"