Dec 13 04:02:22.563830 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 04:02:22.563844 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 04:02:22.563851 kernel: BIOS-provided physical RAM map:
Dec 13 04:02:22.563855 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 04:02:22.563858 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 04:02:22.563862 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 04:02:22.563867 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 04:02:22.563871 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 04:02:22.563875 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819cbfff] usable
Dec 13 04:02:22.563878 kernel: BIOS-e820: [mem 0x00000000819cc000-0x00000000819ccfff] ACPI NVS
Dec 13 04:02:22.563883 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] reserved
Dec 13 04:02:22.563887 kernel: BIOS-e820: [mem 0x00000000819ce000-0x000000008afccfff] usable
Dec 13 04:02:22.563891 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Dec 13 04:02:22.563895 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Dec 13 04:02:22.563900 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Dec 13 04:02:22.563905 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Dec 13 04:02:22.563909 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Dec 13 04:02:22.563913 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Dec 13 04:02:22.563918 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 04:02:22.563922 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 04:02:22.563926 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 04:02:22.563930 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 04:02:22.563934 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 04:02:22.563938 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Dec 13 04:02:22.563942 kernel: NX (Execute Disable) protection: active
Dec 13 04:02:22.563947 kernel: SMBIOS 3.2.1 present.
Dec 13 04:02:22.563952 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022
Dec 13 04:02:22.563956 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 04:02:22.563960 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 04:02:22.563964 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 04:02:22.563969 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 04:02:22.563974 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Dec 13 04:02:22.563978 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 04:02:22.563982 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Dec 13 04:02:22.563987 kernel: Using GB pages for direct mapping
Dec 13 04:02:22.563991 kernel: ACPI: Early table checksum verification disabled
Dec 13 04:02:22.563996 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 04:02:22.564001 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Dec 13 04:02:22.564005 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Dec 13 04:02:22.564009 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 04:02:22.564016 kernel: ACPI: FACS 0x000000008C66CF80 000040
Dec 13 04:02:22.564020 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Dec 13 04:02:22.564026 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Dec 13 04:02:22.564031 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Dec 13 04:02:22.564036 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 04:02:22.564041 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 04:02:22.564045 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Dec 13 04:02:22.564050 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Dec 13 04:02:22.564055 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Dec 13 04:02:22.564059 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 04:02:22.564065 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 04:02:22.564069 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Dec 13 04:02:22.564074 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 04:02:22.564079 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 04:02:22.564084 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 04:02:22.564088 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 04:02:22.564093 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 04:02:22.564098 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Dec 13 04:02:22.564103 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 04:02:22.564108 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Dec 13 04:02:22.564113 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Dec 13 04:02:22.564117 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Dec 13 04:02:22.564122 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Dec 13 04:02:22.564127 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Dec 13 04:02:22.564132 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Dec 13 04:02:22.564136 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Dec 13 04:02:22.564141 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Dec 13 04:02:22.564146 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Dec 13 04:02:22.564151 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Dec 13 04:02:22.564156 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Dec 13 04:02:22.564161 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Dec 13 04:02:22.564165 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Dec 13 04:02:22.564170 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Dec 13 04:02:22.564175 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Dec 13 04:02:22.564180 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Dec 13 04:02:22.564185 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Dec 13 04:02:22.564190 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Dec 13 04:02:22.564194 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Dec 13 04:02:22.564199 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Dec 13 04:02:22.564204 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Dec 13 04:02:22.564208 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Dec 13 04:02:22.564213 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Dec 13 04:02:22.564218 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Dec 13 04:02:22.564222 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Dec 13 04:02:22.564228 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Dec 13 04:02:22.564232 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Dec 13 04:02:22.564237 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Dec 13 04:02:22.564242 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Dec 13 04:02:22.564247 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Dec 13 04:02:22.564251 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Dec 13 04:02:22.564256 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Dec 13 04:02:22.564261 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Dec 13 04:02:22.564265 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Dec 13 04:02:22.564271 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Dec 13 04:02:22.564275 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Dec 13 04:02:22.564280 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Dec 13 04:02:22.564285 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Dec 13 04:02:22.564289 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Dec 13 04:02:22.564294 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Dec 13 04:02:22.564299 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Dec 13 04:02:22.564304 kernel: No NUMA configuration found
Dec 13 04:02:22.564308 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Dec 13 04:02:22.564314 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Dec 13 04:02:22.564319 kernel: Zone ranges:
Dec 13 04:02:22.564323 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 04:02:22.564328 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 04:02:22.564333 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Dec 13 04:02:22.564338 kernel: Movable zone start for each node
Dec 13 04:02:22.564342 kernel: Early memory node ranges
Dec 13 04:02:22.564347 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 04:02:22.564352 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 04:02:22.564356 kernel: node 0: [mem 0x0000000040400000-0x00000000819cbfff]
Dec 13 04:02:22.564362 kernel: node 0: [mem 0x00000000819ce000-0x000000008afccfff]
Dec 13 04:02:22.564367 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Dec 13 04:02:22.564371 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Dec 13 04:02:22.564376 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Dec 13 04:02:22.564381 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Dec 13 04:02:22.564386 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 04:02:22.564394 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 04:02:22.564399 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 04:02:22.564404 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 04:02:22.564409 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Dec 13 04:02:22.564415 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Dec 13 04:02:22.564420 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Dec 13 04:02:22.564425 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Dec 13 04:02:22.564430 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 04:02:22.564435 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 04:02:22.564443 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 04:02:22.564448 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 04:02:22.564454 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 04:02:22.564459 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 04:02:22.564464 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 04:02:22.564469 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 04:02:22.564474 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 04:02:22.564479 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 04:02:22.564484 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 04:02:22.564489 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 04:02:22.564494 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 04:02:22.564500 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 04:02:22.564505 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 04:02:22.564510 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 04:02:22.564515 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 04:02:22.564520 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 04:02:22.564525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 04:02:22.564530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 04:02:22.564535 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 04:02:22.564540 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 04:02:22.564546 kernel: TSC deadline timer available
Dec 13 04:02:22.564551 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 04:02:22.564556 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Dec 13 04:02:22.564561 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 04:02:22.564566 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 04:02:22.564571 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 04:02:22.564576 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 04:02:22.564581 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 04:02:22.564586 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 04:02:22.564592 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Dec 13 04:02:22.564597 kernel: Policy zone: Normal
Dec 13 04:02:22.564603 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 04:02:22.564608 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 04:02:22.564613 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 04:02:22.564618 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 04:02:22.564623 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 04:02:22.564629 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 730116K reserved, 0K cma-reserved)
Dec 13 04:02:22.564634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 04:02:22.564639 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 04:02:22.564644 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 04:02:22.564649 kernel: rcu: Hierarchical RCU implementation.
Dec 13 04:02:22.564655 kernel: rcu: RCU event tracing is enabled.
Dec 13 04:02:22.564660 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 04:02:22.564665 kernel: Rude variant of Tasks RCU enabled.
Dec 13 04:02:22.564670 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 04:02:22.564675 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 04:02:22.564681 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 04:02:22.564686 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 04:02:22.564691 kernel: random: crng init done
Dec 13 04:02:22.564696 kernel: Console: colour dummy device 80x25
Dec 13 04:02:22.564701 kernel: printk: console [tty0] enabled
Dec 13 04:02:22.564706 kernel: printk: console [ttyS1] enabled
Dec 13 04:02:22.564711 kernel: ACPI: Core revision 20210730
Dec 13 04:02:22.564716 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Dec 13 04:02:22.564721 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 04:02:22.564727 kernel: DMAR: Host address width 39
Dec 13 04:02:22.564732 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 04:02:22.564737 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 04:02:22.564742 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Dec 13 04:02:22.564747 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Dec 13 04:02:22.564752 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 04:02:22.564757 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 04:02:22.564762 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 04:02:22.564767 kernel: x2apic enabled
Dec 13 04:02:22.564773 kernel: Switched APIC routing to cluster x2apic.
Dec 13 04:02:22.564778 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 04:02:22.564783 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 04:02:22.564788 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 04:02:22.564793 kernel: process: using mwait in idle threads
Dec 13 04:02:22.564798 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 04:02:22.564803 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 04:02:22.564808 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 04:02:22.564813 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 04:02:22.564819 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 04:02:22.564824 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 04:02:22.564829 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 04:02:22.564834 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 04:02:22.564839 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 04:02:22.564844 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 04:02:22.564849 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 04:02:22.564854 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 04:02:22.564860 kernel: TAA: Mitigation: TSX disabled
Dec 13 04:02:22.564865 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 04:02:22.564870 kernel: SRBDS: Mitigation: Microcode
Dec 13 04:02:22.564875 kernel: GDS: Vulnerable: No microcode
Dec 13 04:02:22.564880 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 04:02:22.564885 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 04:02:22.564890 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 04:02:22.564895 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 04:02:22.564900 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 04:02:22.564905 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 04:02:22.564910 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 04:02:22.564915 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 04:02:22.564920 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 04:02:22.564925 kernel: Freeing SMP alternatives memory: 32K
Dec 13 04:02:22.564931 kernel: pid_max: default: 32768 minimum: 301
Dec 13 04:02:22.564936 kernel: LSM: Security Framework initializing
Dec 13 04:02:22.564941 kernel: SELinux: Initializing.
Dec 13 04:02:22.564946 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 04:02:22.564951 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 04:02:22.564956 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 04:02:22.564961 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 04:02:22.564966 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 04:02:22.564971 kernel: ... version: 4
Dec 13 04:02:22.564976 kernel: ... bit width: 48
Dec 13 04:02:22.564981 kernel: ... generic registers: 4
Dec 13 04:02:22.564987 kernel: ... value mask: 0000ffffffffffff
Dec 13 04:02:22.564992 kernel: ... max period: 00007fffffffffff
Dec 13 04:02:22.564997 kernel: ... fixed-purpose events: 3
Dec 13 04:02:22.565002 kernel: ... event mask: 000000070000000f
Dec 13 04:02:22.565007 kernel: signal: max sigframe size: 2032
Dec 13 04:02:22.565012 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 04:02:22.565017 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 04:02:22.565022 kernel: smp: Bringing up secondary CPUs ...
Dec 13 04:02:22.565027 kernel: x86: Booting SMP configuration:
Dec 13 04:02:22.565033 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Dec 13 04:02:22.565038 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 04:02:22.565043 kernel: #9 #10 #11 #12 #13 #14 #15
Dec 13 04:02:22.565048 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 04:02:22.565053 kernel: smpboot: Max logical packages: 1
Dec 13 04:02:22.565058 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 04:02:22.565063 kernel: devtmpfs: initialized
Dec 13 04:02:22.565068 kernel: x86/mm: Memory block size: 128MB
Dec 13 04:02:22.565073 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cc000-0x819ccfff] (4096 bytes)
Dec 13 04:02:22.565079 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Dec 13 04:02:22.565084 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 04:02:22.565089 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 04:02:22.565094 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 04:02:22.565099 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 04:02:22.565104 kernel: audit: initializing netlink subsys (disabled)
Dec 13 04:02:22.565109 kernel: audit: type=2000 audit(1734062537.041:1): state=initialized audit_enabled=0 res=1
Dec 13 04:02:22.565114 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 04:02:22.565119 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 04:02:22.565125 kernel: cpuidle: using governor menu
Dec 13 04:02:22.565130 kernel: ACPI: bus type PCI registered
Dec 13 04:02:22.565135 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 04:02:22.565140 kernel: dca service started, version 1.12.1
Dec 13 04:02:22.565145 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 04:02:22.565150 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Dec 13 04:02:22.565155 kernel: PCI: Using configuration type 1 for base access
Dec 13 04:02:22.565160 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 04:02:22.565165 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 04:02:22.565171 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 04:02:22.565176 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 04:02:22.565181 kernel: ACPI: Added _OSI(Module Device)
Dec 13 04:02:22.565186 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 04:02:22.565191 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 04:02:22.565196 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 04:02:22.565201 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 04:02:22.565206 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 04:02:22.565211 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 04:02:22.565216 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 04:02:22.565221 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 04:02:22.565226 kernel: ACPI: SSDT 0xFFFF944040219D00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Dec 13 04:02:22.565232 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Dec 13 04:02:22.565236 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 04:02:22.565241 kernel: ACPI: SSDT 0xFFFF944041AE5800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Dec 13 04:02:22.565246 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 04:02:22.565251 kernel: ACPI: SSDT 0xFFFF944041A5D000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Dec 13 04:02:22.565256 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 04:02:22.565262 kernel: ACPI: SSDT 0xFFFF944041B4B000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Dec 13 04:02:22.565267 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 04:02:22.565272 kernel: ACPI: SSDT 0xFFFF94404014B000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Dec 13 04:02:22.565277 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 04:02:22.565282 kernel: ACPI: SSDT 0xFFFF944041AE3000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Dec 13 04:02:22.565287 kernel: ACPI: Interpreter enabled
Dec 13 04:02:22.565292 kernel: ACPI: PM: (supports S0 S5)
Dec 13 04:02:22.565297 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 04:02:22.565302 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 04:02:22.565307 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 04:02:22.565312 kernel: HEST: Table parsing has been initialized.
Dec 13 04:02:22.565317 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 04:02:22.565322 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 04:02:22.565327 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 04:02:22.565333 kernel: ACPI: PM: Power Resource [USBC]
Dec 13 04:02:22.565338 kernel: ACPI: PM: Power Resource [V0PR]
Dec 13 04:02:22.565343 kernel: ACPI: PM: Power Resource [V1PR]
Dec 13 04:02:22.565348 kernel: ACPI: PM: Power Resource [V2PR]
Dec 13 04:02:22.565352 kernel: ACPI: PM: Power Resource [WRST]
Dec 13 04:02:22.565358 kernel: ACPI: PM: Power Resource [FN00]
Dec 13 04:02:22.565363 kernel: ACPI: PM: Power Resource [FN01]
Dec 13 04:02:22.565368 kernel: ACPI: PM: Power Resource [FN02]
Dec 13 04:02:22.565373 kernel: ACPI: PM: Power Resource [FN03]
Dec 13 04:02:22.565378 kernel: ACPI: PM: Power Resource [FN04]
Dec 13 04:02:22.565383 kernel: ACPI: PM: Power Resource [PIN]
Dec 13 04:02:22.565388 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 04:02:22.565454 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 04:02:22.565503 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 04:02:22.565545 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 04:02:22.565552 kernel: PCI host bridge to bus 0000:00
Dec 13 04:02:22.565595 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 04:02:22.565633 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 04:02:22.565671 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 04:02:22.565707 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Dec 13 04:02:22.565745 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 04:02:22.565782 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 04:02:22.565834 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 04:02:22.565884 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 04:02:22.565929 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 04:02:22.565975 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 04:02:22.566020 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Dec 13 04:02:22.566066 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 04:02:22.566109 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Dec 13 04:02:22.566156 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 04:02:22.566200 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Dec 13 04:02:22.566244 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 04:02:22.566291 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 04:02:22.566334 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Dec 13 04:02:22.566376 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Dec 13 04:02:22.566421 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 04:02:22.566467 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 04:02:22.566515 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 04:02:22.566559 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 04:02:22.566604 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 04:02:22.566647 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Dec 13 04:02:22.566688 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 04:02:22.566733 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 04:02:22.566775 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Dec 13 04:02:22.566818 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 04:02:22.566864 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 04:02:22.566907 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Dec 13 04:02:22.566948 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 04:02:22.566994 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 04:02:22.567036 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Dec 13 04:02:22.567080 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Dec 13 04:02:22.567128 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Dec 13 04:02:22.567172 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Dec 13 04:02:22.567214 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Dec 13 04:02:22.567255 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Dec 13 04:02:22.567296 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 04:02:22.567343 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 04:02:22.567386 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 04:02:22.567433 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 04:02:22.567480 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 04:02:22.567529 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 04:02:22.567586 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 04:02:22.567631 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 04:02:22.567673 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 04:02:22.567719 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Dec 13 04:02:22.567763 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Dec 13 04:02:22.567809 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 04:02:22.567851 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 04:02:22.567898 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 04:02:22.567944 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 04:02:22.567987 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Dec 13 04:02:22.568027 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Dec 13 04:02:22.568075 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 04:02:22.568117 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 04:02:22.568167 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 04:02:22.568210 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 04:02:22.568254 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Dec 13 04:02:22.568296 kernel: pci 0000:01:00.0: PME# supported from D3cold
Dec 13 04:02:22.568340 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 04:02:22.568382 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 04:02:22.568430 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13
04:02:22.568521 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Dec 13 04:02:22.568564 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Dec 13 04:02:22.568607 kernel: pci 0000:01:00.1: PME# supported from D3cold Dec 13 04:02:22.568650 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Dec 13 04:02:22.568693 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Dec 13 04:02:22.568735 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 04:02:22.568779 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Dec 13 04:02:22.568823 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 04:02:22.568865 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Dec 13 04:02:22.568913 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Dec 13 04:02:22.568957 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Dec 13 04:02:22.568999 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Dec 13 04:02:22.569043 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Dec 13 04:02:22.569086 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Dec 13 04:02:22.569131 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 04:02:22.569227 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Dec 13 04:02:22.569269 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Dec 13 04:02:22.569311 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Dec 13 04:02:22.569357 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Dec 13 04:02:22.569401 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Dec 13 04:02:22.569446 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Dec 13 04:02:22.569528 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Dec 13 04:02:22.569573 kernel: pci 0000:04:00.0: reg 0x1c: [mem 
0x95380000-0x95383fff] Dec 13 04:02:22.569617 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Dec 13 04:02:22.569658 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Dec 13 04:02:22.569701 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Dec 13 04:02:22.569742 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Dec 13 04:02:22.569784 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Dec 13 04:02:22.569831 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Dec 13 04:02:22.569874 kernel: pci 0000:06:00.0: enabling Extended Tags Dec 13 04:02:22.569920 kernel: pci 0000:06:00.0: supports D1 D2 Dec 13 04:02:22.569963 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 04:02:22.570005 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Dec 13 04:02:22.570046 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Dec 13 04:02:22.570088 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Dec 13 04:02:22.570136 kernel: pci_bus 0000:07: extended config space not accessible Dec 13 04:02:22.570187 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Dec 13 04:02:22.570234 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Dec 13 04:02:22.570281 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Dec 13 04:02:22.570326 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Dec 13 04:02:22.570372 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 04:02:22.570416 kernel: pci 0000:07:00.0: supports D1 D2 Dec 13 04:02:22.570490 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 04:02:22.570553 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Dec 13 04:02:22.570598 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Dec 13 04:02:22.570642 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Dec 13 04:02:22.570649 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 
Dec 13 04:02:22.570655 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Dec 13 04:02:22.570660 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Dec 13 04:02:22.570667 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Dec 13 04:02:22.570672 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Dec 13 04:02:22.570677 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Dec 13 04:02:22.570682 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Dec 13 04:02:22.570688 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Dec 13 04:02:22.570694 kernel: iommu: Default domain type: Translated Dec 13 04:02:22.570699 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 04:02:22.570742 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Dec 13 04:02:22.570789 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 04:02:22.570833 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Dec 13 04:02:22.570840 kernel: vgaarb: loaded Dec 13 04:02:22.570845 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 04:02:22.570851 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 04:02:22.570857 kernel: PTP clock support registered Dec 13 04:02:22.570863 kernel: PCI: Using ACPI for IRQ routing Dec 13 04:02:22.570868 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 04:02:22.570873 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Dec 13 04:02:22.570879 kernel: e820: reserve RAM buffer [mem 0x819cc000-0x83ffffff] Dec 13 04:02:22.570884 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Dec 13 04:02:22.570889 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Dec 13 04:02:22.570894 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Dec 13 04:02:22.570900 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Dec 13 04:02:22.570905 kernel: clocksource: Switched to clocksource tsc-early Dec 13 04:02:22.570910 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 04:02:22.570916 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 04:02:22.570921 kernel: pnp: PnP ACPI init Dec 13 04:02:22.570964 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Dec 13 04:02:22.571006 kernel: pnp 00:02: [dma 0 disabled] Dec 13 04:02:22.571047 kernel: pnp 00:03: [dma 0 disabled] Dec 13 04:02:22.571092 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Dec 13 04:02:22.571130 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Dec 13 04:02:22.571170 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Dec 13 04:02:22.571211 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Dec 13 04:02:22.571249 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Dec 13 04:02:22.571286 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Dec 13 04:02:22.571325 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Dec 13 04:02:22.571362 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Dec 13 04:02:22.571399 kernel: system 00:06: [mem 
0xfed90000-0xfed93fff] could not be reserved Dec 13 04:02:22.571436 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Dec 13 04:02:22.571513 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Dec 13 04:02:22.571553 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Dec 13 04:02:22.571591 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Dec 13 04:02:22.571630 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Dec 13 04:02:22.571667 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Dec 13 04:02:22.571705 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Dec 13 04:02:22.571741 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Dec 13 04:02:22.571779 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Dec 13 04:02:22.571818 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Dec 13 04:02:22.571826 kernel: pnp: PnP ACPI: found 10 devices Dec 13 04:02:22.571832 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 04:02:22.571838 kernel: NET: Registered PF_INET protocol family Dec 13 04:02:22.571843 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 04:02:22.571849 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 04:02:22.571854 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 04:02:22.571859 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 04:02:22.571864 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 04:02:22.571870 kernel: TCP: Hash tables configured (established 262144 bind 65536) Dec 13 04:02:22.571875 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 04:02:22.571881 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 
bytes, linear) Dec 13 04:02:22.571886 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 04:02:22.571892 kernel: NET: Registered PF_XDP protocol family Dec 13 04:02:22.571933 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Dec 13 04:02:22.571975 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Dec 13 04:02:22.572016 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Dec 13 04:02:22.572062 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Dec 13 04:02:22.572104 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Dec 13 04:02:22.572150 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Dec 13 04:02:22.572194 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Dec 13 04:02:22.572237 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 04:02:22.572279 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Dec 13 04:02:22.572321 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 04:02:22.572363 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Dec 13 04:02:22.572407 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Dec 13 04:02:22.572453 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Dec 13 04:02:22.572540 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Dec 13 04:02:22.572583 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Dec 13 04:02:22.572626 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Dec 13 04:02:22.572668 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Dec 13 04:02:22.572710 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Dec 13 04:02:22.572755 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Dec 13 04:02:22.572798 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Dec 13 04:02:22.572841 kernel: pci 0000:06:00.0: bridge window [mem 
0x94000000-0x950fffff] Dec 13 04:02:22.572882 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Dec 13 04:02:22.572924 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Dec 13 04:02:22.572966 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Dec 13 04:02:22.573004 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Dec 13 04:02:22.573041 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 04:02:22.573078 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 04:02:22.573115 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 04:02:22.573152 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Dec 13 04:02:22.573188 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Dec 13 04:02:22.573231 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Dec 13 04:02:22.573270 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 04:02:22.573315 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Dec 13 04:02:22.573355 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Dec 13 04:02:22.573397 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Dec 13 04:02:22.573436 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Dec 13 04:02:22.573523 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Dec 13 04:02:22.573563 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Dec 13 04:02:22.573604 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Dec 13 04:02:22.573645 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Dec 13 04:02:22.573653 kernel: PCI: CLS 64 bytes, default 64 Dec 13 04:02:22.573659 kernel: DMAR: No ATSR found Dec 13 04:02:22.573664 kernel: DMAR: No SATC found Dec 13 04:02:22.573669 kernel: DMAR: dmar0: Using Queued invalidation Dec 13 04:02:22.573710 kernel: pci 0000:00:00.0: Adding to iommu group 0 Dec 13 
04:02:22.573754 kernel: pci 0000:00:01.0: Adding to iommu group 1 Dec 13 04:02:22.573795 kernel: pci 0000:00:08.0: Adding to iommu group 2 Dec 13 04:02:22.573837 kernel: pci 0000:00:12.0: Adding to iommu group 3 Dec 13 04:02:22.573881 kernel: pci 0000:00:14.0: Adding to iommu group 4 Dec 13 04:02:22.573922 kernel: pci 0000:00:14.2: Adding to iommu group 4 Dec 13 04:02:22.573963 kernel: pci 0000:00:15.0: Adding to iommu group 5 Dec 13 04:02:22.574004 kernel: pci 0000:00:15.1: Adding to iommu group 5 Dec 13 04:02:22.574045 kernel: pci 0000:00:16.0: Adding to iommu group 6 Dec 13 04:02:22.574086 kernel: pci 0000:00:16.1: Adding to iommu group 6 Dec 13 04:02:22.574127 kernel: pci 0000:00:16.4: Adding to iommu group 6 Dec 13 04:02:22.574168 kernel: pci 0000:00:17.0: Adding to iommu group 7 Dec 13 04:02:22.574209 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Dec 13 04:02:22.574254 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Dec 13 04:02:22.574296 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Dec 13 04:02:22.574338 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Dec 13 04:02:22.574379 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Dec 13 04:02:22.574421 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Dec 13 04:02:22.574487 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Dec 13 04:02:22.574529 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Dec 13 04:02:22.574571 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Dec 13 04:02:22.574617 kernel: pci 0000:01:00.0: Adding to iommu group 1 Dec 13 04:02:22.574661 kernel: pci 0000:01:00.1: Adding to iommu group 1 Dec 13 04:02:22.574705 kernel: pci 0000:03:00.0: Adding to iommu group 15 Dec 13 04:02:22.574749 kernel: pci 0000:04:00.0: Adding to iommu group 16 Dec 13 04:02:22.574792 kernel: pci 0000:06:00.0: Adding to iommu group 17 Dec 13 04:02:22.574839 kernel: pci 0000:07:00.0: Adding to iommu group 17 Dec 13 04:02:22.574846 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Dec 13 
04:02:22.574852 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 04:02:22.574859 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Dec 13 04:02:22.574864 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Dec 13 04:02:22.574870 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Dec 13 04:02:22.574875 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Dec 13 04:02:22.574880 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Dec 13 04:02:22.574925 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Dec 13 04:02:22.574933 kernel: Initialise system trusted keyrings Dec 13 04:02:22.574938 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Dec 13 04:02:22.574945 kernel: Key type asymmetric registered Dec 13 04:02:22.574950 kernel: Asymmetric key parser 'x509' registered Dec 13 04:02:22.574955 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 04:02:22.574961 kernel: io scheduler mq-deadline registered Dec 13 04:02:22.574966 kernel: io scheduler kyber registered Dec 13 04:02:22.574971 kernel: io scheduler bfq registered Dec 13 04:02:22.575015 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Dec 13 04:02:22.575057 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Dec 13 04:02:22.575101 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Dec 13 04:02:22.575147 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Dec 13 04:02:22.575190 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Dec 13 04:02:22.575234 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Dec 13 04:02:22.575281 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Dec 13 04:02:22.575289 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Dec 13 04:02:22.575295 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Dec 13 04:02:22.575300 kernel: pstore: Registered erst as persistent store backend Dec 13 04:02:22.575305 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 04:02:22.575312 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 04:02:22.575318 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 04:02:22.575323 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 04:02:22.575328 kernel: hpet_acpi_add: no address or irqs in _CRS Dec 13 04:02:22.575373 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Dec 13 04:02:22.575381 kernel: i8042: PNP: No PS/2 controller found. Dec 13 04:02:22.575419 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Dec 13 04:02:22.575462 kernel: rtc_cmos rtc_cmos: registered as rtc0 Dec 13 04:02:22.575502 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T04:02:21 UTC (1734062541) Dec 13 04:02:22.575540 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Dec 13 04:02:22.575548 kernel: fail to initialize ptp_kvm Dec 13 04:02:22.575553 kernel: intel_pstate: Intel P-state driver initializing Dec 13 04:02:22.575558 kernel: intel_pstate: Disabling energy efficiency optimization Dec 13 04:02:22.575564 kernel: intel_pstate: HWP enabled Dec 13 04:02:22.575569 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Dec 13 04:02:22.575574 kernel: vesafb: scrolling: redraw Dec 13 04:02:22.575581 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Dec 13 04:02:22.575586 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000035faa5eb, using 768k, total 768k Dec 13 04:02:22.575592 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 04:02:22.575597 kernel: fb0: VESA VGA frame buffer device Dec 13 04:02:22.575603 kernel: NET: Registered PF_INET6 protocol family Dec 13 04:02:22.575608 kernel: Segment Routing with IPv6 Dec 13 04:02:22.575613 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 
04:02:22.575619 kernel: NET: Registered PF_PACKET protocol family Dec 13 04:02:22.575624 kernel: Key type dns_resolver registered Dec 13 04:02:22.575630 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Dec 13 04:02:22.575636 kernel: microcode: Microcode Update Driver: v2.2. Dec 13 04:02:22.575641 kernel: IPI shorthand broadcast: enabled Dec 13 04:02:22.575646 kernel: sched_clock: Marking stable (1736185082, 1339452118)->(4518417148, -1442779948) Dec 13 04:02:22.575652 kernel: registered taskstats version 1 Dec 13 04:02:22.575657 kernel: Loading compiled-in X.509 certificates Dec 13 04:02:22.575662 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 04:02:22.575668 kernel: Key type .fscrypt registered Dec 13 04:02:22.575673 kernel: Key type fscrypt-provisioning registered Dec 13 04:02:22.575679 kernel: pstore: Using crash dump compression: deflate Dec 13 04:02:22.575685 kernel: ima: Allocated hash algorithm: sha1 Dec 13 04:02:22.575690 kernel: ima: No architecture policies found Dec 13 04:02:22.575695 kernel: clk: Disabling unused clocks Dec 13 04:02:22.575701 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 04:02:22.575706 kernel: Write protecting the kernel read-only data: 28672k Dec 13 04:02:22.575711 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 04:02:22.575717 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 04:02:22.575722 kernel: Run /init as init process Dec 13 04:02:22.575728 kernel: with arguments: Dec 13 04:02:22.575734 kernel: /init Dec 13 04:02:22.575739 kernel: with environment: Dec 13 04:02:22.575744 kernel: HOME=/ Dec 13 04:02:22.575749 kernel: TERM=linux Dec 13 04:02:22.575754 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 04:02:22.575761 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 
+IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 04:02:22.575768 systemd[1]: Detected architecture x86-64. Dec 13 04:02:22.575774 systemd[1]: Running in initrd. Dec 13 04:02:22.575780 systemd[1]: No hostname configured, using default hostname. Dec 13 04:02:22.575785 systemd[1]: Hostname set to . Dec 13 04:02:22.575790 systemd[1]: Initializing machine ID from random generator. Dec 13 04:02:22.575796 systemd[1]: Queued start job for default target initrd.target. Dec 13 04:02:22.575801 systemd[1]: Started systemd-ask-password-console.path. Dec 13 04:02:22.575807 systemd[1]: Reached target cryptsetup.target. Dec 13 04:02:22.575812 systemd[1]: Reached target paths.target. Dec 13 04:02:22.575818 systemd[1]: Reached target slices.target. Dec 13 04:02:22.575824 systemd[1]: Reached target swap.target. Dec 13 04:02:22.575829 systemd[1]: Reached target timers.target. Dec 13 04:02:22.575835 systemd[1]: Listening on iscsid.socket. Dec 13 04:02:22.575840 systemd[1]: Listening on iscsiuio.socket. Dec 13 04:02:22.575846 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 04:02:22.575852 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 04:02:22.575858 systemd[1]: Listening on systemd-journald.socket. Dec 13 04:02:22.575863 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Dec 13 04:02:22.575869 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Dec 13 04:02:22.575874 kernel: clocksource: Switched to clocksource tsc Dec 13 04:02:22.575880 systemd[1]: Listening on systemd-networkd.socket. Dec 13 04:02:22.575885 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 04:02:22.575891 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 04:02:22.575897 systemd[1]: Reached target sockets.target. 
Dec 13 04:02:22.575902 systemd[1]: Starting kmod-static-nodes.service... Dec 13 04:02:22.575909 systemd[1]: Finished network-cleanup.service. Dec 13 04:02:22.575914 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 04:02:22.575920 systemd[1]: Starting systemd-journald.service... Dec 13 04:02:22.575925 systemd[1]: Starting systemd-modules-load.service... Dec 13 04:02:22.575933 systemd-journald[267]: Journal started Dec 13 04:02:22.575959 systemd-journald[267]: Runtime Journal (/run/log/journal/61c004390ffa41b28deb0596c1d8215d) is 8.0M, max 640.1M, 632.1M free. Dec 13 04:02:22.577800 systemd-modules-load[268]: Inserted module 'overlay' Dec 13 04:02:22.607770 kernel: audit: type=1334 audit(1734062542.584:2): prog-id=6 op=LOAD Dec 13 04:02:22.607780 systemd[1]: Starting systemd-resolved.service... Dec 13 04:02:22.584000 audit: BPF prog-id=6 op=LOAD Dec 13 04:02:22.651488 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 04:02:22.651505 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 04:02:22.684477 kernel: Bridge firewalling registered Dec 13 04:02:22.684493 systemd[1]: Started systemd-journald.service. Dec 13 04:02:22.698932 systemd-modules-load[268]: Inserted module 'br_netfilter' Dec 13 04:02:22.747003 kernel: audit: type=1130 audit(1734062542.706:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:22.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:22.701406 systemd-resolved[270]: Positive Trust Anchors: Dec 13 04:02:22.804414 kernel: SCSI subsystem initialized Dec 13 04:02:22.804425 kernel: audit: type=1130 audit(1734062542.759:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:22.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:22.701412 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 04:02:22.925524 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 04:02:22.925536 kernel: audit: type=1130 audit(1734062542.830:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:22.925544 kernel: device-mapper: uevent: version 1.0.3 Dec 13 04:02:22.925551 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 04:02:22.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:22.701433 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 04:02:22.998642 kernel: audit: type=1130 audit(1734062542.934:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:22.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:22.703005 systemd-resolved[270]: Defaulting to hostname 'linux'. Dec 13 04:02:23.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:22.706669 systemd[1]: Started systemd-resolved.service. Dec 13 04:02:23.105681 kernel: audit: type=1130 audit(1734062543.006:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:23.105692 kernel: audit: type=1130 audit(1734062543.059:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:23.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:22.760612 systemd[1]: Finished kmod-static-nodes.service. Dec 13 04:02:22.830592 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 04:02:22.928346 systemd-modules-load[268]: Inserted module 'dm_multipath' Dec 13 04:02:22.934969 systemd[1]: Finished systemd-modules-load.service. Dec 13 04:02:23.006775 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 04:02:23.059722 systemd[1]: Reached target nss-lookup.target. Dec 13 04:02:23.115034 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 04:02:23.134956 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:02:23.143039 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 04:02:23.143770 systemd[1]: Finished systemd-sysctl.service. Dec 13 04:02:23.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:23.145833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 04:02:23.191509 kernel: audit: type=1130 audit(1734062543.143:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:23.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:23.207772 systemd[1]: Finished dracut-cmdline-ask.service. 
Dec 13 04:02:23.271550 kernel: audit: type=1130 audit(1734062543.207:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:23.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:23.257939 systemd[1]: Starting dracut-cmdline.service... Dec 13 04:02:23.287559 dracut-cmdline[295]: dracut-dracut-053 Dec 13 04:02:23.287559 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 04:02:23.287559 dracut-cmdline[295]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 04:02:23.354499 kernel: Loading iSCSI transport class v2.0-870. Dec 13 04:02:23.354512 kernel: iscsi: registered transport (tcp) Dec 13 04:02:23.404087 kernel: iscsi: registered transport (qla4xxx) Dec 13 04:02:23.404137 kernel: QLogic iSCSI HBA Driver Dec 13 04:02:23.420069 systemd[1]: Finished dracut-cmdline.service. Dec 13 04:02:23.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:23.429128 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 04:02:23.485516 kernel: raid6: avx2x4 gen() 48105 MB/s Dec 13 04:02:23.520506 kernel: raid6: avx2x4 xor() 14113 MB/s Dec 13 04:02:23.555472 kernel: raid6: avx2x2 gen() 51906 MB/s Dec 13 04:02:23.590521 kernel: raid6: avx2x2 xor() 32119 MB/s Dec 13 04:02:23.625472 kernel: raid6: avx2x1 gen() 44474 MB/s Dec 13 04:02:23.659506 kernel: raid6: avx2x1 xor() 27911 MB/s Dec 13 04:02:23.693473 kernel: raid6: sse2x4 gen() 21407 MB/s Dec 13 04:02:23.727519 kernel: raid6: sse2x4 xor() 11961 MB/s Dec 13 04:02:23.761472 kernel: raid6: sse2x2 gen() 21647 MB/s Dec 13 04:02:23.795522 kernel: raid6: sse2x2 xor() 13416 MB/s Dec 13 04:02:23.829472 kernel: raid6: sse2x1 gen() 18301 MB/s Dec 13 04:02:23.881522 kernel: raid6: sse2x1 xor() 8933 MB/s Dec 13 04:02:23.881537 kernel: raid6: using algorithm avx2x2 gen() 51906 MB/s Dec 13 04:02:23.881545 kernel: raid6: .... xor() 32119 MB/s, rmw enabled Dec 13 04:02:23.899765 kernel: raid6: using avx2x2 recovery algorithm Dec 13 04:02:23.946465 kernel: xor: automatically using best checksumming function avx Dec 13 04:02:24.025493 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 04:02:24.030549 systemd[1]: Finished dracut-pre-udev.service. Dec 13 04:02:24.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:24.039000 audit: BPF prog-id=7 op=LOAD Dec 13 04:02:24.039000 audit: BPF prog-id=8 op=LOAD Dec 13 04:02:24.040455 systemd[1]: Starting systemd-udevd.service... Dec 13 04:02:24.048602 systemd-udevd[475]: Using default interface naming scheme 'v252'. Dec 13 04:02:24.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:24.055699 systemd[1]: Started systemd-udevd.service. 
Dec 13 04:02:24.096563 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Dec 13 04:02:24.073113 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 04:02:24.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:24.101948 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 04:02:24.114624 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 04:02:24.166196 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 04:02:24.197810 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 04:02:24.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:24.217449 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 04:02:24.217485 kernel: libata version 3.00 loaded. Dec 13 04:02:24.217494 kernel: AES CTR mode by8 optimization enabled Dec 13 04:02:24.234452 kernel: ACPI: bus type USB registered Dec 13 04:02:24.269066 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 04:02:24.269095 kernel: usbcore: registered new interface driver usbfs Dec 13 04:02:24.269102 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Dec 13 04:02:24.286919 kernel: usbcore: registered new interface driver hub Dec 13 04:02:24.338599 kernel: usbcore: registered new device driver usb Dec 13 04:02:24.374646 kernel: pps pps0: new PPS source ptp0 Dec 13 04:02:24.374736 kernel: igb 0000:03:00.0: added PHC on eth0 Dec 13 04:02:24.443186 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 04:02:24.443244 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:44 Dec 13 04:02:24.443296 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Dec 13 04:02:24.443347 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Dec 13 04:02:24.462219 kernel: ahci 0000:00:17.0: version 3.0 Dec 13 04:02:24.550381 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Dec 13 04:02:24.550470 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Dec 13 04:02:24.550524 kernel: pps pps1: new PPS source ptp1 Dec 13 04:02:24.550582 kernel: igb 0000:04:00.0: added PHC on eth1 Dec 13 04:02:24.596051 kernel: scsi host0: ahci Dec 13 04:02:24.596113 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 04:02:24.596166 kernel: scsi host1: ahci Dec 13 04:02:24.596223 kernel: mlx5_core 0000:01:00.0: firmware version: 14.29.2002 Dec 13 04:02:25.107992 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 04:02:25.108057 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:45 Dec 13 04:02:25.108112 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Dec 13 04:02:25.108163 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 04:02:25.108214 kernel: scsi host2: ahci Dec 13 04:02:25.108269 kernel: scsi host3: ahci Dec 13 04:02:25.108324 kernel: scsi host4: ahci Dec 13 04:02:25.108376 kernel: scsi host5: ahci Dec 13 04:02:25.108430 kernel: scsi host6: ahci Dec 13 04:02:25.108531 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Dec 13 04:02:25.108538 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Dec 13 04:02:25.108545 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Dec 13 04:02:25.108551 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Dec 13 04:02:25.108558 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Dec 13 04:02:25.108565 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Dec 13 04:02:25.108572 kernel: ata7: SATA max UDMA/133 abar 
m2048@0x95516000 port 0x95516400 irq 132 Dec 13 04:02:25.108579 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Dec 13 04:02:25.108629 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Dec 13 04:02:25.108677 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 04:02:25.108726 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Dec 13 04:02:25.108773 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 04:02:25.108823 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 04:02:25.108872 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 04:02:25.108879 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Dec 13 04:02:25.108925 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 04:02:25.108933 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Dec 13 04:02:25.108980 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 04:02:25.108987 kernel: hub 1-0:1.0: USB hub found Dec 13 04:02:25.109046 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 04:02:25.109054 kernel: hub 1-0:1.0: 16 ports detected Dec 13 04:02:25.109107 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 04:02:25.109115 kernel: hub 2-0:1.0: USB hub found Dec 13 04:02:25.109171 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 04:02:25.109178 kernel: hub 2-0:1.0: 10 ports detected Dec 13 04:02:25.109230 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Dec 13 04:02:25.109237 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Dec 13 04:02:25.109243 kernel: ata7: SATA link down (SStatus 0 SControl 300) Dec 13 04:02:25.109251 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 04:02:25.109301 
kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 04:02:25.109308 kernel: mlx5_core 0000:01:00.1: firmware version: 14.29.2002 Dec 13 04:02:25.919484 kernel: ata2.00: Features: NCQ-prio Dec 13 04:02:25.919501 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 04:02:25.919513 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 04:02:25.919593 kernel: ata1.00: Features: NCQ-prio Dec 13 04:02:25.919602 kernel: ata2.00: configured for UDMA/133 Dec 13 04:02:25.919612 kernel: ata1.00: configured for UDMA/133 Dec 13 04:02:25.919619 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Dec 13 04:02:25.919685 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Dec 13 04:02:25.919796 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Dec 13 04:02:25.919919 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Dec 13 04:02:25.920009 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 04:02:25.920023 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 04:02:25.920035 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 04:02:25.920101 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 04:02:25.920177 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 04:02:25.920235 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Dec 13 04:02:25.920289 kernel: sd 1:0:0:0: [sdb] Write Protect is off Dec 13 04:02:25.920342 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 04:02:25.920520 kernel: hub 1-14:1.0: USB hub found Dec 13 04:02:25.920582 kernel: hub 1-14:1.0: 4 ports detected Dec 13 04:02:25.920638 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Dec 13 04:02:25.920693 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 04:02:25.920746 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 
04:02:25.920754 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Dec 13 04:02:25.920806 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 04:02:25.920814 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 04:02:25.920867 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 04:02:25.920920 kernel: port_module: 9 callbacks suppressed Dec 13 04:02:25.920928 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Dec 13 04:02:25.920978 kernel: GPT:9289727 != 937703087 Dec 13 04:02:25.920985 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 04:02:25.920991 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 04:02:25.920998 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 04:02:25.921004 kernel: GPT:9289727 != 937703087 Dec 13 04:02:25.921010 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 04:02:25.921064 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 04:02:25.921072 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 04:02:25.921079 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Dec 13 04:02:25.921172 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 04:02:25.921179 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Dec 13 04:02:25.921237 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 04:02:25.921287 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Dec 13 04:02:25.921342 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (559) Dec 13 04:02:25.921351 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 04:02:25.921358 kernel: usbcore: registered new interface driver usbhid Dec 13 04:02:25.921364 kernel: usbhid: USB HID core driver Dec 13 04:02:25.921371 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Dec 13 04:02:25.921378 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 04:02:25.921427 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Dec 13 04:02:25.788515 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 04:02:25.962559 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Dec 13 04:02:25.973557 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 04:02:25.816560 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Dec 13 04:02:26.130769 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Dec 13 04:02:26.130781 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 04:02:26.130788 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Dec 13 04:02:26.130872 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Dec 13 04:02:26.130930 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 04:02:26.130938 kernel: GPT:disk_guids don't match. Dec 13 04:02:26.130945 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 04:02:25.835249 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 04:02:26.173559 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 04:02:26.173571 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 04:02:25.879527 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 04:02:26.189546 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 04:02:26.189557 disk-uuid[678]: Primary Header is updated. Dec 13 04:02:26.189557 disk-uuid[678]: Secondary Entries is updated. Dec 13 04:02:26.189557 disk-uuid[678]: Secondary Header is updated. Dec 13 04:02:25.920327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 04:02:25.938962 systemd[1]: Starting disk-uuid.service... Dec 13 04:02:27.161197 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 04:02:27.181275 disk-uuid[679]: The operation has completed successfully. Dec 13 04:02:27.190648 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 04:02:27.222466 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 04:02:27.322095 kernel: audit: type=1130 audit(1734062547.229:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:27.322110 kernel: audit: type=1131 audit(1734062547.229:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.222560 systemd[1]: Finished disk-uuid.service. Dec 13 04:02:27.357523 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 04:02:27.230146 systemd[1]: Starting verity-setup.service... Dec 13 04:02:27.387085 systemd[1]: Found device dev-mapper-usr.device. Dec 13 04:02:27.396697 systemd[1]: Mounting sysusr-usr.mount... Dec 13 04:02:27.410860 systemd[1]: Finished verity-setup.service. Dec 13 04:02:27.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.472494 kernel: audit: type=1130 audit(1734062547.425:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.529297 systemd[1]: Mounted sysusr-usr.mount. Dec 13 04:02:27.544660 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 04:02:27.536731 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Dec 13 04:02:27.655535 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:02:27.655555 kernel: BTRFS info (device sdb6): using free space tree Dec 13 04:02:27.655563 kernel: BTRFS info (device sdb6): has skinny extents Dec 13 04:02:27.655570 kernel: BTRFS info (device sdb6): enabling ssd optimizations Dec 13 04:02:27.537132 systemd[1]: Starting ignition-setup.service... Dec 13 04:02:27.553001 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 04:02:27.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.664114 systemd[1]: Finished ignition-setup.service. Dec 13 04:02:27.787412 kernel: audit: type=1130 audit(1734062547.679:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.787426 kernel: audit: type=1130 audit(1734062547.737:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.679871 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 04:02:27.796000 audit: BPF prog-id=9 op=LOAD Dec 13 04:02:27.738091 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 04:02:27.834564 kernel: audit: type=1334 audit(1734062547.796:24): prog-id=9 op=LOAD Dec 13 04:02:27.797400 systemd[1]: Starting systemd-networkd.service... 
Dec 13 04:02:27.834393 systemd-networkd[878]: lo: Link UP Dec 13 04:02:27.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.860093 ignition[866]: Ignition 2.14.0 Dec 13 04:02:27.918550 kernel: audit: type=1130 audit(1734062547.852:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.834395 systemd-networkd[878]: lo: Gained carrier Dec 13 04:02:27.860098 ignition[866]: Stage: fetch-offline Dec 13 04:02:27.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.834780 systemd-networkd[878]: Enumeration completed Dec 13 04:02:28.069590 kernel: audit: type=1130 audit(1734062547.933:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:28.069602 kernel: audit: type=1130 audit(1734062547.995:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:28.069609 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 04:02:27.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:27.860139 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:02:28.105531 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Dec 13 04:02:27.834850 systemd[1]: Started systemd-networkd.service. Dec 13 04:02:27.860152 ignition[866]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 04:02:27.835519 systemd-networkd[878]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 04:02:28.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.868183 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 04:02:28.157484 iscsid[903]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 04:02:28.157484 iscsid[903]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 04:02:28.157484 iscsid[903]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 04:02:28.157484 iscsid[903]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 04:02:28.157484 iscsid[903]: If using hardware iscsi like qla4xxx this message can be ignored. 
Dec 13 04:02:28.157484 iscsid[903]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 04:02:28.157484 iscsid[903]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 04:02:28.312601 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 04:02:28.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.852513 systemd[1]: Reached target network.target. Dec 13 04:02:28.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:27.868250 ignition[866]: parsed url from cmdline: "" Dec 13 04:02:27.871138 unknown[866]: fetched base config from "system" Dec 13 04:02:27.868252 ignition[866]: no config URL provided Dec 13 04:02:27.871142 unknown[866]: fetched user config from "system" Dec 13 04:02:27.868255 ignition[866]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 04:02:27.913210 systemd[1]: Starting iscsiuio.service... Dec 13 04:02:27.868267 ignition[866]: parsing config with SHA512: 0f8a556f213643a51c589889aa5cdb6a1df71566890614a23b2a4a3d2bf865617f49ce0cd33f3ac7ba74e6e9a452a87b956e81ab10f056918a865b0d6aed0cc5 Dec 13 04:02:27.925794 systemd[1]: Started iscsiuio.service. Dec 13 04:02:27.871327 ignition[866]: fetch-offline: fetch-offline passed Dec 13 04:02:27.933879 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 04:02:27.871329 ignition[866]: POST message to Packet Timeline Dec 13 04:02:27.995688 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Dec 13 04:02:27.871334 ignition[866]: POST Status error: resource requires networking Dec 13 04:02:27.996142 systemd[1]: Starting ignition-kargs.service... Dec 13 04:02:27.871368 ignition[866]: Ignition finished successfully Dec 13 04:02:28.070634 systemd-networkd[878]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 04:02:28.074581 ignition[891]: Ignition 2.14.0 Dec 13 04:02:28.084228 systemd[1]: Starting iscsid.service... Dec 13 04:02:28.074585 ignition[891]: Stage: kargs Dec 13 04:02:28.112660 systemd[1]: Started iscsid.service. Dec 13 04:02:28.074642 ignition[891]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:02:28.127041 systemd[1]: Starting dracut-initqueue.service... Dec 13 04:02:28.074653 ignition[891]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 04:02:28.146706 systemd[1]: Finished dracut-initqueue.service. Dec 13 04:02:28.075954 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 04:02:28.165620 systemd[1]: Reached target remote-fs-pre.target. Dec 13 04:02:28.077385 ignition[891]: kargs: kargs passed Dec 13 04:02:28.210635 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 04:02:28.077388 ignition[891]: POST message to Packet Timeline Dec 13 04:02:28.240750 systemd[1]: Reached target remote-fs.target. Dec 13 04:02:28.077399 ignition[891]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 04:02:28.268250 systemd[1]: Starting dracut-pre-mount.service... Dec 13 04:02:28.081828 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37184->[::1]:53: read: connection refused Dec 13 04:02:28.296244 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 04:02:28.282196 ignition[891]: GET https://metadata.packet.net/metadata: attempt #2 Dec 13 04:02:28.297512 systemd-networkd[878]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 04:02:28.282600 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35728->[::1]:53: read: connection refused Dec 13 04:02:28.325924 systemd-networkd[878]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 04:02:28.355075 systemd-networkd[878]: enp1s0f1np1: Link UP Dec 13 04:02:28.355245 systemd-networkd[878]: enp1s0f1np1: Gained carrier Dec 13 04:02:28.360743 systemd-networkd[878]: enp1s0f0np0: Link UP Dec 13 04:02:28.360905 systemd-networkd[878]: eno2: Link UP Dec 13 04:02:28.361055 systemd-networkd[878]: eno1: Link UP Dec 13 04:02:28.683209 ignition[891]: GET https://metadata.packet.net/metadata: attempt #3 Dec 13 04:02:28.684551 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56237->[::1]:53: read: connection refused Dec 13 04:02:29.118919 systemd-networkd[878]: enp1s0f0np0: Gained carrier Dec 13 04:02:29.128693 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Dec 13 04:02:29.153627 systemd-networkd[878]: enp1s0f0np0: DHCPv4 address 147.28.180.253/31, gateway 147.28.180.252 acquired from 145.40.83.140 Dec 13 04:02:29.485048 ignition[891]: GET https://metadata.packet.net/metadata: attempt #4 Dec 13 04:02:29.486079 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47794->[::1]:53: read: connection refused Dec 13 04:02:29.927931 systemd-networkd[878]: enp1s0f1np1: Gained IPv6LL Dec 13 04:02:30.952049 systemd-networkd[878]: enp1s0f0np0: Gained IPv6LL Dec 13 04:02:31.087738 ignition[891]: GET https://metadata.packet.net/metadata: attempt #5 Dec 13 
04:02:31.089080 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49364->[::1]:53: read: connection refused Dec 13 04:02:34.292470 ignition[891]: GET https://metadata.packet.net/metadata: attempt #6 Dec 13 04:02:35.060832 ignition[891]: GET result: OK Dec 13 04:02:35.370981 ignition[891]: Ignition finished successfully Dec 13 04:02:35.375958 systemd[1]: Finished ignition-kargs.service. Dec 13 04:02:35.461175 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 13 04:02:35.461206 kernel: audit: type=1130 audit(1734062555.386:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:35.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:35.395088 ignition[920]: Ignition 2.14.0 Dec 13 04:02:35.388755 systemd[1]: Starting ignition-disks.service... Dec 13 04:02:35.395091 ignition[920]: Stage: disks Dec 13 04:02:35.395166 ignition[920]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:02:35.395175 ignition[920]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 04:02:35.397568 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 04:02:35.398044 ignition[920]: disks: disks passed Dec 13 04:02:35.398047 ignition[920]: POST message to Packet Timeline Dec 13 04:02:35.398058 ignition[920]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 04:02:35.812781 ignition[920]: GET result: OK Dec 13 04:02:36.147115 ignition[920]: Ignition finished successfully Dec 13 04:02:36.150096 systemd[1]: Finished ignition-disks.service. 
Dec 13 04:02:36.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:36.164054 systemd[1]: Reached target initrd-root-device.target. Dec 13 04:02:36.240710 kernel: audit: type=1130 audit(1734062556.163:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:36.226662 systemd[1]: Reached target local-fs-pre.target. Dec 13 04:02:36.226697 systemd[1]: Reached target local-fs.target. Dec 13 04:02:36.249684 systemd[1]: Reached target sysinit.target. Dec 13 04:02:36.264669 systemd[1]: Reached target basic.target. Dec 13 04:02:36.278325 systemd[1]: Starting systemd-fsck-root.service... Dec 13 04:02:36.297011 systemd-fsck[933]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 04:02:36.311054 systemd[1]: Finished systemd-fsck-root.service. Dec 13 04:02:36.401427 kernel: audit: type=1130 audit(1734062556.319:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:36.401463 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 04:02:36.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:36.325485 systemd[1]: Mounting sysroot.mount... Dec 13 04:02:36.410066 systemd[1]: Mounted sysroot.mount. Dec 13 04:02:36.424707 systemd[1]: Reached target initrd-root-fs.target. Dec 13 04:02:36.432244 systemd[1]: Mounting sysroot-usr.mount... Dec 13 04:02:36.454462 systemd[1]: Starting flatcar-metadata-hostname.service... 
Dec 13 04:02:36.462082 systemd[1]: Starting flatcar-static-network.service... Dec 13 04:02:36.483542 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 04:02:36.483583 systemd[1]: Reached target ignition-diskful.target. Dec 13 04:02:36.502628 systemd[1]: Mounted sysroot-usr.mount. Dec 13 04:02:36.526185 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 04:02:36.659384 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (944) Dec 13 04:02:36.659402 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:02:36.659410 kernel: BTRFS info (device sdb6): using free space tree Dec 13 04:02:36.659418 kernel: BTRFS info (device sdb6): has skinny extents Dec 13 04:02:36.659425 kernel: BTRFS info (device sdb6): enabling ssd optimizations Dec 13 04:02:36.659492 coreos-metadata[941]: Dec 13 04:02:36.586 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 04:02:36.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:36.720659 coreos-metadata[940]: Dec 13 04:02:36.574 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 04:02:36.742671 kernel: audit: type=1130 audit(1734062556.667:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:36.536874 systemd[1]: Starting initrd-setup-root.service... Dec 13 04:02:36.582680 systemd[1]: Finished initrd-setup-root.service. Dec 13 04:02:36.762546 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 04:02:36.668745 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 04:02:36.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:36.815545 initrd-setup-root[959]: cut: /sysroot/etc/group: No such file or directory
Dec 13 04:02:36.849650 kernel: audit: type=1130 audit(1734062556.786:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:36.729050 systemd[1]: Starting ignition-mount.service...
Dec 13 04:02:36.856662 initrd-setup-root[967]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 04:02:36.866657 ignition[1014]: INFO : Ignition 2.14.0
Dec 13 04:02:36.866657 ignition[1014]: INFO : Stage: mount
Dec 13 04:02:36.866657 ignition[1014]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:02:36.866657 ignition[1014]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 04:02:36.866657 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 04:02:36.866657 ignition[1014]: INFO : mount: mount passed
Dec 13 04:02:36.866657 ignition[1014]: INFO : POST message to Packet Timeline
Dec 13 04:02:36.866657 ignition[1014]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 04:02:36.756050 systemd[1]: Starting sysroot-boot.service...
Dec 13 04:02:36.957777 initrd-setup-root[975]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 04:02:36.769959 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 04:02:36.770012 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 04:02:36.775804 systemd[1]: Finished sysroot-boot.service.
Dec 13 04:02:37.061120 coreos-metadata[941]: Dec 13 04:02:37.061 INFO Fetch successful
Dec 13 04:02:37.088955 coreos-metadata[940]: Dec 13 04:02:37.088 INFO Fetch successful
Dec 13 04:02:37.090073 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Dec 13 04:02:37.218955 kernel: audit: type=1130 audit(1734062557.105:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:37.219049 kernel: audit: type=1131 audit(1734062557.105:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:37.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:37.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:37.219101 coreos-metadata[940]: Dec 13 04:02:37.115 INFO wrote hostname ci-3510.3.6-a-746d3338a6 to /sysroot/etc/hostname
Dec 13 04:02:37.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:37.090120 systemd[1]: Finished flatcar-static-network.service.
Dec 13 04:02:37.295661 kernel: audit: type=1130 audit(1734062557.228:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:37.116403 systemd[1]: Finished flatcar-metadata-hostname.service.
Dec 13 04:02:37.810745 ignition[1014]: INFO : GET result: OK
Dec 13 04:02:38.378762 ignition[1014]: INFO : Ignition finished successfully
Dec 13 04:02:38.381387 systemd[1]: Finished ignition-mount.service.
Dec 13 04:02:38.454513 kernel: audit: type=1130 audit(1734062558.396:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:38.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:38.398618 systemd[1]: Starting ignition-files.service...
Dec 13 04:02:38.463247 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 04:02:38.516563 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1032)
Dec 13 04:02:38.516575 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 04:02:38.550703 kernel: BTRFS info (device sdb6): using free space tree
Dec 13 04:02:38.550718 kernel: BTRFS info (device sdb6): has skinny extents
Dec 13 04:02:38.602487 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Dec 13 04:02:38.603991 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 04:02:38.620579 ignition[1051]: INFO : Ignition 2.14.0
Dec 13 04:02:38.620579 ignition[1051]: INFO : Stage: files
Dec 13 04:02:38.620579 ignition[1051]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:02:38.620579 ignition[1051]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 04:02:38.620579 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 04:02:38.620579 ignition[1051]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 04:02:38.709552 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1058)
Dec 13 04:02:38.623877 unknown[1051]: wrote ssh authorized keys file for user: core
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 04:02:38.718643 ignition[1051]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 04:02:38.718643 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4049255911"
Dec 13 04:02:38.718643 ignition[1051]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4049255911": device or resource busy
Dec 13 04:02:38.718643 ignition[1051]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4049255911", trying btrfs: device or resource busy
Dec 13 04:02:38.983807 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4049255911"
Dec 13 04:02:38.983807 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4049255911"
Dec 13 04:02:38.983807 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem4049255911"
Dec 13 04:02:38.983807 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem4049255911"
Dec 13 04:02:38.983807 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Dec 13 04:02:38.983807 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 04:02:38.983807 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 04:02:39.135432 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 04:02:39.369836 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 04:02:39.369836 ignition[1051]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 04:02:39.369836 ignition[1051]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 04:02:39.369836 ignition[1051]: INFO : files: op(c): [started] processing unit "packet-phone-home.service"
Dec 13 04:02:39.369836 ignition[1051]: INFO : files: op(c): [finished] processing unit "packet-phone-home.service"
Dec 13 04:02:39.369836 ignition[1051]: INFO : files: op(d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 04:02:39.449763 ignition[1051]: INFO : files: op(d): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 04:02:39.449763 ignition[1051]: INFO : files: op(e): [started] setting preset to enabled for "packet-phone-home.service"
Dec 13 04:02:39.449763 ignition[1051]: INFO : files: op(e): [finished] setting preset to enabled for "packet-phone-home.service"
Dec 13 04:02:39.449763 ignition[1051]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 04:02:39.449763 ignition[1051]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 04:02:39.449763 ignition[1051]: INFO : files: files passed
Dec 13 04:02:39.449763 ignition[1051]: INFO : POST message to Packet Timeline
Dec 13 04:02:39.449763 ignition[1051]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 04:02:40.518369 ignition[1051]: INFO : GET result: OK
Dec 13 04:02:40.925851 ignition[1051]: INFO : Ignition finished successfully
Dec 13 04:02:40.928548 systemd[1]: Finished ignition-files.service.
Dec 13 04:02:40.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:40.948223 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 04:02:41.020680 kernel: audit: type=1130 audit(1734062560.942:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.009684 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 04:02:41.044687 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 04:02:41.112543 kernel: audit: type=1130 audit(1734062561.054:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.010009 systemd[1]: Starting ignition-quench.service...
Dec 13 04:02:41.233645 kernel: audit: type=1130 audit(1734062561.120:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.233683 kernel: audit: type=1131 audit(1734062561.120:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.027797 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 04:02:41.054943 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 04:02:41.055013 systemd[1]: Finished ignition-quench.service.
Dec 13 04:02:41.389244 kernel: audit: type=1130 audit(1734062561.275:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.389258 kernel: audit: type=1131 audit(1734062561.275:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.120698 systemd[1]: Reached target ignition-complete.target.
Dec 13 04:02:41.243031 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 04:02:41.256569 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 04:02:41.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.256608 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 04:02:41.509660 kernel: audit: type=1130 audit(1734062561.438:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.275734 systemd[1]: Reached target initrd-fs.target.
Dec 13 04:02:41.397651 systemd[1]: Reached target initrd.target.
Dec 13 04:02:41.397710 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 04:02:41.398055 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 04:02:41.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.419773 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 04:02:41.645667 kernel: audit: type=1131 audit(1734062561.569:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.439004 systemd[1]: Starting initrd-cleanup.service...
Dec 13 04:02:41.505393 systemd[1]: Stopped target nss-lookup.target.
Dec 13 04:02:41.519706 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 04:02:41.536675 systemd[1]: Stopped target timers.target.
Dec 13 04:02:41.550719 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 04:02:41.550829 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 04:02:41.569906 systemd[1]: Stopped target initrd.target.
Dec 13 04:02:41.638662 systemd[1]: Stopped target basic.target.
Dec 13 04:02:41.652744 systemd[1]: Stopped target ignition-complete.target.
Dec 13 04:02:41.667716 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 04:02:41.683813 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 04:02:41.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.700771 systemd[1]: Stopped target remote-fs.target.
Dec 13 04:02:41.895651 kernel: audit: type=1131 audit(1734062561.808:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.715848 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 04:02:41.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.732056 systemd[1]: Stopped target sysinit.target.
Dec 13 04:02:41.977682 kernel: audit: type=1131 audit(1734062561.903:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.747154 systemd[1]: Stopped target local-fs.target.
Dec 13 04:02:41.762136 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 04:02:41.777025 systemd[1]: Stopped target swap.target.
Dec 13 04:02:41.792029 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 04:02:41.792389 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 04:02:41.809262 systemd[1]: Stopped target cryptsetup.target.
Dec 13 04:02:42.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.888719 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 04:02:42.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.888804 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 04:02:42.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.903717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 04:02:42.116734 ignition[1096]: INFO : Ignition 2.14.0
Dec 13 04:02:42.116734 ignition[1096]: INFO : Stage: umount
Dec 13 04:02:42.116734 ignition[1096]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:02:42.116734 ignition[1096]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 04:02:42.116734 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 04:02:42.116734 ignition[1096]: INFO : umount: umount passed
Dec 13 04:02:42.116734 ignition[1096]: INFO : POST message to Packet Timeline
Dec 13 04:02:42.116734 ignition[1096]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 04:02:42.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:42.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:42.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:42.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:42.258218 iscsid[903]: iscsid shutting down.
Dec 13 04:02:42.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.903783 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 04:02:42.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:42.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:41.970848 systemd[1]: Stopped target paths.target.
Dec 13 04:02:41.984681 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 04:02:41.987682 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 04:02:42.005761 systemd[1]: Stopped target slices.target.
Dec 13 04:02:42.020758 systemd[1]: Stopped target sockets.target.
Dec 13 04:02:42.039787 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 04:02:42.039944 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 04:02:42.057048 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 04:02:42.057338 systemd[1]: Stopped ignition-files.service.
Dec 13 04:02:42.072232 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 04:02:42.072637 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 04:02:42.093210 systemd[1]: Stopping ignition-mount.service...
Dec 13 04:02:42.105805 systemd[1]: Stopping iscsid.service...
Dec 13 04:02:42.125611 systemd[1]: Stopping sysroot-boot.service...
Dec 13 04:02:42.130830 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 04:02:42.131226 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 04:02:42.152056 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 04:02:42.152384 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 04:02:42.179807 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 04:02:42.181567 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 04:02:42.181807 systemd[1]: Stopped iscsid.service.
Dec 13 04:02:42.200073 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 04:02:42.200315 systemd[1]: Stopped sysroot-boot.service.
Dec 13 04:02:42.218855 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 04:02:42.219113 systemd[1]: Closed iscsid.socket.
Dec 13 04:02:42.233882 systemd[1]: Stopping iscsiuio.service...
Dec 13 04:02:42.242248 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 04:02:42.242555 systemd[1]: Stopped iscsiuio.service.
Dec 13 04:02:42.265566 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 04:02:42.265806 systemd[1]: Finished initrd-cleanup.service.
Dec 13 04:02:42.282716 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 04:02:42.282812 systemd[1]: Closed iscsiuio.socket.
Dec 13 04:02:42.596699 ignition[1096]: INFO : GET result: OK
Dec 13 04:02:43.042692 ignition[1096]: INFO : Ignition finished successfully
Dec 13 04:02:43.045491 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 04:02:43.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.045825 systemd[1]: Stopped ignition-mount.service.
Dec 13 04:02:43.060139 systemd[1]: Stopped target network.target.
Dec 13 04:02:43.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.075694 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 04:02:43.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.075863 systemd[1]: Stopped ignition-disks.service.
Dec 13 04:02:43.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.090799 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 04:02:43.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.090937 systemd[1]: Stopped ignition-kargs.service.
Dec 13 04:02:43.105891 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 04:02:43.106049 systemd[1]: Stopped ignition-setup.service.
Dec 13 04:02:43.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.121889 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 04:02:43.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.200000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 04:02:43.122043 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 04:02:43.137116 systemd[1]: Stopping systemd-networkd.service...
Dec 13 04:02:43.147581 systemd-networkd[878]: enp1s0f1np1: DHCPv6 lease lost
Dec 13 04:02:43.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.151959 systemd[1]: Stopping systemd-resolved.service...
Dec 13 04:02:43.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.157653 systemd-networkd[878]: enp1s0f0np0: DHCPv6 lease lost
Dec 13 04:02:43.272000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 04:02:43.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.166296 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 04:02:43.166553 systemd[1]: Stopped systemd-resolved.service.
Dec 13 04:02:43.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.183436 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 04:02:43.183700 systemd[1]: Stopped systemd-networkd.service.
Dec 13 04:02:43.199172 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 04:02:43.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.199265 systemd[1]: Closed systemd-networkd.socket.
Dec 13 04:02:43.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.219257 systemd[1]: Stopping network-cleanup.service...
Dec 13 04:02:43.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.231670 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 04:02:43.231818 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 04:02:43.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.248824 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 04:02:43.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.248957 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 04:02:43.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.265096 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 04:02:43.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:02:43.265239 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 04:02:43.281009 systemd[1]: Stopping systemd-udevd.service...
Dec 13 04:02:43.299497 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 04:02:43.300744 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 04:02:43.300803 systemd[1]: Stopped systemd-udevd.service.
Dec 13 04:02:43.305816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 04:02:43.305839 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 04:02:43.326641 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 04:02:43.326681 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 04:02:43.343629 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 04:02:43.343690 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 04:02:43.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:43.358810 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 04:02:43.358936 systemd[1]: Stopped dracut-cmdline.service. Dec 13 04:02:43.374793 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 04:02:43.374923 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 04:02:43.391633 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 04:02:43.405513 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 04:02:43.405545 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 04:02:43.422463 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 04:02:43.422605 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 04:02:43.651456 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Dec 13 04:02:43.437771 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 04:02:43.437910 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 04:02:43.456319 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 04:02:43.457745 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 04:02:43.457960 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Dec 13 04:02:43.559934 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 04:02:43.560144 systemd[1]: Stopped network-cleanup.service. Dec 13 04:02:43.572072 systemd[1]: Reached target initrd-switch-root.target. Dec 13 04:02:43.588274 systemd[1]: Starting initrd-switch-root.service... Dec 13 04:02:43.607421 systemd[1]: Switching root. Dec 13 04:02:43.651791 systemd-journald[267]: Journal stopped Dec 13 04:02:47.656240 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 04:02:47.656254 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 04:02:47.656262 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 04:02:47.656268 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 04:02:47.656273 kernel: SELinux: policy capability open_perms=1 Dec 13 04:02:47.656278 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 04:02:47.656284 kernel: SELinux: policy capability always_check_network=0 Dec 13 04:02:47.656289 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 04:02:47.656295 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 04:02:47.656301 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 04:02:47.656306 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 04:02:47.656312 systemd[1]: Successfully loaded SELinux policy in 321.791ms. Dec 13 04:02:47.656319 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.371ms. Dec 13 04:02:47.656326 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 04:02:47.656333 systemd[1]: Detected architecture x86-64. Dec 13 04:02:47.656339 systemd[1]: Detected first boot. 
Dec 13 04:02:47.656345 systemd[1]: Hostname set to . Dec 13 04:02:47.656351 systemd[1]: Initializing machine ID from random generator. Dec 13 04:02:47.656357 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 04:02:47.656363 systemd[1]: Populated /etc with preset unit settings. Dec 13 04:02:47.656369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:02:47.656376 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:02:47.656383 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:02:47.656389 kernel: kauditd_printk_skb: 50 callbacks suppressed Dec 13 04:02:47.656395 kernel: audit: type=1334 audit(1734062565.976:93): prog-id=12 op=LOAD Dec 13 04:02:47.656401 kernel: audit: type=1334 audit(1734062565.977:94): prog-id=3 op=UNLOAD Dec 13 04:02:47.656406 kernel: audit: type=1334 audit(1734062566.021:95): prog-id=13 op=LOAD Dec 13 04:02:47.656412 kernel: audit: type=1334 audit(1734062566.067:96): prog-id=14 op=LOAD Dec 13 04:02:47.656418 kernel: audit: type=1334 audit(1734062566.067:97): prog-id=4 op=UNLOAD Dec 13 04:02:47.656424 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 04:02:47.656430 kernel: audit: type=1334 audit(1734062566.067:98): prog-id=5 op=UNLOAD Dec 13 04:02:47.656436 kernel: audit: type=1131 audit(1734062566.067:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.656444 systemd[1]: Stopped initrd-switch-root.service. 
Dec 13 04:02:47.656450 kernel: audit: type=1334 audit(1734062566.225:100): prog-id=12 op=UNLOAD Dec 13 04:02:47.656481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 04:02:47.656488 kernel: audit: type=1130 audit(1734062566.240:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.656495 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 04:02:47.656501 kernel: audit: type=1131 audit(1734062566.240:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.656524 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 04:02:47.656531 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 04:02:47.656538 systemd[1]: Created slice system-getty.slice. Dec 13 04:02:47.656545 systemd[1]: Created slice system-modprobe.slice. Dec 13 04:02:47.656551 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 04:02:47.656558 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 04:02:47.656565 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 04:02:47.656571 systemd[1]: Created slice user.slice. Dec 13 04:02:47.656577 systemd[1]: Started systemd-ask-password-console.path. Dec 13 04:02:47.656583 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 04:02:47.656590 systemd[1]: Set up automount boot.automount. Dec 13 04:02:47.656596 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 04:02:47.656602 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 04:02:47.656608 systemd[1]: Stopped target initrd-fs.target. Dec 13 04:02:47.656615 systemd[1]: Stopped target initrd-root-fs.target. 
Dec 13 04:02:47.656622 systemd[1]: Reached target integritysetup.target. Dec 13 04:02:47.656628 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 04:02:47.656634 systemd[1]: Reached target remote-fs.target. Dec 13 04:02:47.656640 systemd[1]: Reached target slices.target. Dec 13 04:02:47.656647 systemd[1]: Reached target swap.target. Dec 13 04:02:47.656653 systemd[1]: Reached target torcx.target. Dec 13 04:02:47.656659 systemd[1]: Reached target veritysetup.target. Dec 13 04:02:47.656667 systemd[1]: Listening on systemd-coredump.socket. Dec 13 04:02:47.656673 systemd[1]: Listening on systemd-initctl.socket. Dec 13 04:02:47.656679 systemd[1]: Listening on systemd-networkd.socket. Dec 13 04:02:47.656686 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 04:02:47.656693 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 04:02:47.656700 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 04:02:47.656706 systemd[1]: Mounting dev-hugepages.mount... Dec 13 04:02:47.656713 systemd[1]: Mounting dev-mqueue.mount... Dec 13 04:02:47.656719 systemd[1]: Mounting media.mount... Dec 13 04:02:47.656726 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:02:47.656732 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 04:02:47.656739 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 04:02:47.656745 systemd[1]: Mounting tmp.mount... Dec 13 04:02:47.656752 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 04:02:47.656759 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:02:47.656765 systemd[1]: Starting kmod-static-nodes.service... Dec 13 04:02:47.656772 systemd[1]: Starting modprobe@configfs.service... Dec 13 04:02:47.656778 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:02:47.656784 systemd[1]: Starting modprobe@drm.service... Dec 13 04:02:47.656791 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 04:02:47.656797 systemd[1]: Starting modprobe@fuse.service... Dec 13 04:02:47.656803 kernel: fuse: init (API version 7.34) Dec 13 04:02:47.656810 systemd[1]: Starting modprobe@loop.service... Dec 13 04:02:47.656817 kernel: loop: module loaded Dec 13 04:02:47.656823 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 04:02:47.656829 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 04:02:47.656836 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 04:02:47.656842 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 04:02:47.656849 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 04:02:47.656855 systemd[1]: Stopped systemd-journald.service. Dec 13 04:02:47.656861 systemd[1]: Starting systemd-journald.service... Dec 13 04:02:47.656869 systemd[1]: Starting systemd-modules-load.service... Dec 13 04:02:47.656877 systemd-journald[1251]: Journal started Dec 13 04:02:47.656902 systemd-journald[1251]: Runtime Journal (/run/log/journal/b6b3c8608f0c406dbb0c8ee66771bcc7) is 8.0M, max 640.1M, 632.1M free. 
Dec 13 04:02:44.057000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 04:02:44.355000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 04:02:44.358000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 04:02:44.358000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 04:02:44.358000 audit: BPF prog-id=10 op=LOAD Dec 13 04:02:44.358000 audit: BPF prog-id=10 op=UNLOAD Dec 13 04:02:44.358000 audit: BPF prog-id=11 op=LOAD Dec 13 04:02:44.358000 audit: BPF prog-id=11 op=UNLOAD Dec 13 04:02:44.427000 audit[1138]: AVC avc: denied { associate } for pid=1138 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 04:02:44.427000 audit[1138]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001d98a2 a1=c00015adf8 a2=c0001630c0 a3=32 items=0 ppid=1121 pid=1138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:02:44.427000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 04:02:44.453000 audit[1138]: AVC 
avc: denied { associate } for pid=1138 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 04:02:44.453000 audit[1138]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d9979 a2=1ed a3=0 items=2 ppid=1121 pid=1138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:02:44.453000 audit: CWD cwd="/" Dec 13 04:02:44.453000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:44.453000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:44.453000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 04:02:45.976000 audit: BPF prog-id=12 op=LOAD Dec 13 04:02:45.977000 audit: BPF prog-id=3 op=UNLOAD Dec 13 04:02:46.021000 audit: BPF prog-id=13 op=LOAD Dec 13 04:02:46.067000 audit: BPF prog-id=14 op=LOAD Dec 13 04:02:46.067000 audit: BPF prog-id=4 op=UNLOAD Dec 13 04:02:46.067000 audit: BPF prog-id=5 op=UNLOAD Dec 13 04:02:46.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:46.225000 audit: BPF prog-id=12 op=UNLOAD Dec 13 04:02:46.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:46.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:47.629000 audit: BPF prog-id=15 op=LOAD Dec 13 04:02:47.630000 audit: BPF prog-id=16 op=LOAD Dec 13 04:02:47.630000 audit: BPF prog-id=17 op=LOAD Dec 13 04:02:47.630000 audit: BPF prog-id=13 op=UNLOAD Dec 13 04:02:47.630000 audit: BPF prog-id=14 op=UNLOAD Dec 13 04:02:47.653000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 04:02:47.653000 audit[1251]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc8b3601f0 a2=4000 a3=7ffc8b36028c items=0 ppid=1 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:02:47.653000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 04:02:45.975470 systemd[1]: Queued start job for default target multi-user.target. Dec 13 04:02:44.425455 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:02:46.067957 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 04:02:44.425952 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 04:02:44.425970 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 04:02:44.425994 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 04:02:44.426003 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 04:02:44.426029 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 04:02:44.426039 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 04:02:44.426186 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 04:02:44.426216 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 04:02:44.426227 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 04:02:44.426994 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 04:02:44.427021 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 04:02:44.427035 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 04:02:44.427046 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 04:02:44.427059 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 04:02:44.427070 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 04:02:45.620770 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:45Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:02:45.620920 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:45Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:02:45.620977 
/usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:45Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:02:45.621071 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:45Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:02:45.621101 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:45Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 04:02:45.621136 /usr/lib/systemd/system-generators/torcx-generator[1138]: time="2024-12-13T04:02:45Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 04:02:47.687631 systemd[1]: Starting systemd-network-generator.service... Dec 13 04:02:47.709496 systemd[1]: Starting systemd-remount-fs.service... Dec 13 04:02:47.731513 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 04:02:47.764047 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 04:02:47.764069 systemd[1]: Stopped verity-setup.service. Dec 13 04:02:47.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:47.798486 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:02:47.813634 systemd[1]: Started systemd-journald.service. Dec 13 04:02:47.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.821980 systemd[1]: Mounted dev-hugepages.mount. Dec 13 04:02:47.829709 systemd[1]: Mounted dev-mqueue.mount. Dec 13 04:02:47.837703 systemd[1]: Mounted media.mount. Dec 13 04:02:47.845711 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 04:02:47.854665 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 04:02:47.862648 systemd[1]: Mounted tmp.mount. Dec 13 04:02:47.869745 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 04:02:47.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.877773 systemd[1]: Finished kmod-static-nodes.service. Dec 13 04:02:47.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.886795 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 04:02:47.886914 systemd[1]: Finished modprobe@configfs.service. Dec 13 04:02:47.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:47.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.895925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:02:47.896086 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:02:47.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.905081 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 04:02:47.905271 systemd[1]: Finished modprobe@drm.service. Dec 13 04:02:47.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.914270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:02:47.914596 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:02:47.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:47.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.923267 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 04:02:47.923582 systemd[1]: Finished modprobe@fuse.service. Dec 13 04:02:47.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.932255 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:02:47.932629 systemd[1]: Finished modprobe@loop.service. Dec 13 04:02:47.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.941376 systemd[1]: Finished systemd-modules-load.service. Dec 13 04:02:47.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.950247 systemd[1]: Finished systemd-network-generator.service. 
Dec 13 04:02:47.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.959200 systemd[1]: Finished systemd-remount-fs.service. Dec 13 04:02:47.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.968242 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 04:02:47.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:47.977975 systemd[1]: Reached target network-pre.target. Dec 13 04:02:47.989220 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 04:02:47.998149 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 04:02:48.005641 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 04:02:48.006688 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 04:02:48.015126 systemd[1]: Starting systemd-journal-flush.service... Dec 13 04:02:48.018322 systemd-journald[1251]: Time spent on flushing to /var/log/journal/b6b3c8608f0c406dbb0c8ee66771bcc7 is 14.894ms for 1571 entries. Dec 13 04:02:48.018322 systemd-journald[1251]: System Journal (/var/log/journal/b6b3c8608f0c406dbb0c8ee66771bcc7) is 8.0M, max 195.6M, 187.6M free. Dec 13 04:02:48.065258 systemd-journald[1251]: Received client request to flush runtime journal. Dec 13 04:02:48.031572 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 04:02:48.032061 systemd[1]: Starting systemd-random-seed.service... Dec 13 04:02:48.042570 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:02:48.043073 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:02:48.051037 systemd[1]: Starting systemd-sysusers.service... Dec 13 04:02:48.058042 systemd[1]: Starting systemd-udev-settle.service... Dec 13 04:02:48.065795 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 04:02:48.073664 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 04:02:48.081688 systemd[1]: Finished systemd-journal-flush.service. Dec 13 04:02:48.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:48.089740 systemd[1]: Finished systemd-random-seed.service. Dec 13 04:02:48.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:48.097715 systemd[1]: Finished systemd-sysctl.service. Dec 13 04:02:48.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:48.105705 systemd[1]: Finished systemd-sysusers.service. Dec 13 04:02:48.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:48.114674 systemd[1]: Reached target first-boot-complete.target. Dec 13 04:02:48.123226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Dec 13 04:02:48.132577 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 04:02:48.140386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 04:02:48.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:48.313790 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 04:02:48.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:48.322000 audit: BPF prog-id=18 op=LOAD Dec 13 04:02:48.322000 audit: BPF prog-id=19 op=LOAD Dec 13 04:02:48.322000 audit: BPF prog-id=7 op=UNLOAD Dec 13 04:02:48.322000 audit: BPF prog-id=8 op=UNLOAD Dec 13 04:02:48.323746 systemd[1]: Starting systemd-udevd.service... Dec 13 04:02:48.335657 systemd-udevd[1271]: Using default interface naming scheme 'v252'. Dec 13 04:02:48.351332 systemd[1]: Started systemd-udevd.service. Dec 13 04:02:48.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:48.362368 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Dec 13 04:02:48.362000 audit: BPF prog-id=20 op=LOAD Dec 13 04:02:48.363735 systemd[1]: Starting systemd-networkd.service... 
Dec 13 04:02:48.394876 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 04:02:48.394939 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Dec 13 04:02:48.399459 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 04:02:48.412000 audit: BPF prog-id=21 op=LOAD Dec 13 04:02:48.429445 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 04:02:48.429000 audit: BPF prog-id=22 op=LOAD Dec 13 04:02:48.429000 audit: BPF prog-id=23 op=LOAD Dec 13 04:02:48.430290 systemd[1]: Starting systemd-userdbd.service... Dec 13 04:02:48.430464 kernel: IPMI message handler: version 39.2 Dec 13 04:02:48.444528 kernel: ACPI: button: Power Button [PWRF] Dec 13 04:02:48.458469 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1281) Dec 13 04:02:48.403000 audit[1285]: AVC avc: denied { confidentiality } for pid=1285 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 04:02:48.403000 audit[1285]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a6a63309b0 a1=4d98c a2=7fe7c8923bc5 a3=5 items=42 ppid=1271 pid=1285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:02:48.403000 audit: CWD cwd="/" Dec 13 04:02:48.403000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=1 name=(null) inode=19300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=2 name=(null) inode=19300 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=3 name=(null) inode=19301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=4 name=(null) inode=19300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=5 name=(null) inode=19302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=6 name=(null) inode=19300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=7 name=(null) inode=19303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=8 name=(null) inode=19303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=9 name=(null) inode=19304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=10 name=(null) inode=19303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=11 name=(null) inode=19305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=12 name=(null) inode=19303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=13 name=(null) inode=19306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=14 name=(null) inode=19303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=15 name=(null) inode=19307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=16 name=(null) inode=19303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=17 name=(null) inode=19308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=18 name=(null) inode=19300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=19 name=(null) inode=19309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=20 name=(null) inode=19309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=21 name=(null) inode=19310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=22 name=(null) inode=19309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=23 name=(null) inode=19311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=24 name=(null) inode=19309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=25 name=(null) inode=19312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=26 name=(null) inode=19309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=27 name=(null) inode=19313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=28 name=(null) inode=19309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=29 name=(null) inode=19314 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=30 name=(null) inode=19300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=31 name=(null) inode=19315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=32 name=(null) inode=19315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=33 name=(null) inode=19316 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=34 name=(null) inode=19315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=35 name=(null) inode=19317 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=36 name=(null) inode=19315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=37 name=(null) inode=19318 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=38 name=(null) inode=19315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
04:02:48.403000 audit: PATH item=39 name=(null) inode=19319 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=40 name=(null) inode=19315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PATH item=41 name=(null) inode=19320 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:02:48.403000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 04:02:48.494463 kernel: ipmi device interface Dec 13 04:02:48.515451 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Dec 13 04:02:48.515611 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Dec 13 04:02:48.516138 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 04:02:48.573983 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Dec 13 04:02:48.607238 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Dec 13 04:02:48.607342 kernel: ipmi_si: IPMI System Interface driver Dec 13 04:02:48.607357 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI) Dec 13 04:02:48.607442 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Dec 13 04:02:48.655383 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Dec 13 04:02:48.655400 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Dec 13 04:02:48.655411 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Dec 13 04:02:48.759253 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Dec 13 04:02:48.759387 kernel: iTCO_vendor_support: vendor-support=0 Dec 13 04:02:48.759402 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Dec 13 04:02:48.759474 kernel: ipmi_si: Adding ACPI-specified kcs state machine Dec 13 04:02:48.759488 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Dec 13 04:02:48.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:48.675226 systemd[1]: Started systemd-userdbd.service. Dec 13 04:02:48.800463 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Dec 13 04:02:48.818423 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Dec 13 04:02:48.818505 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Dec 13 04:02:48.817374 systemd-networkd[1323]: bond0: netdev ready Dec 13 04:02:48.822129 systemd-networkd[1323]: lo: Link UP Dec 13 04:02:48.822141 systemd-networkd[1323]: lo: Gained carrier Dec 13 04:02:48.823202 systemd-networkd[1323]: Enumeration completed Dec 13 04:02:48.823303 systemd[1]: Started systemd-networkd.service. Dec 13 04:02:48.825053 systemd-networkd[1323]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 04:02:48.826013 systemd-networkd[1323]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2d.network. Dec 13 04:02:48.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:48.887070 kernel: intel_rapl_common: Found RAPL domain package Dec 13 04:02:48.887104 kernel: intel_rapl_common: Found RAPL domain core Dec 13 04:02:48.887116 kernel: intel_rapl_common: Found RAPL domain dram Dec 13 04:02:48.902996 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Dec 13 04:02:48.972483 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Dec 13 04:02:48.990446 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 04:02:49.211527 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 04:02:49.234484 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Dec 13 04:02:49.236336 systemd-networkd[1323]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. 
Dec 13 04:02:49.260495 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 04:02:49.388614 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 04:02:49.440491 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 04:02:49.463443 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Dec 13 04:02:49.482466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Dec 13 04:02:49.485702 systemd[1]: Finished systemd-udev-settle.service. Dec 13 04:02:49.490971 systemd-networkd[1323]: bond0: Link UP Dec 13 04:02:49.491161 systemd-networkd[1323]: enp1s0f1np1: Link UP Dec 13 04:02:49.491315 systemd-networkd[1323]: enp1s0f1np1: Gained carrier Dec 13 04:02:49.492350 systemd-networkd[1323]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. Dec 13 04:02:49.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:49.494170 systemd[1]: Starting lvm2-activation-early.service... Dec 13 04:02:49.510406 lvm[1379]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 04:02:49.541444 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Dec 13 04:02:49.541469 kernel: bond0: active interface up! Dec 13 04:02:49.567472 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Dec 13 04:02:49.581916 systemd[1]: Finished lvm2-activation-early.service. Dec 13 04:02:49.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:49.590614 systemd[1]: Reached target cryptsetup.target. Dec 13 04:02:49.599171 systemd[1]: Starting lvm2-activation.service... Dec 13 04:02:49.601284 lvm[1380]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 04:02:49.630858 systemd[1]: Finished lvm2-activation.service. Dec 13 04:02:49.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:49.638563 systemd[1]: Reached target local-fs-pre.target. Dec 13 04:02:49.646490 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 04:02:49.646505 systemd[1]: Reached target local-fs.target. Dec 13 04:02:49.654519 systemd[1]: Reached target machines.target. Dec 13 04:02:49.663068 systemd[1]: Starting ldconfig.service... Dec 13 04:02:49.670273 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:02:49.670294 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:02:49.670823 systemd[1]: Starting systemd-boot-update.service... Dec 13 04:02:49.687017 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 04:02:49.694446 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.695082 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 04:02:49.695768 systemd[1]: Starting systemd-sysext.service... Dec 13 04:02:49.695963 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1382 (bootctl) Dec 13 04:02:49.696515 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Dec 13 04:02:49.717490 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.739720 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 04:02:49.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:49.740451 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.741795 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 04:02:49.762478 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.762477 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 04:02:49.762560 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 04:02:49.784477 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.784508 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 04:02:49.800486 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.841444 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.842837 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 04:02:49.843142 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 04:02:49.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:49.863444 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.884445 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.884479 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 04:02:49.899865 systemd-fsck[1391]: fsck.fat 4.2 (2021-01-31) Dec 13 04:02:49.899865 systemd-fsck[1391]: /dev/sdb1: 789 files, 119291/258078 clusters Dec 13 04:02:49.900448 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.908780 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 04:02:49.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:49.939500 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.939681 systemd[1]: Mounting boot.mount... Dec 13 04:02:49.954227 systemd[1]: Mounted boot.mount. Dec 13 04:02:49.961446 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.982450 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:49.987605 systemd[1]: Finished systemd-boot-update.service. Dec 13 04:02:50.001445 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:50.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:02:50.021448 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:50.040446 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:50.040472 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 04:02:50.055444 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:50.070268 (sd-sysext)[1395]: Using extensions 'kubernetes'. Dec 13 04:02:50.070447 (sd-sysext)[1395]: Merged extensions into '/usr'. Dec 13 04:02:50.070716 ldconfig[1381]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 04:02:50.072441 systemd[1]: Finished ldconfig.service. Dec 13 04:02:50.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:50.091480 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:50.097324 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:02:50.098118 systemd[1]: Mounting usr-share-oem.mount... Dec 13 04:02:50.110486 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:50.111098 systemd-networkd[1323]: enp1s0f0np0: Link UP Dec 13 04:02:50.111257 systemd-networkd[1323]: bond0: Gained carrier Dec 13 04:02:50.111343 systemd-networkd[1323]: enp1s0f0np0: Gained carrier Dec 13 04:02:50.125651 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.126270 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 04:02:50.130493 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 04:02:50.130521 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Dec 13 04:02:50.147749 systemd-networkd[1323]: enp1s0f1np1: Link DOWN Dec 13 04:02:50.147751 systemd-networkd[1323]: enp1s0f1np1: Lost carrier Dec 13 04:02:50.152066 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:02:50.159031 systemd[1]: Starting modprobe@loop.service... Dec 13 04:02:50.165556 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.165625 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:02:50.165688 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:02:50.167216 systemd[1]: Mounted usr-share-oem.mount. Dec 13 04:02:50.173695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:02:50.173757 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:02:50.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:50.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:50.181723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:02:50.181782 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 04:02:50.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:50.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:50.189704 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:02:50.189763 systemd[1]: Finished modprobe@loop.service. Dec 13 04:02:50.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:50.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:50.197732 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:02:50.197792 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.198324 systemd[1]: Finished systemd-sysext.service. Dec 13 04:02:50.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:50.207135 systemd[1]: Starting ensure-sysext.service... Dec 13 04:02:50.214044 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 04:02:50.219862 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Dec 13 04:02:50.221200 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 04:02:50.222187 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 04:02:50.223654 systemd[1]: Reloading. Dec 13 04:02:50.248186 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2024-12-13T04:02:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:02:50.248202 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2024-12-13T04:02:50Z" level=info msg="torcx already run" Dec 13 04:02:50.302331 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:02:50.302339 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:02:50.308442 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 04:02:50.313491 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 04:02:50.325441 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Dec 13 04:02:50.327491 systemd-networkd[1323]: enp1s0f1np1: Link UP Dec 13 04:02:50.327645 systemd-networkd[1323]: enp1s0f1np1: Gained carrier Dec 13 04:02:50.351443 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Dec 13 04:02:50.367000 audit: BPF prog-id=24 op=LOAD Dec 13 04:02:50.367000 audit: BPF prog-id=21 op=UNLOAD Dec 13 04:02:50.367000 audit: BPF prog-id=25 op=LOAD Dec 13 04:02:50.367000 audit: BPF prog-id=26 op=LOAD Dec 13 04:02:50.367000 audit: BPF prog-id=22 op=UNLOAD Dec 13 04:02:50.367000 audit: BPF prog-id=23 op=UNLOAD Dec 13 04:02:50.368442 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Dec 13 04:02:50.369000 audit: BPF prog-id=27 op=LOAD Dec 13 04:02:50.369000 audit: BPF prog-id=15 op=UNLOAD Dec 13 04:02:50.369000 audit: BPF prog-id=28 op=LOAD Dec 13 04:02:50.369000 audit: BPF prog-id=29 op=LOAD Dec 13 04:02:50.369000 audit: BPF prog-id=16 op=UNLOAD Dec 13 04:02:50.369000 audit: BPF prog-id=17 op=UNLOAD Dec 13 04:02:50.369000 audit: BPF prog-id=30 op=LOAD Dec 13 04:02:50.369000 audit: BPF prog-id=20 op=UNLOAD Dec 13 04:02:50.370000 audit: BPF prog-id=31 op=LOAD Dec 13 04:02:50.370000 audit: BPF prog-id=32 op=LOAD Dec 13 04:02:50.370000 audit: BPF prog-id=18 op=UNLOAD Dec 13 04:02:50.370000 audit: BPF prog-id=19 op=UNLOAD Dec 13 04:02:50.371869 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 04:02:50.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:02:50.382354 systemd[1]: Starting audit-rules.service... Dec 13 04:02:50.390026 systemd[1]: Starting clean-ca-certificates.service... 
Dec 13 04:02:50.399000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 04:02:50.399000 audit[1501]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd86e40c20 a2=420 a3=0 items=0 ppid=1485 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:02:50.399000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 04:02:50.400046 augenrules[1501]: No rules Dec 13 04:02:50.400106 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 04:02:50.410486 systemd[1]: Starting systemd-resolved.service... Dec 13 04:02:50.418520 systemd[1]: Starting systemd-timesyncd.service... Dec 13 04:02:50.426111 systemd[1]: Starting systemd-update-utmp.service... Dec 13 04:02:50.432816 systemd[1]: Finished audit-rules.service. Dec 13 04:02:50.439651 systemd[1]: Finished clean-ca-certificates.service. Dec 13 04:02:50.448600 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 04:02:50.461640 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.462322 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:02:50.470101 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:02:50.477081 systemd[1]: Starting modprobe@loop.service... Dec 13 04:02:50.483529 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.483648 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:02:50.484401 systemd[1]: Starting systemd-update-done.service... 
Dec 13 04:02:50.489593 systemd-resolved[1507]: Positive Trust Anchors: Dec 13 04:02:50.489599 systemd-resolved[1507]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 04:02:50.489618 systemd-resolved[1507]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 04:02:50.491527 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 04:02:50.492294 systemd[1]: Started systemd-timesyncd.service. Dec 13 04:02:50.493776 systemd-resolved[1507]: Using system hostname 'ci-3510.3.6-a-746d3338a6'. Dec 13 04:02:50.500809 systemd[1]: Started systemd-resolved.service. Dec 13 04:02:50.508769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:02:50.508839 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:02:50.516765 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:02:50.516829 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:02:50.524760 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:02:50.524822 systemd[1]: Finished modprobe@loop.service. Dec 13 04:02:50.532761 systemd[1]: Finished systemd-update-done.service. Dec 13 04:02:50.540829 systemd[1]: Reached target network.target. Dec 13 04:02:50.548611 systemd[1]: Reached target nss-lookup.target. Dec 13 04:02:50.556581 systemd[1]: Reached target time-set.target. 
Dec 13 04:02:50.564558 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:02:50.564614 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.564888 systemd[1]: Finished systemd-update-utmp.service. Dec 13 04:02:50.574737 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.575395 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:02:50.583066 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:02:50.590021 systemd[1]: Starting modprobe@loop.service... Dec 13 04:02:50.596535 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.596604 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:02:50.596659 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 04:02:50.597116 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:02:50.597179 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:02:50.605716 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:02:50.605774 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:02:50.613695 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:02:50.613752 systemd[1]: Finished modprobe@loop.service. Dec 13 04:02:50.621691 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:02:50.621760 systemd[1]: Reached target sysinit.target. Dec 13 04:02:50.629603 systemd[1]: Started motdgen.path. 
Dec 13 04:02:50.636579 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 04:02:50.646748 systemd[1]: Started logrotate.timer. Dec 13 04:02:50.653610 systemd[1]: Started mdadm.timer. Dec 13 04:02:50.660565 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 04:02:50.668540 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 04:02:50.668599 systemd[1]: Reached target paths.target. Dec 13 04:02:50.675557 systemd[1]: Reached target timers.target. Dec 13 04:02:50.682712 systemd[1]: Listening on dbus.socket. Dec 13 04:02:50.690054 systemd[1]: Starting docker.socket... Dec 13 04:02:50.698033 systemd[1]: Listening on sshd.socket. Dec 13 04:02:50.704631 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:02:50.704694 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.705383 systemd[1]: Listening on docker.socket. Dec 13 04:02:50.713358 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 04:02:50.713420 systemd[1]: Reached target sockets.target. Dec 13 04:02:50.721569 systemd[1]: Reached target basic.target. Dec 13 04:02:50.728569 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.728621 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 04:02:50.729152 systemd[1]: Starting containerd.service... Dec 13 04:02:50.738143 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 04:02:50.747070 systemd[1]: Starting coreos-metadata.service... Dec 13 04:02:50.754086 systemd[1]: Starting dbus.service... 
Dec 13 04:02:50.760277 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 04:02:50.765989 jq[1528]: false Dec 13 04:02:50.767261 systemd[1]: Starting extend-filesystems.service... Dec 13 04:02:50.771653 coreos-metadata[1521]: Dec 13 04:02:50.771 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 04:02:50.773542 dbus-daemon[1527]: [system] SELinux support is enabled Dec 13 04:02:50.773572 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 04:02:50.774619 systemd[1]: Starting modprobe@drm.service... Dec 13 04:02:50.776162 extend-filesystems[1529]: Found loop1 Dec 13 04:02:50.796581 extend-filesystems[1529]: Found sda Dec 13 04:02:50.796581 extend-filesystems[1529]: Found sdb Dec 13 04:02:50.796581 extend-filesystems[1529]: Found sdb1 Dec 13 04:02:50.796581 extend-filesystems[1529]: Found sdb2 Dec 13 04:02:50.796581 extend-filesystems[1529]: Found sdb3 Dec 13 04:02:50.796581 extend-filesystems[1529]: Found usr Dec 13 04:02:50.796581 extend-filesystems[1529]: Found sdb4 Dec 13 04:02:50.796581 extend-filesystems[1529]: Found sdb6 Dec 13 04:02:50.796581 extend-filesystems[1529]: Found sdb7 Dec 13 04:02:50.796581 extend-filesystems[1529]: Found sdb9 Dec 13 04:02:50.796581 extend-filesystems[1529]: Checking size of /dev/sdb9 Dec 13 04:02:50.796581 extend-filesystems[1529]: Resized partition /dev/sdb9 Dec 13 04:02:50.959555 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Dec 13 04:02:50.959589 coreos-metadata[1524]: Dec 13 04:02:50.778 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 04:02:50.782310 systemd[1]: Starting motdgen.service... Dec 13 04:02:50.959747 extend-filesystems[1539]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 04:02:50.814256 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 04:02:50.835055 systemd[1]: Starting sshd-keygen.service... 
Dec 13 04:02:50.851044 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 04:02:50.867472 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:02:50.974850 update_engine[1558]: I1213 04:02:50.934444 1558 main.cc:92] Flatcar Update Engine starting Dec 13 04:02:50.974850 update_engine[1558]: I1213 04:02:50.937677 1558 update_check_scheduler.cc:74] Next update check in 5m30s Dec 13 04:02:50.868079 systemd[1]: Starting tcsd.service... Dec 13 04:02:50.975035 jq[1559]: true Dec 13 04:02:50.887793 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 04:02:50.888194 systemd[1]: Starting update-engine.service... Dec 13 04:02:50.904046 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 04:02:50.921712 systemd[1]: Started dbus.service. Dec 13 04:02:50.936486 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 04:02:50.936601 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 04:02:50.936833 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 04:02:50.936896 systemd[1]: Finished modprobe@drm.service. Dec 13 04:02:50.951804 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 04:02:50.951883 systemd[1]: Finished motdgen.service. Dec 13 04:02:50.966794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 04:02:50.966868 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 04:02:50.986388 jq[1561]: true Dec 13 04:02:50.986899 systemd[1]: Finished ensure-sysext.service. 
Dec 13 04:02:50.996031 env[1562]: time="2024-12-13T04:02:50.996005651Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 04:02:51.002703 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Dec 13 04:02:51.002795 systemd[1]: Condition check resulted in tcsd.service being skipped. Dec 13 04:02:51.004918 env[1562]: time="2024-12-13T04:02:51.004893479Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 04:02:51.005046 env[1562]: time="2024-12-13T04:02:51.004992551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:02:51.005721 env[1562]: time="2024-12-13T04:02:51.005698792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:02:51.005721 env[1562]: time="2024-12-13T04:02:51.005713995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:02:51.005862 env[1562]: time="2024-12-13T04:02:51.005821789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:02:51.005862 env[1562]: time="2024-12-13T04:02:51.005832006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 04:02:51.005862 env[1562]: time="2024-12-13T04:02:51.005838949Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 04:02:51.005862 env[1562]: time="2024-12-13T04:02:51.005844012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 04:02:51.005934 env[1562]: time="2024-12-13T04:02:51.005883448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:02:51.006038 env[1562]: time="2024-12-13T04:02:51.006005584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:02:51.006144 env[1562]: time="2024-12-13T04:02:51.006110680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:02:51.006144 env[1562]: time="2024-12-13T04:02:51.006119489Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 04:02:51.006204 env[1562]: time="2024-12-13T04:02:51.006148200Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 04:02:51.006204 env[1562]: time="2024-12-13T04:02:51.006156008Z" level=info msg="metadata content store policy set" policy=shared Dec 13 04:02:51.008095 systemd[1]: Started update-engine.service. Dec 13 04:02:51.017771 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:02:51.018737 systemd[1]: Started locksmithd.service. 
Dec 13 04:02:51.025558 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 04:02:51.025577 systemd[1]: Reached target system-config.target. Dec 13 04:02:51.039946 systemd[1]: Starting systemd-logind.service... Dec 13 04:02:51.040332 env[1562]: time="2024-12-13T04:02:51.040312135Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 04:02:51.040360 env[1562]: time="2024-12-13T04:02:51.040340907Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 04:02:51.040360 env[1562]: time="2024-12-13T04:02:51.040354678Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 04:02:51.040398 env[1562]: time="2024-12-13T04:02:51.040380293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 04:02:51.040398 env[1562]: time="2024-12-13T04:02:51.040393898Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 04:02:51.040432 env[1562]: time="2024-12-13T04:02:51.040408260Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 04:02:51.040432 env[1562]: time="2024-12-13T04:02:51.040420003Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 04:02:51.046886 env[1562]: time="2024-12-13T04:02:51.040431755Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 04:02:51.046886 env[1562]: time="2024-12-13T04:02:51.040446613Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Dec 13 04:02:51.046886 env[1562]: time="2024-12-13T04:02:51.040459863Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 04:02:51.046886 env[1562]: time="2024-12-13T04:02:51.040471069Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 04:02:51.046886 env[1562]: time="2024-12-13T04:02:51.040483083Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 04:02:51.046513 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 04:02:51.047005 env[1562]: time="2024-12-13T04:02:51.046892598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 04:02:51.047005 env[1562]: time="2024-12-13T04:02:51.046945028Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 04:02:51.046532 systemd[1]: Reached target user-config.target. Dec 13 04:02:51.047386 env[1562]: time="2024-12-13T04:02:51.047325856Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 04:02:51.047419 env[1562]: time="2024-12-13T04:02:51.047400363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047419 env[1562]: time="2024-12-13T04:02:51.047409823Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 04:02:51.047459 env[1562]: time="2024-12-13T04:02:51.047437413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047459 env[1562]: time="2024-12-13T04:02:51.047450287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Dec 13 04:02:51.047459 env[1562]: time="2024-12-13T04:02:51.047457076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047504 env[1562]: time="2024-12-13T04:02:51.047463711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047504 env[1562]: time="2024-12-13T04:02:51.047469988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047504 env[1562]: time="2024-12-13T04:02:51.047476349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047504 env[1562]: time="2024-12-13T04:02:51.047482891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047504 env[1562]: time="2024-12-13T04:02:51.047488877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047504 env[1562]: time="2024-12-13T04:02:51.047496199Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 04:02:51.047599 env[1562]: time="2024-12-13T04:02:51.047561184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047599 env[1562]: time="2024-12-13T04:02:51.047570089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047599 env[1562]: time="2024-12-13T04:02:51.047576445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047599 env[1562]: time="2024-12-13T04:02:51.047582656Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 04:02:51.047599 env[1562]: time="2024-12-13T04:02:51.047591119Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 04:02:51.047682 env[1562]: time="2024-12-13T04:02:51.047600052Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 04:02:51.047682 env[1562]: time="2024-12-13T04:02:51.047611015Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 04:02:51.047682 env[1562]: time="2024-12-13T04:02:51.047632588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 04:02:51.047767 env[1562]: time="2024-12-13T04:02:51.047740831Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.047773057Z" level=info msg="Connect containerd service" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.047793787Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048062067Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048155684Z" level=info msg="Start subscribing containerd event" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048179989Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048190777Z" level=info msg="Start recovering state" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048202294Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048224322Z" level=info msg="containerd successfully booted in 0.052587s" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048235607Z" level=info msg="Start event monitor" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048250233Z" level=info msg="Start snapshots syncer" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048259428Z" level=info msg="Start cni network conf syncer for default" Dec 13 04:02:51.049683 env[1562]: time="2024-12-13T04:02:51.048266460Z" level=info msg="Start streaming server" Dec 13 04:02:51.054483 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:02:51.054621 systemd[1]: Started containerd.service. Dec 13 04:02:51.056381 bash[1594]: Updated "/home/core/.ssh/authorized_keys" Dec 13 04:02:51.062703 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 04:02:51.063585 systemd-logind[1597]: Watching system buttons on /dev/input/event3 (Power Button) Dec 13 04:02:51.063596 systemd-logind[1597]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 04:02:51.063606 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Dec 13 04:02:51.063726 systemd-logind[1597]: New seat seat0. Dec 13 04:02:51.072711 systemd[1]: Started systemd-logind.service. 
Dec 13 04:02:51.079358 locksmithd[1596]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 04:02:51.112604 systemd-networkd[1323]: bond0: Gained IPv6LL Dec 13 04:02:51.163778 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 04:02:51.175497 systemd[1]: Finished sshd-keygen.service. Dec 13 04:02:51.184443 systemd[1]: Starting issuegen.service... Dec 13 04:02:51.191702 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 04:02:51.191778 systemd[1]: Finished issuegen.service. Dec 13 04:02:51.199382 systemd[1]: Starting systemd-user-sessions.service... Dec 13 04:02:51.207717 systemd[1]: Finished systemd-user-sessions.service. Dec 13 04:02:51.216254 systemd[1]: Started getty@tty1.service. Dec 13 04:02:51.223172 systemd[1]: Started serial-getty@ttyS1.service. Dec 13 04:02:51.231635 systemd[1]: Reached target getty.target. Dec 13 04:02:51.368911 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 04:02:51.382518 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Dec 13 04:02:51.391715 systemd[1]: Reached target network-online.target. Dec 13 04:02:51.402095 systemd[1]: Starting kubelet.service... Dec 13 04:02:51.413863 extend-filesystems[1539]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Dec 13 04:02:51.413863 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 56 Dec 13 04:02:51.413863 extend-filesystems[1539]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Dec 13 04:02:51.452520 extend-filesystems[1529]: Resized filesystem in /dev/sdb9 Dec 13 04:02:51.414690 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 04:02:51.414798 systemd[1]: Finished extend-filesystems.service. Dec 13 04:02:52.126275 systemd[1]: Started kubelet.service. 
Dec 13 04:02:52.537510 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Dec 13 04:02:52.607533 kubelet[1627]: E1213 04:02:52.607466 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:02:52.608870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:02:52.608965 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 04:02:56.245400 login[1620]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 04:02:56.252962 systemd[1]: Created slice user-500.slice. Dec 13 04:02:56.253122 login[1619]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 04:02:56.253631 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 04:02:56.254673 systemd-logind[1597]: New session 1 of user core. Dec 13 04:02:56.256889 systemd-logind[1597]: New session 2 of user core. Dec 13 04:02:56.258983 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 04:02:56.259709 systemd[1]: Starting user@500.service... Dec 13 04:02:56.261680 (systemd)[1650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:56.335839 systemd[1650]: Queued start job for default target default.target. Dec 13 04:02:56.336120 systemd[1650]: Reached target paths.target. Dec 13 04:02:56.336137 systemd[1650]: Reached target sockets.target. Dec 13 04:02:56.336151 systemd[1650]: Reached target timers.target. Dec 13 04:02:56.336162 systemd[1650]: Reached target basic.target. Dec 13 04:02:56.336190 systemd[1650]: Reached target default.target. Dec 13 04:02:56.336215 systemd[1650]: Startup finished in 71ms. Dec 13 04:02:56.336250 systemd[1]: Started user@500.service. 
Dec 13 04:02:56.336895 systemd[1]: Started session-1.scope. Dec 13 04:02:56.337209 systemd[1]: Started session-2.scope. Dec 13 04:02:56.616595 coreos-metadata[1521]: Dec 13 04:02:56.616 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 04:02:56.617360 coreos-metadata[1524]: Dec 13 04:02:56.616 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 04:02:57.616874 coreos-metadata[1521]: Dec 13 04:02:57.616 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 04:02:57.617736 coreos-metadata[1524]: Dec 13 04:02:57.616 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 04:02:57.661844 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Dec 13 04:02:57.661998 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Dec 13 04:02:58.672359 systemd[1]: Created slice system-sshd.slice. Dec 13 04:02:58.673053 systemd[1]: Started sshd@0-147.28.180.253:22-139.178.68.195:38936.service. Dec 13 04:02:58.717971 sshd[1671]: Accepted publickey for core from 139.178.68.195 port 38936 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:02:58.719240 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:58.723774 systemd-logind[1597]: New session 3 of user core. Dec 13 04:02:58.724831 systemd[1]: Started session-3.scope. Dec 13 04:02:58.782361 systemd[1]: Started sshd@1-147.28.180.253:22-139.178.68.195:49772.service. 
Dec 13 04:02:58.814787 sshd[1676]: Accepted publickey for core from 139.178.68.195 port 49772 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:02:58.815447 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:58.817839 systemd-logind[1597]: New session 4 of user core. Dec 13 04:02:58.818270 systemd[1]: Started session-4.scope. Dec 13 04:02:58.869406 sshd[1676]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:58.871023 systemd[1]: sshd@1-147.28.180.253:22-139.178.68.195:49772.service: Deactivated successfully. Dec 13 04:02:58.871319 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 04:02:58.871793 systemd-logind[1597]: Session 4 logged out. Waiting for processes to exit. Dec 13 04:02:58.872262 systemd[1]: Started sshd@2-147.28.180.253:22-139.178.68.195:49776.service. Dec 13 04:02:58.872738 systemd-logind[1597]: Removed session 4. Dec 13 04:02:58.905749 sshd[1682]: Accepted publickey for core from 139.178.68.195 port 49776 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:02:58.906630 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:58.909708 systemd-logind[1597]: New session 5 of user core. Dec 13 04:02:58.910385 systemd[1]: Started session-5.scope. Dec 13 04:02:58.975931 sshd[1682]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:58.981644 systemd[1]: sshd@2-147.28.180.253:22-139.178.68.195:49776.service: Deactivated successfully. Dec 13 04:02:58.983336 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 04:02:58.985020 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit. Dec 13 04:02:58.987401 systemd-logind[1597]: Removed session 5. Dec 13 04:02:59.017020 systemd-timesyncd[1508]: Contacted time server 50.205.57.38:123 (0.flatcar.pool.ntp.org). 
Dec 13 04:02:59.017194 systemd-timesyncd[1508]: Initial clock synchronization to Fri 2024-12-13 04:02:58.918268 UTC. Dec 13 04:02:59.229351 coreos-metadata[1521]: Dec 13 04:02:59.229 INFO Fetch successful Dec 13 04:02:59.263539 unknown[1521]: wrote ssh authorized keys file for user: core Dec 13 04:02:59.276834 update-ssh-keys[1687]: Updated "/home/core/.ssh/authorized_keys" Dec 13 04:02:59.277082 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 04:02:59.364121 coreos-metadata[1524]: Dec 13 04:02:59.364 INFO Fetch successful Dec 13 04:02:59.442581 systemd[1]: Finished coreos-metadata.service. Dec 13 04:02:59.443476 systemd[1]: Started packet-phone-home.service. Dec 13 04:02:59.443662 systemd[1]: Reached target multi-user.target. Dec 13 04:02:59.444333 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 04:02:59.448526 curl[1690]: % Total % Received % Xferd Average Speed Time Time Time Current Dec 13 04:02:59.448684 curl[1690]: Dload Upload Total Spent Left Speed Dec 13 04:02:59.448791 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 04:02:59.448895 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 04:02:59.449069 systemd[1]: Startup finished in 1.918s (kernel) + 21.867s (initrd) + 15.737s (userspace) = 39.523s. Dec 13 04:02:59.764993 curl[1690]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Dec 13 04:02:59.767389 systemd[1]: packet-phone-home.service: Deactivated successfully. Dec 13 04:03:02.860710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 04:03:02.861571 systemd[1]: Stopped kubelet.service. Dec 13 04:03:02.863605 systemd[1]: Starting kubelet.service... Dec 13 04:03:03.046611 systemd[1]: Started kubelet.service. 
Dec 13 04:03:03.081767 kubelet[1696]: E1213 04:03:03.081741 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:03:03.083722 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:03:03.083814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 04:03:08.918575 systemd[1]: Started sshd@3-147.28.180.253:22-139.178.68.195:59960.service. Dec 13 04:03:08.952245 sshd[1712]: Accepted publickey for core from 139.178.68.195 port 59960 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:03:08.952966 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:08.955278 systemd-logind[1597]: New session 6 of user core. Dec 13 04:03:08.955817 systemd[1]: Started session-6.scope. Dec 13 04:03:09.007646 sshd[1712]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:09.009413 systemd[1]: sshd@3-147.28.180.253:22-139.178.68.195:59960.service: Deactivated successfully. Dec 13 04:03:09.009766 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 04:03:09.010158 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit. Dec 13 04:03:09.010737 systemd[1]: Started sshd@4-147.28.180.253:22-139.178.68.195:59962.service. Dec 13 04:03:09.011192 systemd-logind[1597]: Removed session 6. Dec 13 04:03:09.044329 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 59962 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:03:09.045177 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:09.048072 systemd-logind[1597]: New session 7 of user core. 
Dec 13 04:03:09.048684 systemd[1]: Started session-7.scope. Dec 13 04:03:09.099514 sshd[1718]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:09.106178 systemd[1]: sshd@4-147.28.180.253:22-139.178.68.195:59962.service: Deactivated successfully. Dec 13 04:03:09.107864 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 04:03:09.109602 systemd-logind[1597]: Session 7 logged out. Waiting for processes to exit. Dec 13 04:03:09.112610 systemd[1]: Started sshd@5-147.28.180.253:22-139.178.68.195:59964.service. Dec 13 04:03:09.115252 systemd-logind[1597]: Removed session 7. Dec 13 04:03:09.170623 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 59964 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:03:09.171290 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:09.173572 systemd-logind[1597]: New session 8 of user core. Dec 13 04:03:09.174038 systemd[1]: Started session-8.scope. Dec 13 04:03:09.225080 sshd[1724]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:09.227832 systemd[1]: sshd@5-147.28.180.253:22-139.178.68.195:59964.service: Deactivated successfully. Dec 13 04:03:09.228541 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 04:03:09.229257 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit. Dec 13 04:03:09.230479 systemd[1]: Started sshd@6-147.28.180.253:22-139.178.68.195:59974.service. Dec 13 04:03:09.231454 systemd-logind[1597]: Removed session 8. Dec 13 04:03:09.310691 sshd[1730]: Accepted publickey for core from 139.178.68.195 port 59974 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:03:09.312977 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:09.319629 systemd-logind[1597]: New session 9 of user core. Dec 13 04:03:09.321214 systemd[1]: Started session-9.scope. 
Dec 13 04:03:09.419089 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 04:03:09.419781 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 04:03:10.003567 systemd[1]: Stopped kubelet.service. Dec 13 04:03:10.004784 systemd[1]: Starting kubelet.service... Dec 13 04:03:10.019332 systemd[1]: Reloading. Dec 13 04:03:10.054700 /usr/lib/systemd/system-generators/torcx-generator[1809]: time="2024-12-13T04:03:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:03:10.054719 /usr/lib/systemd/system-generators/torcx-generator[1809]: time="2024-12-13T04:03:10Z" level=info msg="torcx already run" Dec 13 04:03:10.107083 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:03:10.107095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:03:10.118595 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:03:10.175015 systemd[1]: Started kubelet.service. Dec 13 04:03:10.176891 systemd[1]: Stopping kubelet.service... Dec 13 04:03:10.177140 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 04:03:10.177248 systemd[1]: Stopped kubelet.service. Dec 13 04:03:10.178112 systemd[1]: Starting kubelet.service... Dec 13 04:03:10.405173 systemd[1]: Started kubelet.service. 
Dec 13 04:03:10.433697 kubelet[1881]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 04:03:10.433697 kubelet[1881]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 04:03:10.433697 kubelet[1881]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 04:03:10.434029 kubelet[1881]: I1213 04:03:10.433697 1881 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 04:03:10.631285 kubelet[1881]: I1213 04:03:10.631239 1881 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 04:03:10.631285 kubelet[1881]: I1213 04:03:10.631253 1881 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 04:03:10.631386 kubelet[1881]: I1213 04:03:10.631381 1881 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 04:03:10.646700 kubelet[1881]: I1213 04:03:10.646691 1881 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 04:03:10.652794 kubelet[1881]: E1213 04:03:10.652745 1881 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 04:03:10.652794 kubelet[1881]: I1213 04:03:10.652759 1881 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Dec 13 04:03:10.673041 kubelet[1881]: I1213 04:03:10.672976 1881 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 04:03:10.674018 kubelet[1881]: I1213 04:03:10.673980 1881 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 04:03:10.674101 kubelet[1881]: I1213 04:03:10.674054 1881 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 04:03:10.674193 kubelet[1881]: I1213 04:03:10.674068 1881 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.67.80.35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Experimen
talMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 04:03:10.674193 kubelet[1881]: I1213 04:03:10.674160 1881 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 04:03:10.674193 kubelet[1881]: I1213 04:03:10.674167 1881 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 04:03:10.674305 kubelet[1881]: I1213 04:03:10.674213 1881 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:03:10.675704 kubelet[1881]: I1213 04:03:10.675668 1881 kubelet.go:408] "Attempting to sync node with API server" Dec 13 04:03:10.675704 kubelet[1881]: I1213 04:03:10.675679 1881 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 04:03:10.675704 kubelet[1881]: I1213 04:03:10.675695 1881 kubelet.go:314] "Adding apiserver pod source" Dec 13 04:03:10.675704 kubelet[1881]: I1213 04:03:10.675702 1881 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 04:03:10.675788 kubelet[1881]: E1213 04:03:10.675761 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:10.675788 kubelet[1881]: E1213 04:03:10.675775 1881 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:10.680343 kubelet[1881]: I1213 04:03:10.680291 1881 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 04:03:10.681782 kubelet[1881]: I1213 04:03:10.681746 1881 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 04:03:10.682384 kubelet[1881]: W1213 04:03:10.682342 1881 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 04:03:10.682694 kubelet[1881]: I1213 04:03:10.682650 1881 server.go:1269] "Started kubelet" Dec 13 04:03:10.682751 kubelet[1881]: I1213 04:03:10.682715 1881 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 04:03:10.682751 kubelet[1881]: I1213 04:03:10.682710 1881 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 04:03:10.682960 kubelet[1881]: I1213 04:03:10.682920 1881 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 04:03:10.692500 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 04:03:10.692554 kubelet[1881]: E1213 04:03:10.692518 1881 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 04:03:10.692554 kubelet[1881]: I1213 04:03:10.692531 1881 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 04:03:10.692637 kubelet[1881]: I1213 04:03:10.692585 1881 server.go:460] "Adding debug handlers to kubelet server" Dec 13 04:03:10.692637 kubelet[1881]: I1213 04:03:10.692593 1881 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 04:03:10.692712 kubelet[1881]: I1213 04:03:10.692659 1881 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 04:03:10.693130 kubelet[1881]: I1213 04:03:10.693117 1881 reconciler.go:26] "Reconciler: start to sync state" Dec 13 04:03:10.693226 kubelet[1881]: I1213 04:03:10.693215 1881 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 04:03:10.693432 kubelet[1881]: E1213 04:03:10.693393 1881 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"10.67.80.35\" not found" Dec 13 04:03:10.693491 kubelet[1881]: I1213 04:03:10.693428 1881 factory.go:221] Registration of the systemd container factory successfully Dec 13 04:03:10.693581 kubelet[1881]: I1213 04:03:10.693560 1881 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 04:03:10.700234 kubelet[1881]: I1213 04:03:10.700219 1881 factory.go:221] Registration of the containerd container factory successfully Dec 13 04:03:10.701498 kubelet[1881]: E1213 04:03:10.701483 1881 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.35\" not found" node="10.67.80.35" Dec 13 04:03:10.705804 kubelet[1881]: I1213 04:03:10.705795 1881 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 04:03:10.705804 kubelet[1881]: I1213 04:03:10.705803 1881 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 04:03:10.705872 kubelet[1881]: I1213 04:03:10.705814 1881 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:03:10.706648 kubelet[1881]: I1213 04:03:10.706612 1881 policy_none.go:49] "None policy: Start" Dec 13 04:03:10.706933 kubelet[1881]: I1213 04:03:10.706896 1881 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 04:03:10.706933 kubelet[1881]: I1213 04:03:10.706907 1881 state_mem.go:35] "Initializing new in-memory state store" Dec 13 04:03:10.709449 systemd[1]: Created slice kubepods.slice. Dec 13 04:03:10.711920 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 04:03:10.713413 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 04:03:10.727933 kubelet[1881]: I1213 04:03:10.727919 1881 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 04:03:10.728026 kubelet[1881]: I1213 04:03:10.728017 1881 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 04:03:10.728062 kubelet[1881]: I1213 04:03:10.728025 1881 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 04:03:10.728143 kubelet[1881]: I1213 04:03:10.728131 1881 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 04:03:10.728584 kubelet[1881]: E1213 04:03:10.728569 1881 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.35\" not found" Dec 13 04:03:10.788109 kubelet[1881]: I1213 04:03:10.788089 1881 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 04:03:10.788642 kubelet[1881]: I1213 04:03:10.788631 1881 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 04:03:10.788674 kubelet[1881]: I1213 04:03:10.788649 1881 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 04:03:10.788674 kubelet[1881]: I1213 04:03:10.788661 1881 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 04:03:10.788721 kubelet[1881]: E1213 04:03:10.788685 1881 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 04:03:10.830333 kubelet[1881]: I1213 04:03:10.830269 1881 kubelet_node_status.go:72] "Attempting to register node" node="10.67.80.35" Dec 13 04:03:10.840183 kubelet[1881]: I1213 04:03:10.840126 1881 kubelet_node_status.go:75] "Successfully registered node" node="10.67.80.35" Dec 13 04:03:10.855098 kubelet[1881]: I1213 04:03:10.855041 1881 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 04:03:10.855864 env[1562]: time="2024-12-13T04:03:10.855782764Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 04:03:10.856892 kubelet[1881]: I1213 04:03:10.856262 1881 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 04:03:10.948568 sudo[1733]: pam_unix(sudo:session): session closed for user root Dec 13 04:03:10.953259 sshd[1730]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:10.959251 systemd[1]: sshd@6-147.28.180.253:22-139.178.68.195:59974.service: Deactivated successfully. Dec 13 04:03:10.961258 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 04:03:10.963062 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit. Dec 13 04:03:10.965339 systemd-logind[1597]: Removed session 9. 
Dec 13 04:03:11.632855 kubelet[1881]: I1213 04:03:11.632749 1881 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 04:03:11.633692 kubelet[1881]: W1213 04:03:11.633145 1881 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 04:03:11.633692 kubelet[1881]: W1213 04:03:11.633145 1881 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 04:03:11.633692 kubelet[1881]: W1213 04:03:11.633155 1881 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 04:03:11.676131 kubelet[1881]: E1213 04:03:11.676026 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:11.676131 kubelet[1881]: I1213 04:03:11.676042 1881 apiserver.go:52] "Watching apiserver" Dec 13 04:03:11.694327 systemd[1]: Created slice kubepods-besteffort-pod1e6894c8_9985_4b0d_9a27_c9443c234891.slice. 
Dec 13 04:03:11.694549 kubelet[1881]: I1213 04:03:11.694331 1881 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 04:03:11.698384 kubelet[1881]: I1213 04:03:11.698343 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-lib-modules\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698384 kubelet[1881]: I1213 04:03:11.698359 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-config-path\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698384 kubelet[1881]: I1213 04:03:11.698369 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-host-proc-sys-net\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698470 kubelet[1881]: I1213 04:03:11.698388 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1e35ec7-2ef7-4916-948d-14630af205e3-hubble-tls\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698470 kubelet[1881]: I1213 04:03:11.698398 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-bpf-maps\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " 
pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698470 kubelet[1881]: I1213 04:03:11.698406 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-hostproc\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698470 kubelet[1881]: I1213 04:03:11.698416 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k7zp\" (UniqueName: \"kubernetes.io/projected/b1e35ec7-2ef7-4916-948d-14630af205e3-kube-api-access-5k7zp\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698470 kubelet[1881]: I1213 04:03:11.698425 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e6894c8-9985-4b0d-9a27-c9443c234891-kube-proxy\") pod \"kube-proxy-r98zz\" (UID: \"1e6894c8-9985-4b0d-9a27-c9443c234891\") " pod="kube-system/kube-proxy-r98zz" Dec 13 04:03:11.698470 kubelet[1881]: I1213 04:03:11.698435 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbs82\" (UniqueName: \"kubernetes.io/projected/1e6894c8-9985-4b0d-9a27-c9443c234891-kube-api-access-mbs82\") pod \"kube-proxy-r98zz\" (UID: \"1e6894c8-9985-4b0d-9a27-c9443c234891\") " pod="kube-system/kube-proxy-r98zz" Dec 13 04:03:11.698570 kubelet[1881]: I1213 04:03:11.698451 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-run\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698570 kubelet[1881]: I1213 04:03:11.698461 1881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-xtables-lock\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698570 kubelet[1881]: I1213 04:03:11.698472 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-cgroup\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698570 kubelet[1881]: I1213 04:03:11.698491 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cni-path\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698570 kubelet[1881]: I1213 04:03:11.698514 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-etc-cni-netd\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698570 kubelet[1881]: I1213 04:03:11.698525 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1e35ec7-2ef7-4916-948d-14630af205e3-clustermesh-secrets\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698666 kubelet[1881]: I1213 04:03:11.698534 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-host-proc-sys-kernel\") pod \"cilium-fftxr\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") " pod="kube-system/cilium-fftxr" Dec 13 04:03:11.698666 kubelet[1881]: I1213 04:03:11.698542 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e6894c8-9985-4b0d-9a27-c9443c234891-xtables-lock\") pod \"kube-proxy-r98zz\" (UID: \"1e6894c8-9985-4b0d-9a27-c9443c234891\") " pod="kube-system/kube-proxy-r98zz" Dec 13 04:03:11.698666 kubelet[1881]: I1213 04:03:11.698550 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e6894c8-9985-4b0d-9a27-c9443c234891-lib-modules\") pod \"kube-proxy-r98zz\" (UID: \"1e6894c8-9985-4b0d-9a27-c9443c234891\") " pod="kube-system/kube-proxy-r98zz" Dec 13 04:03:11.704620 systemd[1]: Created slice kubepods-burstable-podb1e35ec7_2ef7_4916_948d_14630af205e3.slice. Dec 13 04:03:11.800756 kubelet[1881]: I1213 04:03:11.800677 1881 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 13 04:03:12.006516 env[1562]: time="2024-12-13T04:03:12.006306480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r98zz,Uid:1e6894c8-9985-4b0d-9a27-c9443c234891,Namespace:kube-system,Attempt:0,}"
Dec 13 04:03:12.017481 env[1562]: time="2024-12-13T04:03:12.017389794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fftxr,Uid:b1e35ec7-2ef7-4916-948d-14630af205e3,Namespace:kube-system,Attempt:0,}"
Dec 13 04:03:12.676657 kubelet[1881]: E1213 04:03:12.676550 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:12.751887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992411641.mount: Deactivated successfully.
Dec 13 04:03:12.753207 env[1562]: time="2024-12-13T04:03:12.753189167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:12.754175 env[1562]: time="2024-12-13T04:03:12.754161078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:12.754794 env[1562]: time="2024-12-13T04:03:12.754780888Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:12.755530 env[1562]: time="2024-12-13T04:03:12.755490690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:12.756712 env[1562]: time="2024-12-13T04:03:12.756702645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:12.757868 env[1562]: time="2024-12-13T04:03:12.757857256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:12.758190 env[1562]: time="2024-12-13T04:03:12.758180560Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:12.758610 env[1562]: time="2024-12-13T04:03:12.758599524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:12.767354 env[1562]: time="2024-12-13T04:03:12.767318494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:03:12.767354 env[1562]: time="2024-12-13T04:03:12.767341974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:03:12.767447 env[1562]: time="2024-12-13T04:03:12.767352453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:03:12.767447 env[1562]: time="2024-12-13T04:03:12.767425330Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925 pid=1955 runtime=io.containerd.runc.v2
Dec 13 04:03:12.767447 env[1562]: time="2024-12-13T04:03:12.767425232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:03:12.767502 env[1562]: time="2024-12-13T04:03:12.767447637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:03:12.767502 env[1562]: time="2024-12-13T04:03:12.767456063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:03:12.767552 env[1562]: time="2024-12-13T04:03:12.767518828Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/267c4fb613206846b8e480dd7aa28edca25096918e717bb7b1e8ae6ff80ec6bf pid=1956 runtime=io.containerd.runc.v2
Dec 13 04:03:12.774714 systemd[1]: Started cri-containerd-267c4fb613206846b8e480dd7aa28edca25096918e717bb7b1e8ae6ff80ec6bf.scope.
Dec 13 04:03:12.775687 systemd[1]: Started cri-containerd-6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925.scope.
Dec 13 04:03:12.786560 env[1562]: time="2024-12-13T04:03:12.786526500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r98zz,Uid:1e6894c8-9985-4b0d-9a27-c9443c234891,Namespace:kube-system,Attempt:0,} returns sandbox id \"267c4fb613206846b8e480dd7aa28edca25096918e717bb7b1e8ae6ff80ec6bf\""
Dec 13 04:03:12.786735 env[1562]: time="2024-12-13T04:03:12.786687726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fftxr,Uid:b1e35ec7-2ef7-4916-948d-14630af205e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\""
Dec 13 04:03:12.787673 env[1562]: time="2024-12-13T04:03:12.787656912Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 04:03:13.677473 kubelet[1881]: E1213 04:03:13.677338 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:14.677678 kubelet[1881]: E1213 04:03:14.677640 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:15.678972 kubelet[1881]: E1213 04:03:15.678846 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:16.679063 kubelet[1881]: E1213 04:03:16.679004 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:17.376598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1947647042.mount: Deactivated successfully.
Dec 13 04:03:17.680158 kubelet[1881]: E1213 04:03:17.680070 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:18.681130 kubelet[1881]: E1213 04:03:18.681084 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:19.079638 env[1562]: time="2024-12-13T04:03:19.079564722Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:19.080221 env[1562]: time="2024-12-13T04:03:19.080195560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:19.081330 env[1562]: time="2024-12-13T04:03:19.081319438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:19.081582 env[1562]: time="2024-12-13T04:03:19.081571724Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 04:03:19.082193 env[1562]: time="2024-12-13T04:03:19.082180475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 04:03:19.083010 env[1562]: time="2024-12-13T04:03:19.082998542Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 04:03:19.088698 env[1562]: time="2024-12-13T04:03:19.088654571Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\""
Dec 13 04:03:19.089170 env[1562]: time="2024-12-13T04:03:19.089098465Z" level=info msg="StartContainer for \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\""
Dec 13 04:03:19.099553 systemd[1]: Started cri-containerd-76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60.scope.
Dec 13 04:03:19.110543 env[1562]: time="2024-12-13T04:03:19.110492580Z" level=info msg="StartContainer for \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\" returns successfully"
Dec 13 04:03:19.115174 systemd[1]: cri-containerd-76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60.scope: Deactivated successfully.
Dec 13 04:03:19.681736 kubelet[1881]: E1213 04:03:19.681651 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:20.091322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60-rootfs.mount: Deactivated successfully.
Dec 13 04:03:20.239950 env[1562]: time="2024-12-13T04:03:20.239854488Z" level=info msg="shim disconnected" id=76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60
Dec 13 04:03:20.239950 env[1562]: time="2024-12-13T04:03:20.239952954Z" level=warning msg="cleaning up after shim disconnected" id=76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60 namespace=k8s.io
Dec 13 04:03:20.240843 env[1562]: time="2024-12-13T04:03:20.239984064Z" level=info msg="cleaning up dead shim"
Dec 13 04:03:20.255414 env[1562]: time="2024-12-13T04:03:20.255340804Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2070 runtime=io.containerd.runc.v2\n"
Dec 13 04:03:20.682312 kubelet[1881]: E1213 04:03:20.682259 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:20.749516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89990870.mount: Deactivated successfully.
Dec 13 04:03:20.806765 env[1562]: time="2024-12-13T04:03:20.806740554Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 04:03:20.812014 env[1562]: time="2024-12-13T04:03:20.811996607Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\""
Dec 13 04:03:20.812271 env[1562]: time="2024-12-13T04:03:20.812259001Z" level=info msg="StartContainer for \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\""
Dec 13 04:03:20.819718 systemd[1]: Started cri-containerd-a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832.scope.
Dec 13 04:03:20.832939 env[1562]: time="2024-12-13T04:03:20.832914061Z" level=info msg="StartContainer for \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\" returns successfully"
Dec 13 04:03:20.839743 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 04:03:20.839909 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 04:03:20.840048 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 04:03:20.841026 systemd[1]: Starting systemd-sysctl.service...
Dec 13 04:03:20.841654 systemd[1]: cri-containerd-a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832.scope: Deactivated successfully.
Dec 13 04:03:20.845332 systemd[1]: Finished systemd-sysctl.service.
Dec 13 04:03:21.024156 env[1562]: time="2024-12-13T04:03:21.024037287Z" level=info msg="shim disconnected" id=a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832
Dec 13 04:03:21.024156 env[1562]: time="2024-12-13T04:03:21.024066364Z" level=warning msg="cleaning up after shim disconnected" id=a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832 namespace=k8s.io
Dec 13 04:03:21.024156 env[1562]: time="2024-12-13T04:03:21.024071875Z" level=info msg="cleaning up dead shim"
Dec 13 04:03:21.028210 env[1562]: time="2024-12-13T04:03:21.028168497Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2133 runtime=io.containerd.runc.v2\n"
Dec 13 04:03:21.166593 env[1562]: time="2024-12-13T04:03:21.166535338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:21.167093 env[1562]: time="2024-12-13T04:03:21.167051633Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:21.167631 env[1562]: time="2024-12-13T04:03:21.167569422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:21.168461 env[1562]: time="2024-12-13T04:03:21.168417080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:21.168577 env[1562]: time="2024-12-13T04:03:21.168528134Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 04:03:21.170234 env[1562]: time="2024-12-13T04:03:21.170216559Z" level=info msg="CreateContainer within sandbox \"267c4fb613206846b8e480dd7aa28edca25096918e717bb7b1e8ae6ff80ec6bf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 04:03:21.175348 env[1562]: time="2024-12-13T04:03:21.175305201Z" level=info msg="CreateContainer within sandbox \"267c4fb613206846b8e480dd7aa28edca25096918e717bb7b1e8ae6ff80ec6bf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4663f7d51a320473d4670e9597498e471d4e58abcee47f8bca327af663bb4589\""
Dec 13 04:03:21.175582 env[1562]: time="2024-12-13T04:03:21.175539184Z" level=info msg="StartContainer for \"4663f7d51a320473d4670e9597498e471d4e58abcee47f8bca327af663bb4589\""
Dec 13 04:03:21.184234 systemd[1]: Started cri-containerd-4663f7d51a320473d4670e9597498e471d4e58abcee47f8bca327af663bb4589.scope.
Dec 13 04:03:21.197172 env[1562]: time="2024-12-13T04:03:21.197140565Z" level=info msg="StartContainer for \"4663f7d51a320473d4670e9597498e471d4e58abcee47f8bca327af663bb4589\" returns successfully"
Dec 13 04:03:21.682969 kubelet[1881]: E1213 04:03:21.682917 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:21.815195 env[1562]: time="2024-12-13T04:03:21.815100907Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 04:03:21.834572 env[1562]: time="2024-12-13T04:03:21.834485294Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\""
Dec 13 04:03:21.835779 env[1562]: time="2024-12-13T04:03:21.835679367Z" level=info msg="StartContainer for \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\""
Dec 13 04:03:21.841764 kubelet[1881]: I1213 04:03:21.841643 1881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r98zz" podStartSLOduration=3.459717323 podStartE2EDuration="11.841608956s" podCreationTimestamp="2024-12-13 04:03:10 +0000 UTC" firstStartedPulling="2024-12-13 04:03:12.787509889 +0000 UTC m=+2.376130001" lastFinishedPulling="2024-12-13 04:03:21.169401518 +0000 UTC m=+10.758021634" observedRunningTime="2024-12-13 04:03:21.841399783 +0000 UTC m=+11.430019963" watchObservedRunningTime="2024-12-13 04:03:21.841608956 +0000 UTC m=+11.430229140"
Dec 13 04:03:21.869473 systemd[1]: Started cri-containerd-fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579.scope.
Dec 13 04:03:21.910176 env[1562]: time="2024-12-13T04:03:21.910105861Z" level=info msg="StartContainer for \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\" returns successfully"
Dec 13 04:03:21.915520 systemd[1]: cri-containerd-fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579.scope: Deactivated successfully.
Dec 13 04:03:22.006832 env[1562]: time="2024-12-13T04:03:22.006611754Z" level=info msg="shim disconnected" id=fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579
Dec 13 04:03:22.006832 env[1562]: time="2024-12-13T04:03:22.006710932Z" level=warning msg="cleaning up after shim disconnected" id=fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579 namespace=k8s.io
Dec 13 04:03:22.006832 env[1562]: time="2024-12-13T04:03:22.006741494Z" level=info msg="cleaning up dead shim"
Dec 13 04:03:22.023750 env[1562]: time="2024-12-13T04:03:22.023630972Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2370 runtime=io.containerd.runc.v2\n"
Dec 13 04:03:22.683406 kubelet[1881]: E1213 04:03:22.683277 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:22.826077 env[1562]: time="2024-12-13T04:03:22.825940413Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 04:03:22.844557 env[1562]: time="2024-12-13T04:03:22.844468672Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\""
Dec 13 04:03:22.844893 env[1562]: time="2024-12-13T04:03:22.844840494Z" level=info msg="StartContainer for \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\""
Dec 13 04:03:22.853684 systemd[1]: Started cri-containerd-01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e.scope.
Dec 13 04:03:22.865501 env[1562]: time="2024-12-13T04:03:22.865477471Z" level=info msg="StartContainer for \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\" returns successfully"
Dec 13 04:03:22.865858 systemd[1]: cri-containerd-01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e.scope: Deactivated successfully.
Dec 13 04:03:22.875264 env[1562]: time="2024-12-13T04:03:22.875236970Z" level=info msg="shim disconnected" id=01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e
Dec 13 04:03:22.875359 env[1562]: time="2024-12-13T04:03:22.875264906Z" level=warning msg="cleaning up after shim disconnected" id=01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e namespace=k8s.io
Dec 13 04:03:22.875359 env[1562]: time="2024-12-13T04:03:22.875272540Z" level=info msg="cleaning up dead shim"
Dec 13 04:03:22.878856 env[1562]: time="2024-12-13T04:03:22.878840616Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2424 runtime=io.containerd.runc.v2\n"
Dec 13 04:03:23.091113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e-rootfs.mount: Deactivated successfully.
Dec 13 04:03:23.683785 kubelet[1881]: E1213 04:03:23.683669 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:23.835163 env[1562]: time="2024-12-13T04:03:23.835072938Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 04:03:23.852363 env[1562]: time="2024-12-13T04:03:23.852320498Z" level=info msg="CreateContainer within sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\""
Dec 13 04:03:23.852660 env[1562]: time="2024-12-13T04:03:23.852617661Z" level=info msg="StartContainer for \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\""
Dec 13 04:03:23.862243 systemd[1]: Started cri-containerd-fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609.scope.
Dec 13 04:03:23.889311 env[1562]: time="2024-12-13T04:03:23.889247215Z" level=info msg="StartContainer for \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\" returns successfully"
Dec 13 04:03:23.928524 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 04:03:23.952246 kubelet[1881]: I1213 04:03:23.952189 1881 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 04:03:24.091486 kernel: Initializing XFRM netlink socket
Dec 13 04:03:24.104503 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 04:03:24.684634 kubelet[1881]: E1213 04:03:24.684523 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:25.685437 kubelet[1881]: E1213 04:03:25.685371 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:25.710455 systemd-networkd[1323]: cilium_host: Link UP
Dec 13 04:03:25.710556 systemd-networkd[1323]: cilium_net: Link UP
Dec 13 04:03:25.717641 systemd-networkd[1323]: cilium_net: Gained carrier
Dec 13 04:03:25.724882 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 04:03:25.724937 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 04:03:25.724988 systemd-networkd[1323]: cilium_host: Gained carrier
Dec 13 04:03:25.770784 systemd-networkd[1323]: cilium_vxlan: Link UP
Dec 13 04:03:25.770787 systemd-networkd[1323]: cilium_vxlan: Gained carrier
Dec 13 04:03:25.914519 kernel: NET: Registered PF_ALG protocol family
Dec 13 04:03:26.375762 systemd-networkd[1323]: cilium_host: Gained IPv6LL
Dec 13 04:03:26.464593 systemd-networkd[1323]: lxc_health: Link UP
Dec 13 04:03:26.487309 systemd-networkd[1323]: lxc_health: Gained carrier
Dec 13 04:03:26.487460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 04:03:26.685798 kubelet[1881]: E1213 04:03:26.685727 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:26.695561 systemd-networkd[1323]: cilium_net: Gained IPv6LL
Dec 13 04:03:26.720131 kubelet[1881]: I1213 04:03:26.720107 1881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fftxr" podStartSLOduration=10.42545015 podStartE2EDuration="16.720097265s" podCreationTimestamp="2024-12-13 04:03:10 +0000 UTC" firstStartedPulling="2024-12-13 04:03:12.78745549 +0000 UTC m=+2.376075604" lastFinishedPulling="2024-12-13 04:03:19.082102605 +0000 UTC m=+8.670722719" observedRunningTime="2024-12-13 04:03:24.860099236 +0000 UTC m=+14.448719515" watchObservedRunningTime="2024-12-13 04:03:26.720097265 +0000 UTC m=+16.308717380"
Dec 13 04:03:26.896185 systemd[1]: Created slice kubepods-besteffort-podccd112f3_96ee_4119_bb8c_cf0663300316.slice.
Dec 13 04:03:26.900500 kubelet[1881]: I1213 04:03:26.900434 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wvrk\" (UniqueName: \"kubernetes.io/projected/ccd112f3-96ee-4119-bb8c-cf0663300316-kube-api-access-8wvrk\") pod \"nginx-deployment-8587fbcb89-b5w66\" (UID: \"ccd112f3-96ee-4119-bb8c-cf0663300316\") " pod="default/nginx-deployment-8587fbcb89-b5w66"
Dec 13 04:03:27.198870 env[1562]: time="2024-12-13T04:03:27.198720077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-b5w66,Uid:ccd112f3-96ee-4119-bb8c-cf0663300316,Namespace:default,Attempt:0,}"
Dec 13 04:03:27.230083 systemd-networkd[1323]: lxc5fdf32c65308: Link UP
Dec 13 04:03:27.251448 kernel: eth0: renamed from tmpcef73
Dec 13 04:03:27.273537 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 04:03:27.273688 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5fdf32c65308: link becomes ready
Dec 13 04:03:27.281750 systemd-networkd[1323]: lxc5fdf32c65308: Gained carrier
Dec 13 04:03:27.399561 systemd-networkd[1323]: cilium_vxlan: Gained IPv6LL
Dec 13 04:03:27.686492 kubelet[1881]: E1213 04:03:27.686447 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:27.843742 kubelet[1881]: I1213 04:03:27.843699 1881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 04:03:28.167585 systemd-networkd[1323]: lxc_health: Gained IPv6LL
Dec 13 04:03:28.423534 systemd-networkd[1323]: lxc5fdf32c65308: Gained IPv6LL
Dec 13 04:03:28.687462 kubelet[1881]: E1213 04:03:28.687344 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:29.559428 env[1562]: time="2024-12-13T04:03:29.559309900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:03:29.559428 env[1562]: time="2024-12-13T04:03:29.559393189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:03:29.560221 env[1562]: time="2024-12-13T04:03:29.559422398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:03:29.560221 env[1562]: time="2024-12-13T04:03:29.559797081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cef73e6ffe414cdbcba20d5e52df3853d441dcf4ee7fe6408d9cace4f7c1ad4d pid=3072 runtime=io.containerd.runc.v2
Dec 13 04:03:29.584656 systemd[1]: Started cri-containerd-cef73e6ffe414cdbcba20d5e52df3853d441dcf4ee7fe6408d9cace4f7c1ad4d.scope.
Dec 13 04:03:29.642450 env[1562]: time="2024-12-13T04:03:29.642408403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-b5w66,Uid:ccd112f3-96ee-4119-bb8c-cf0663300316,Namespace:default,Attempt:0,} returns sandbox id \"cef73e6ffe414cdbcba20d5e52df3853d441dcf4ee7fe6408d9cace4f7c1ad4d\""
Dec 13 04:03:29.643324 env[1562]: time="2024-12-13T04:03:29.643307843Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 04:03:29.688056 kubelet[1881]: E1213 04:03:29.687991 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:30.675990 kubelet[1881]: E1213 04:03:30.675897 1881 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:30.689140 kubelet[1881]: E1213 04:03:30.689066 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:31.690020 kubelet[1881]: E1213 04:03:31.689968 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:32.144578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823815343.mount: Deactivated successfully.
Dec 13 04:03:32.690365 kubelet[1881]: E1213 04:03:32.690343 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:32.989832 env[1562]: time="2024-12-13T04:03:32.989752937Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:32.991953 env[1562]: time="2024-12-13T04:03:32.991897793Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:32.992892 env[1562]: time="2024-12-13T04:03:32.992870542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:32.994112 env[1562]: time="2024-12-13T04:03:32.994072531Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:32.994393 env[1562]: time="2024-12-13T04:03:32.994356882Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 04:03:32.995763 env[1562]: time="2024-12-13T04:03:32.995708705Z" level=info msg="CreateContainer within sandbox \"cef73e6ffe414cdbcba20d5e52df3853d441dcf4ee7fe6408d9cace4f7c1ad4d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 04:03:32.999948 env[1562]: time="2024-12-13T04:03:32.999906340Z" level=info msg="CreateContainer within sandbox \"cef73e6ffe414cdbcba20d5e52df3853d441dcf4ee7fe6408d9cace4f7c1ad4d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9244f1c43bdf76cee9b473b5fc16304e9d09360ddff8ec3a48b805414adc47ff\""
Dec 13 04:03:33.000245 env[1562]: time="2024-12-13T04:03:33.000196133Z" level=info msg="StartContainer for \"9244f1c43bdf76cee9b473b5fc16304e9d09360ddff8ec3a48b805414adc47ff\""
Dec 13 04:03:33.002475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360379856.mount: Deactivated successfully.
Dec 13 04:03:33.010569 systemd[1]: Started cri-containerd-9244f1c43bdf76cee9b473b5fc16304e9d09360ddff8ec3a48b805414adc47ff.scope.
Dec 13 04:03:33.021860 env[1562]: time="2024-12-13T04:03:33.021800185Z" level=info msg="StartContainer for \"9244f1c43bdf76cee9b473b5fc16304e9d09360ddff8ec3a48b805414adc47ff\" returns successfully"
Dec 13 04:03:33.690831 kubelet[1881]: E1213 04:03:33.690711 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:33.876133 kubelet[1881]: I1213 04:03:33.875997 1881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-b5w66" podStartSLOduration=4.524019719 podStartE2EDuration="7.875962749s" podCreationTimestamp="2024-12-13 04:03:26 +0000 UTC" firstStartedPulling="2024-12-13 04:03:29.643142067 +0000 UTC m=+19.231762191" lastFinishedPulling="2024-12-13 04:03:32.995085106 +0000 UTC m=+22.583705221" observedRunningTime="2024-12-13 04:03:33.875394087 +0000 UTC m=+23.464014261" watchObservedRunningTime="2024-12-13 04:03:33.875962749 +0000 UTC m=+23.464582908"
Dec 13 04:03:34.691868 kubelet[1881]: E1213 04:03:34.691789 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:35.692541 kubelet[1881]: E1213 04:03:35.692459 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:35.928627 update_engine[1558]: I1213 04:03:35.928507 1558 update_attempter.cc:509] Updating boot flags...
Dec 13 04:03:36.693870 kubelet[1881]: E1213 04:03:36.693767 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:37.694285 kubelet[1881]: E1213 04:03:37.694173 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:38.370377 kubelet[1881]: I1213 04:03:38.370257 1881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 04:03:38.646116 systemd[1]: Created slice kubepods-besteffort-pod3e9896cb_f14d_4fe4_ac5a_1ac5d783b8a7.slice. Dec 13 04:03:38.684833 kubelet[1881]: I1213 04:03:38.684721 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3e9896cb-f14d-4fe4-ac5a-1ac5d783b8a7-data\") pod \"nfs-server-provisioner-0\" (UID: \"3e9896cb-f14d-4fe4-ac5a-1ac5d783b8a7\") " pod="default/nfs-server-provisioner-0" Dec 13 04:03:38.685186 kubelet[1881]: I1213 04:03:38.684848 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtmzs\" (UniqueName: \"kubernetes.io/projected/3e9896cb-f14d-4fe4-ac5a-1ac5d783b8a7-kube-api-access-vtmzs\") pod \"nfs-server-provisioner-0\" (UID: \"3e9896cb-f14d-4fe4-ac5a-1ac5d783b8a7\") " pod="default/nfs-server-provisioner-0" Dec 13 04:03:38.695014 kubelet[1881]: E1213 04:03:38.694904 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:38.951752 env[1562]: time="2024-12-13T04:03:38.951498778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3e9896cb-f14d-4fe4-ac5a-1ac5d783b8a7,Namespace:default,Attempt:0,}" Dec 13 04:03:38.974561 systemd-networkd[1323]: lxc85787278ec01: Link UP Dec 13 04:03:38.995456 kernel: eth0: renamed from tmpb908c Dec 13 04:03:39.016155 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 04:03:39.016227 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc85787278ec01: link becomes ready Dec 13 04:03:39.016157 systemd-networkd[1323]: lxc85787278ec01: Gained carrier Dec 13 04:03:39.161057 env[1562]: time="2024-12-13T04:03:39.160993553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:03:39.161057 env[1562]: time="2024-12-13T04:03:39.161016530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:03:39.161057 env[1562]: time="2024-12-13T04:03:39.161023536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:03:39.161183 env[1562]: time="2024-12-13T04:03:39.161110048Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b908c29a4fe445387abd04cdbcf65038a64653e9a116dd86c4b3f310924ece50 pid=3251 runtime=io.containerd.runc.v2 Dec 13 04:03:39.168110 systemd[1]: Started cri-containerd-b908c29a4fe445387abd04cdbcf65038a64653e9a116dd86c4b3f310924ece50.scope. 
Dec 13 04:03:39.188368 env[1562]: time="2024-12-13T04:03:39.188344282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3e9896cb-f14d-4fe4-ac5a-1ac5d783b8a7,Namespace:default,Attempt:0,} returns sandbox id \"b908c29a4fe445387abd04cdbcf65038a64653e9a116dd86c4b3f310924ece50\""
Dec 13 04:03:39.189081 env[1562]: time="2024-12-13T04:03:39.189069445Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 04:03:39.696208 kubelet[1881]: E1213 04:03:39.696084 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:40.647599 systemd-networkd[1323]: lxc85787278ec01: Gained IPv6LL
Dec 13 04:03:40.696968 kubelet[1881]: E1213 04:03:40.696902 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:41.181096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount990520078.mount: Deactivated successfully.
Dec 13 04:03:41.697977 kubelet[1881]: E1213 04:03:41.697934 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:42.386999 env[1562]: time="2024-12-13T04:03:42.386968707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:42.387637 env[1562]: time="2024-12-13T04:03:42.387624148Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:42.388546 env[1562]: time="2024-12-13T04:03:42.388516932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:42.389525 env[1562]: time="2024-12-13T04:03:42.389498030Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:42.389978 env[1562]: time="2024-12-13T04:03:42.389945680Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 04:03:42.391360 env[1562]: time="2024-12-13T04:03:42.391319412Z" level=info msg="CreateContainer within sandbox \"b908c29a4fe445387abd04cdbcf65038a64653e9a116dd86c4b3f310924ece50\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 04:03:42.396046 env[1562]: time="2024-12-13T04:03:42.396002210Z" level=info msg="CreateContainer within sandbox \"b908c29a4fe445387abd04cdbcf65038a64653e9a116dd86c4b3f310924ece50\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e789da6a74b35672e9a540e6366628a2e2a3a2d062bc3a190026bf26a791865a\""
Dec 13 04:03:42.396352 env[1562]: time="2024-12-13T04:03:42.396274656Z" level=info msg="StartContainer for \"e789da6a74b35672e9a540e6366628a2e2a3a2d062bc3a190026bf26a791865a\""
Dec 13 04:03:42.398105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659838715.mount: Deactivated successfully.
Dec 13 04:03:42.406192 systemd[1]: Started cri-containerd-e789da6a74b35672e9a540e6366628a2e2a3a2d062bc3a190026bf26a791865a.scope.
Dec 13 04:03:42.418135 env[1562]: time="2024-12-13T04:03:42.418106633Z" level=info msg="StartContainer for \"e789da6a74b35672e9a540e6366628a2e2a3a2d062bc3a190026bf26a791865a\" returns successfully"
Dec 13 04:03:42.698376 kubelet[1881]: E1213 04:03:42.698167 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:42.900017 kubelet[1881]: I1213 04:03:42.899867 1881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.698074533 podStartE2EDuration="4.899830233s" podCreationTimestamp="2024-12-13 04:03:38 +0000 UTC" firstStartedPulling="2024-12-13 04:03:39.188930546 +0000 UTC m=+28.777550658" lastFinishedPulling="2024-12-13 04:03:42.390686244 +0000 UTC m=+31.979306358" observedRunningTime="2024-12-13 04:03:42.899421848 +0000 UTC m=+32.488042026" watchObservedRunningTime="2024-12-13 04:03:42.899830233 +0000 UTC m=+32.488450389"
Dec 13 04:03:43.698940 kubelet[1881]: E1213 04:03:43.698817 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:44.700133 kubelet[1881]: E1213 04:03:44.699989 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:45.700829 kubelet[1881]: E1213 04:03:45.700714 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:46.701384 kubelet[1881]: E1213 04:03:46.701269 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:47.701838 kubelet[1881]: E1213 04:03:47.701718 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:48.702108 kubelet[1881]: E1213 04:03:48.701988 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:49.703161 kubelet[1881]: E1213 04:03:49.703049 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:50.676760 kubelet[1881]: E1213 04:03:50.676638 1881 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:50.704347 kubelet[1881]: E1213 04:03:50.704235 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:51.705138 kubelet[1881]: E1213 04:03:51.705009 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:52.619451 systemd[1]: Created slice kubepods-besteffort-pod0aabe17c_52e0_4d88_9a95_ed280c1e0b3f.slice.
Dec 13 04:03:52.705741 kubelet[1881]: E1213 04:03:52.705665 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:52.785694 kubelet[1881]: I1213 04:03:52.785605 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c56541f4-9685-4280-bd22-bed08f918e37\" (UniqueName: \"kubernetes.io/nfs/0aabe17c-52e0-4d88-9a95-ed280c1e0b3f-pvc-c56541f4-9685-4280-bd22-bed08f918e37\") pod \"test-pod-1\" (UID: \"0aabe17c-52e0-4d88-9a95-ed280c1e0b3f\") " pod="default/test-pod-1"
Dec 13 04:03:52.785997 kubelet[1881]: I1213 04:03:52.785719 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbp4m\" (UniqueName: \"kubernetes.io/projected/0aabe17c-52e0-4d88-9a95-ed280c1e0b3f-kube-api-access-zbp4m\") pod \"test-pod-1\" (UID: \"0aabe17c-52e0-4d88-9a95-ed280c1e0b3f\") " pod="default/test-pod-1"
Dec 13 04:03:52.924492 kernel: FS-Cache: Loaded
Dec 13 04:03:52.964609 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 04:03:52.964659 kernel: RPC: Registered udp transport module.
Dec 13 04:03:52.964682 kernel: RPC: Registered tcp transport module.
Dec 13 04:03:52.969500 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 04:03:53.022447 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 04:03:53.150266 kernel: NFS: Registering the id_resolver key type
Dec 13 04:03:53.150311 kernel: Key type id_resolver registered
Dec 13 04:03:53.150327 kernel: Key type id_legacy registered
Dec 13 04:03:53.456021 nfsidmap[3379]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-746d3338a6'
Dec 13 04:03:53.489847 nfsidmap[3380]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-746d3338a6'
Dec 13 04:03:53.524900 env[1562]: time="2024-12-13T04:03:53.524876174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0aabe17c-52e0-4d88-9a95-ed280c1e0b3f,Namespace:default,Attempt:0,}"
Dec 13 04:03:53.539486 systemd-networkd[1323]: lxc1330c4b2da2b: Link UP
Dec 13 04:03:53.560711 kernel: eth0: renamed from tmp66848
Dec 13 04:03:53.588128 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 04:03:53.588208 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1330c4b2da2b: link becomes ready
Dec 13 04:03:53.588388 systemd-networkd[1323]: lxc1330c4b2da2b: Gained carrier
Dec 13 04:03:53.706720 kubelet[1881]: E1213 04:03:53.706619 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:53.708437 env[1562]: time="2024-12-13T04:03:53.708373358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:03:53.708437 env[1562]: time="2024-12-13T04:03:53.708399446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:03:53.708437 env[1562]: time="2024-12-13T04:03:53.708407984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:03:53.708536 env[1562]: time="2024-12-13T04:03:53.708496722Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66848697d69d6ec21e78d89e1286e34f363cf081607b55d555a384ca9296ea83 pid=3439 runtime=io.containerd.runc.v2
Dec 13 04:03:53.714948 systemd[1]: Started cri-containerd-66848697d69d6ec21e78d89e1286e34f363cf081607b55d555a384ca9296ea83.scope.
Dec 13 04:03:53.742201 env[1562]: time="2024-12-13T04:03:53.742142485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0aabe17c-52e0-4d88-9a95-ed280c1e0b3f,Namespace:default,Attempt:0,} returns sandbox id \"66848697d69d6ec21e78d89e1286e34f363cf081607b55d555a384ca9296ea83\""
Dec 13 04:03:53.743017 env[1562]: time="2024-12-13T04:03:53.743000313Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 04:03:54.116600 env[1562]: time="2024-12-13T04:03:54.116300497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:54.119304 env[1562]: time="2024-12-13T04:03:54.119198036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:54.124420 env[1562]: time="2024-12-13T04:03:54.124322813Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:54.129650 env[1562]: time="2024-12-13T04:03:54.129540042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:03:54.132089 env[1562]: time="2024-12-13T04:03:54.131966903Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 04:03:54.137836 env[1562]: time="2024-12-13T04:03:54.137762178Z" level=info msg="CreateContainer within sandbox \"66848697d69d6ec21e78d89e1286e34f363cf081607b55d555a384ca9296ea83\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 04:03:54.152349 env[1562]: time="2024-12-13T04:03:54.152232832Z" level=info msg="CreateContainer within sandbox \"66848697d69d6ec21e78d89e1286e34f363cf081607b55d555a384ca9296ea83\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a8f3076f3fab5b4c8cbc22871f4882c8414d5aef258085057c08328a8c0c0b8c\""
Dec 13 04:03:54.153300 env[1562]: time="2024-12-13T04:03:54.153232559Z" level=info msg="StartContainer for \"a8f3076f3fab5b4c8cbc22871f4882c8414d5aef258085057c08328a8c0c0b8c\""
Dec 13 04:03:54.168727 systemd[1]: Started cri-containerd-a8f3076f3fab5b4c8cbc22871f4882c8414d5aef258085057c08328a8c0c0b8c.scope.
Dec 13 04:03:54.180589 env[1562]: time="2024-12-13T04:03:54.180534654Z" level=info msg="StartContainer for \"a8f3076f3fab5b4c8cbc22871f4882c8414d5aef258085057c08328a8c0c0b8c\" returns successfully"
Dec 13 04:03:54.707708 kubelet[1881]: E1213 04:03:54.707587 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:54.929187 kubelet[1881]: I1213 04:03:54.929152 1881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.537012683 podStartE2EDuration="16.929140237s" podCreationTimestamp="2024-12-13 04:03:38 +0000 UTC" firstStartedPulling="2024-12-13 04:03:53.742838438 +0000 UTC m=+43.331458552" lastFinishedPulling="2024-12-13 04:03:54.134965896 +0000 UTC m=+43.723586106" observedRunningTime="2024-12-13 04:03:54.929022388 +0000 UTC m=+44.517642507" watchObservedRunningTime="2024-12-13 04:03:54.929140237 +0000 UTC m=+44.517760356"
Dec 13 04:03:55.048065 systemd-networkd[1323]: lxc1330c4b2da2b: Gained IPv6LL
Dec 13 04:03:55.707957 kubelet[1881]: E1213 04:03:55.707839 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:56.708839 kubelet[1881]: E1213 04:03:56.708716 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:57.709158 kubelet[1881]: E1213 04:03:57.709036 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:58.709331 kubelet[1881]: E1213 04:03:58.709219 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:03:59.709971 kubelet[1881]: E1213 04:03:59.709861 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:04:00.710306 kubelet[1881]: E1213 04:04:00.710190 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:04:01.137921 env[1562]: time="2024-12-13T04:04:01.137843267Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 04:04:01.140897 env[1562]: time="2024-12-13T04:04:01.140882562Z" level=info msg="StopContainer for \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\" with timeout 2 (s)"
Dec 13 04:04:01.141008 env[1562]: time="2024-12-13T04:04:01.140994316Z" level=info msg="Stop container \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\" with signal terminated"
Dec 13 04:04:01.144120 systemd-networkd[1323]: lxc_health: Link DOWN
Dec 13 04:04:01.144124 systemd-networkd[1323]: lxc_health: Lost carrier
Dec 13 04:04:01.206832 systemd[1]: cri-containerd-fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609.scope: Deactivated successfully.
Dec 13 04:04:01.207025 systemd[1]: cri-containerd-fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609.scope: Consumed 4.519s CPU time.
Dec 13 04:04:01.216355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609-rootfs.mount: Deactivated successfully.
Dec 13 04:04:01.711167 kubelet[1881]: E1213 04:04:01.711104 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:04:02.299688 env[1562]: time="2024-12-13T04:04:02.299576202Z" level=info msg="shim disconnected" id=fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609
Dec 13 04:04:02.300711 env[1562]: time="2024-12-13T04:04:02.299692305Z" level=warning msg="cleaning up after shim disconnected" id=fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609 namespace=k8s.io
Dec 13 04:04:02.300711 env[1562]: time="2024-12-13T04:04:02.299734605Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:02.316973 env[1562]: time="2024-12-13T04:04:02.316868370Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3582 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:02.320119 env[1562]: time="2024-12-13T04:04:02.320003320Z" level=info msg="StopContainer for \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\" returns successfully"
Dec 13 04:04:02.321336 env[1562]: time="2024-12-13T04:04:02.321230642Z" level=info msg="StopPodSandbox for \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\""
Dec 13 04:04:02.321579 env[1562]: time="2024-12-13T04:04:02.321367130Z" level=info msg="Container to stop \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:02.321579 env[1562]: time="2024-12-13T04:04:02.321411105Z" level=info msg="Container to stop \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:02.321579 env[1562]: time="2024-12-13T04:04:02.321464128Z" level=info msg="Container to stop \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:02.321579 env[1562]: time="2024-12-13T04:04:02.321503110Z" level=info msg="Container to stop \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:02.321579 env[1562]: time="2024-12-13T04:04:02.321535021Z" level=info msg="Container to stop \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:02.327689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925-shm.mount: Deactivated successfully.
Dec 13 04:04:02.328972 systemd[1]: cri-containerd-6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925.scope: Deactivated successfully.
Dec 13 04:04:02.352240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925-rootfs.mount: Deactivated successfully.
Dec 13 04:04:02.366825 env[1562]: time="2024-12-13T04:04:02.366696379Z" level=info msg="shim disconnected" id=6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925
Dec 13 04:04:02.366825 env[1562]: time="2024-12-13T04:04:02.366819975Z" level=warning msg="cleaning up after shim disconnected" id=6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925 namespace=k8s.io
Dec 13 04:04:02.367226 env[1562]: time="2024-12-13T04:04:02.366852035Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:02.383859 env[1562]: time="2024-12-13T04:04:02.383750164Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3612 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:02.384545 env[1562]: time="2024-12-13T04:04:02.384422784Z" level=info msg="TearDown network for sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" successfully"
Dec 13 04:04:02.384545 env[1562]: time="2024-12-13T04:04:02.384520108Z" level=info msg="StopPodSandbox for \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" returns successfully"
Dec 13 04:04:02.562277 kubelet[1881]: I1213 04:04:02.562068 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k7zp\" (UniqueName: \"kubernetes.io/projected/b1e35ec7-2ef7-4916-948d-14630af205e3-kube-api-access-5k7zp\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.562277 kubelet[1881]: I1213 04:04:02.562156 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-bpf-maps\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.562277 kubelet[1881]: I1213 04:04:02.562215 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cni-path\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.562277 kubelet[1881]: I1213 04:04:02.562263 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-cgroup\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.563109 kubelet[1881]: I1213 04:04:02.562308 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-etc-cni-netd\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.563109 kubelet[1881]: I1213 04:04:02.562365 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1e35ec7-2ef7-4916-948d-14630af205e3-clustermesh-secrets\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.563109 kubelet[1881]: I1213 04:04:02.562377 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.563109 kubelet[1881]: I1213 04:04:02.562420 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cni-path" (OuterVolumeSpecName: "cni-path") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.563109 kubelet[1881]: I1213 04:04:02.562424 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-lib-modules\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.563883 kubelet[1881]: I1213 04:04:02.562538 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.563883 kubelet[1881]: I1213 04:04:02.562527 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.563883 kubelet[1881]: I1213 04:04:02.562600 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1e35ec7-2ef7-4916-948d-14630af205e3-hubble-tls\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.563883 kubelet[1881]: I1213 04:04:02.562581 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.563883 kubelet[1881]: I1213 04:04:02.562654 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-run\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.564704 kubelet[1881]: I1213 04:04:02.562713 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-config-path\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.564704 kubelet[1881]: I1213 04:04:02.562713 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.564704 kubelet[1881]: I1213 04:04:02.562760 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-hostproc\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.564704 kubelet[1881]: I1213 04:04:02.562832 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-host-proc-sys-net\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.564704 kubelet[1881]: I1213 04:04:02.562905 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-xtables-lock\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.564704 kubelet[1881]: I1213 04:04:02.562900 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-hostproc" (OuterVolumeSpecName: "hostproc") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.565612 kubelet[1881]: I1213 04:04:02.562956 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-host-proc-sys-kernel\") pod \"b1e35ec7-2ef7-4916-948d-14630af205e3\" (UID: \"b1e35ec7-2ef7-4916-948d-14630af205e3\") "
Dec 13 04:04:02.565612 kubelet[1881]: I1213 04:04:02.562997 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.565612 kubelet[1881]: I1213 04:04:02.563021 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.565612 kubelet[1881]: I1213 04:04:02.563042 1881 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-bpf-maps\") on node \"10.67.80.35\" DevicePath \"\""
Dec 13 04:04:02.565612 kubelet[1881]: I1213 04:04:02.563098 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:02.565612 kubelet[1881]: I1213 04:04:02.563128 1881 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cni-path\") on node \"10.67.80.35\" DevicePath \"\""
Dec 13 04:04:02.566307 kubelet[1881]: I1213 04:04:02.563182 1881 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-cgroup\") on node \"10.67.80.35\" DevicePath \"\""
Dec 13 04:04:02.566307 kubelet[1881]: I1213 04:04:02.563225 1881 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-etc-cni-netd\") on node \"10.67.80.35\" DevicePath \"\""
Dec 13 04:04:02.566307 kubelet[1881]: I1213 04:04:02.563269 1881 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-lib-modules\") on node \"10.67.80.35\" DevicePath \"\""
Dec 13 04:04:02.566307 kubelet[1881]: I1213 04:04:02.563314 1881 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-run\") on node \"10.67.80.35\" DevicePath \"\""
Dec 13 04:04:02.566307 kubelet[1881]: I1213 04:04:02.563357 1881 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-hostproc\") on node \"10.67.80.35\" DevicePath \"\""
Dec 13 04:04:02.567490 kubelet[1881]: I1213 04:04:02.567428 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 04:04:02.567546 kubelet[1881]: I1213 04:04:02.567527 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e35ec7-2ef7-4916-948d-14630af205e3-kube-api-access-5k7zp" (OuterVolumeSpecName: "kube-api-access-5k7zp") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "kube-api-access-5k7zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:04:02.567570 kubelet[1881]: I1213 04:04:02.567541 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e35ec7-2ef7-4916-948d-14630af205e3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:04:02.567570 kubelet[1881]: I1213 04:04:02.567546 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1e35ec7-2ef7-4916-948d-14630af205e3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b1e35ec7-2ef7-4916-948d-14630af205e3" (UID: "b1e35ec7-2ef7-4916-948d-14630af205e3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:04:02.568416 systemd[1]: var-lib-kubelet-pods-b1e35ec7\x2d2ef7\x2d4916\x2d948d\x2d14630af205e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5k7zp.mount: Deactivated successfully.
Dec 13 04:04:02.568523 systemd[1]: var-lib-kubelet-pods-b1e35ec7\x2d2ef7\x2d4916\x2d948d\x2d14630af205e3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 04:04:02.568586 systemd[1]: var-lib-kubelet-pods-b1e35ec7\x2d2ef7\x2d4916\x2d948d\x2d14630af205e3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:04:02.663747 kubelet[1881]: I1213 04:04:02.663666 1881 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5k7zp\" (UniqueName: \"kubernetes.io/projected/b1e35ec7-2ef7-4916-948d-14630af205e3-kube-api-access-5k7zp\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:02.663747 kubelet[1881]: I1213 04:04:02.663738 1881 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1e35ec7-2ef7-4916-948d-14630af205e3-clustermesh-secrets\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:02.663747 kubelet[1881]: I1213 04:04:02.663769 1881 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1e35ec7-2ef7-4916-948d-14630af205e3-hubble-tls\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:02.664309 kubelet[1881]: I1213 04:04:02.663796 1881 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1e35ec7-2ef7-4916-948d-14630af205e3-cilium-config-path\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:02.664309 kubelet[1881]: I1213 04:04:02.663823 1881 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-host-proc-sys-net\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:02.664309 kubelet[1881]: I1213 04:04:02.663849 1881 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-xtables-lock\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:02.664309 kubelet[1881]: I1213 04:04:02.663873 1881 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/b1e35ec7-2ef7-4916-948d-14630af205e3-host-proc-sys-kernel\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:02.712354 kubelet[1881]: E1213 04:04:02.712277 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:02.801584 systemd[1]: Removed slice kubepods-burstable-podb1e35ec7_2ef7_4916_948d_14630af205e3.slice. Dec 13 04:04:02.801684 systemd[1]: kubepods-burstable-podb1e35ec7_2ef7_4916_948d_14630af205e3.slice: Consumed 4.591s CPU time. Dec 13 04:04:02.943366 kubelet[1881]: I1213 04:04:02.943307 1881 scope.go:117] "RemoveContainer" containerID="fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609" Dec 13 04:04:02.946169 env[1562]: time="2024-12-13T04:04:02.946085026Z" level=info msg="RemoveContainer for \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\"" Dec 13 04:04:02.950324 env[1562]: time="2024-12-13T04:04:02.950253653Z" level=info msg="RemoveContainer for \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\" returns successfully" Dec 13 04:04:02.950766 kubelet[1881]: I1213 04:04:02.950721 1881 scope.go:117] "RemoveContainer" containerID="01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e" Dec 13 04:04:02.953244 env[1562]: time="2024-12-13T04:04:02.953169187Z" level=info msg="RemoveContainer for \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\"" Dec 13 04:04:02.957158 env[1562]: time="2024-12-13T04:04:02.957089128Z" level=info msg="RemoveContainer for \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\" returns successfully" Dec 13 04:04:02.957497 kubelet[1881]: I1213 04:04:02.957435 1881 scope.go:117] "RemoveContainer" containerID="fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579" Dec 13 04:04:02.959914 env[1562]: time="2024-12-13T04:04:02.959852009Z" level=info msg="RemoveContainer for 
\"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\"" Dec 13 04:04:02.965733 env[1562]: time="2024-12-13T04:04:02.965717983Z" level=info msg="RemoveContainer for \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\" returns successfully" Dec 13 04:04:02.965892 kubelet[1881]: I1213 04:04:02.965884 1881 scope.go:117] "RemoveContainer" containerID="a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832" Dec 13 04:04:02.966501 env[1562]: time="2024-12-13T04:04:02.966489141Z" level=info msg="RemoveContainer for \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\"" Dec 13 04:04:02.967417 env[1562]: time="2024-12-13T04:04:02.967376415Z" level=info msg="RemoveContainer for \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\" returns successfully" Dec 13 04:04:02.967464 kubelet[1881]: I1213 04:04:02.967432 1881 scope.go:117] "RemoveContainer" containerID="76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60" Dec 13 04:04:02.967889 env[1562]: time="2024-12-13T04:04:02.967876968Z" level=info msg="RemoveContainer for \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\"" Dec 13 04:04:02.968761 env[1562]: time="2024-12-13T04:04:02.968750355Z" level=info msg="RemoveContainer for \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\" returns successfully" Dec 13 04:04:02.968800 kubelet[1881]: I1213 04:04:02.968793 1881 scope.go:117] "RemoveContainer" containerID="fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609" Dec 13 04:04:02.968936 env[1562]: time="2024-12-13T04:04:02.968897024Z" level=error msg="ContainerStatus for \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\": not found" Dec 13 04:04:02.968987 kubelet[1881]: E1213 04:04:02.968977 1881 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\": not found" containerID="fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609" Dec 13 04:04:02.969027 kubelet[1881]: I1213 04:04:02.968991 1881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609"} err="failed to get container status \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe99d87caab78b65afc1a7fbba95bb40dd0f535bb579b61f058191204dd51609\": not found" Dec 13 04:04:02.969055 kubelet[1881]: I1213 04:04:02.969028 1881 scope.go:117] "RemoveContainer" containerID="01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e" Dec 13 04:04:02.969120 env[1562]: time="2024-12-13T04:04:02.969094013Z" level=error msg="ContainerStatus for \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\": not found" Dec 13 04:04:02.969166 kubelet[1881]: E1213 04:04:02.969159 1881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\": not found" containerID="01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e" Dec 13 04:04:02.969189 kubelet[1881]: I1213 04:04:02.969169 1881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e"} err="failed to get container status 
\"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\": rpc error: code = NotFound desc = an error occurred when try to find container \"01cf5f53e3d9f5c88a371158342b3cacc44d754dcd168c54bbf834d7c9cdf48e\": not found" Dec 13 04:04:02.969189 kubelet[1881]: I1213 04:04:02.969177 1881 scope.go:117] "RemoveContainer" containerID="fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579" Dec 13 04:04:02.969282 env[1562]: time="2024-12-13T04:04:02.969257512Z" level=error msg="ContainerStatus for \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\": not found" Dec 13 04:04:02.969324 kubelet[1881]: E1213 04:04:02.969316 1881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\": not found" containerID="fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579" Dec 13 04:04:02.969346 kubelet[1881]: I1213 04:04:02.969326 1881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579"} err="failed to get container status \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd40a0c0c293eea092177bc1a8f6ab98e24b296220cbb0ddc1b717ad17a95579\": not found" Dec 13 04:04:02.969346 kubelet[1881]: I1213 04:04:02.969335 1881 scope.go:117] "RemoveContainer" containerID="a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832" Dec 13 04:04:02.969426 env[1562]: time="2024-12-13T04:04:02.969405047Z" level=error msg="ContainerStatus for \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\": not found" Dec 13 04:04:02.969469 kubelet[1881]: E1213 04:04:02.969462 1881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\": not found" containerID="a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832" Dec 13 04:04:02.969497 kubelet[1881]: I1213 04:04:02.969470 1881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832"} err="failed to get container status \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\": rpc error: code = NotFound desc = an error occurred when try to find container \"a427c4293c43b20ead3a931ed5b670fabca09e733cb488305a560f29b95e9832\": not found" Dec 13 04:04:02.969497 kubelet[1881]: I1213 04:04:02.969478 1881 scope.go:117] "RemoveContainer" containerID="76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60" Dec 13 04:04:02.969553 env[1562]: time="2024-12-13T04:04:02.969533215Z" level=error msg="ContainerStatus for \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\": not found" Dec 13 04:04:02.969592 kubelet[1881]: E1213 04:04:02.969584 1881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\": not found" containerID="76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60" Dec 13 04:04:02.969617 kubelet[1881]: I1213 04:04:02.969595 1881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60"} err="failed to get container status \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\": rpc error: code = NotFound desc = an error occurred when try to find container \"76471fdfbf12d7efd9dc3d57b2a4bd9cfb43fb2470b0d7443998d216f6690d60\": not found" Dec 13 04:04:03.712683 kubelet[1881]: E1213 04:04:03.712565 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:03.752934 kubelet[1881]: E1213 04:04:03.752870 1881 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1e35ec7-2ef7-4916-948d-14630af205e3" containerName="cilium-agent" Dec 13 04:04:03.752934 kubelet[1881]: E1213 04:04:03.752923 1881 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1e35ec7-2ef7-4916-948d-14630af205e3" containerName="apply-sysctl-overwrites" Dec 13 04:04:03.752934 kubelet[1881]: E1213 04:04:03.752948 1881 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1e35ec7-2ef7-4916-948d-14630af205e3" containerName="clean-cilium-state" Dec 13 04:04:03.753532 kubelet[1881]: E1213 04:04:03.752965 1881 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1e35ec7-2ef7-4916-948d-14630af205e3" containerName="mount-bpf-fs" Dec 13 04:04:03.753532 kubelet[1881]: E1213 04:04:03.752982 1881 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1e35ec7-2ef7-4916-948d-14630af205e3" containerName="mount-cgroup" Dec 13 04:04:03.753532 kubelet[1881]: I1213 04:04:03.753032 1881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e35ec7-2ef7-4916-948d-14630af205e3" containerName="cilium-agent" Dec 13 04:04:03.769010 systemd[1]: Created slice kubepods-besteffort-podcbe86116_3924_4b49_943c_489e1311127b.slice. 
Dec 13 04:04:03.779948 systemd[1]: Created slice kubepods-burstable-podbebd20ce_3e7f_4e33_92df_8518aed953d7.slice. Dec 13 04:04:03.870746 kubelet[1881]: I1213 04:04:03.870617 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbe86116-3924-4b49-943c-489e1311127b-cilium-config-path\") pod \"cilium-operator-5d85765b45-q6snw\" (UID: \"cbe86116-3924-4b49-943c-489e1311127b\") " pod="kube-system/cilium-operator-5d85765b45-q6snw" Dec 13 04:04:03.870746 kubelet[1881]: I1213 04:04:03.870718 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cni-path\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871197 kubelet[1881]: I1213 04:04:03.870783 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-config-path\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871197 kubelet[1881]: I1213 04:04:03.870832 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bebd20ce-3e7f-4e33-92df-8518aed953d7-hubble-tls\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871197 kubelet[1881]: I1213 04:04:03.870882 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxpsq\" (UniqueName: \"kubernetes.io/projected/bebd20ce-3e7f-4e33-92df-8518aed953d7-kube-api-access-vxpsq\") pod \"cilium-b6jjj\" (UID: 
\"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871197 kubelet[1881]: I1213 04:04:03.870937 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-lib-modules\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871197 kubelet[1881]: I1213 04:04:03.870983 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bebd20ce-3e7f-4e33-92df-8518aed953d7-clustermesh-secrets\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871784 kubelet[1881]: I1213 04:04:03.871029 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-host-proc-sys-net\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871784 kubelet[1881]: I1213 04:04:03.871073 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-host-proc-sys-kernel\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871784 kubelet[1881]: I1213 04:04:03.871120 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-bpf-maps\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871784 kubelet[1881]: 
I1213 04:04:03.871166 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-cgroup\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.871784 kubelet[1881]: I1213 04:04:03.871210 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-ipsec-secrets\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.872314 kubelet[1881]: I1213 04:04:03.871360 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g87zb\" (UniqueName: \"kubernetes.io/projected/cbe86116-3924-4b49-943c-489e1311127b-kube-api-access-g87zb\") pod \"cilium-operator-5d85765b45-q6snw\" (UID: \"cbe86116-3924-4b49-943c-489e1311127b\") " pod="kube-system/cilium-operator-5d85765b45-q6snw" Dec 13 04:04:03.872314 kubelet[1881]: I1213 04:04:03.871480 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-run\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.872314 kubelet[1881]: I1213 04:04:03.871556 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-hostproc\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.872314 kubelet[1881]: I1213 04:04:03.871673 1881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-etc-cni-netd\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.872314 kubelet[1881]: I1213 04:04:03.871755 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-xtables-lock\") pod \"cilium-b6jjj\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " pod="kube-system/cilium-b6jjj" Dec 13 04:04:03.910368 kubelet[1881]: E1213 04:04:03.910229 1881 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-vxpsq lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-b6jjj" podUID="bebd20ce-3e7f-4e33-92df-8518aed953d7" Dec 13 04:04:04.073889 kubelet[1881]: I1213 04:04:04.073651 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-lib-modules\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.073889 kubelet[1881]: I1213 04:04:04.073814 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-bpf-maps\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.073889 kubelet[1881]: I1213 04:04:04.073812 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.073889 kubelet[1881]: I1213 04:04:04.073884 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-ipsec-secrets\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.074545 kubelet[1881]: I1213 04:04:04.073934 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-run\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.074545 kubelet[1881]: I1213 04:04:04.073952 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.074545 kubelet[1881]: I1213 04:04:04.073982 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-hostproc\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.074545 kubelet[1881]: I1213 04:04:04.074045 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxpsq\" (UniqueName: \"kubernetes.io/projected/bebd20ce-3e7f-4e33-92df-8518aed953d7-kube-api-access-vxpsq\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.074545 kubelet[1881]: I1213 04:04:04.074041 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.074545 kubelet[1881]: I1213 04:04:04.074099 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bebd20ce-3e7f-4e33-92df-8518aed953d7-hubble-tls\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.075393 kubelet[1881]: I1213 04:04:04.074120 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-hostproc" (OuterVolumeSpecName: "hostproc") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.075393 kubelet[1881]: I1213 04:04:04.074149 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bebd20ce-3e7f-4e33-92df-8518aed953d7-clustermesh-secrets\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.075393 kubelet[1881]: I1213 04:04:04.074195 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-host-proc-sys-kernel\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.075393 kubelet[1881]: I1213 04:04:04.074240 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-etc-cni-netd\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.075393 kubelet[1881]: I1213 04:04:04.074284 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-xtables-lock\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.076087 kubelet[1881]: I1213 04:04:04.074284 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.076087 kubelet[1881]: I1213 04:04:04.074328 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cni-path\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.076087 kubelet[1881]: I1213 04:04:04.074376 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-host-proc-sys-net\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.076087 kubelet[1881]: I1213 04:04:04.074372 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.076087 kubelet[1881]: I1213 04:04:04.074425 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-cgroup\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.076650 env[1562]: time="2024-12-13T04:04:04.075734880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-q6snw,Uid:cbe86116-3924-4b49-943c-489e1311127b,Namespace:kube-system,Attempt:0,}" Dec 13 04:04:04.077335 kubelet[1881]: I1213 04:04:04.074403 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.077335 kubelet[1881]: I1213 04:04:04.074483 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.077335 kubelet[1881]: I1213 04:04:04.074523 1881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-config-path\") pod \"bebd20ce-3e7f-4e33-92df-8518aed953d7\" (UID: \"bebd20ce-3e7f-4e33-92df-8518aed953d7\") " Dec 13 04:04:04.077335 kubelet[1881]: I1213 04:04:04.074508 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cni-path" (OuterVolumeSpecName: "cni-path") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.077335 kubelet[1881]: I1213 04:04:04.074595 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.077984 kubelet[1881]: I1213 04:04:04.074761 1881 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-host-proc-sys-kernel\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.077984 kubelet[1881]: I1213 04:04:04.074833 1881 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-etc-cni-netd\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.077984 kubelet[1881]: I1213 04:04:04.074871 1881 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-xtables-lock\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.077984 kubelet[1881]: I1213 04:04:04.074900 1881 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cni-path\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.077984 kubelet[1881]: I1213 04:04:04.074943 1881 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-host-proc-sys-net\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.077984 kubelet[1881]: I1213 04:04:04.074968 1881 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-cgroup\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.077984 kubelet[1881]: I1213 04:04:04.074991 1881 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-lib-modules\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.077984 kubelet[1881]: 
I1213 04:04:04.075015 1881 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-bpf-maps\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.078147 kubelet[1881]: I1213 04:04:04.075037 1881 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-run\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.078147 kubelet[1881]: I1213 04:04:04.075059 1881 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bebd20ce-3e7f-4e33-92df-8518aed953d7-hostproc\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.078825 kubelet[1881]: I1213 04:04:04.078780 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:04:04.078889 kubelet[1881]: I1213 04:04:04.078816 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebd20ce-3e7f-4e33-92df-8518aed953d7-kube-api-access-vxpsq" (OuterVolumeSpecName: "kube-api-access-vxpsq") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "kube-api-access-vxpsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:04.078915 kubelet[1881]: I1213 04:04:04.078884 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebd20ce-3e7f-4e33-92df-8518aed953d7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). 
InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:04.079047 kubelet[1881]: I1213 04:04:04.079034 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:04.079083 kubelet[1881]: I1213 04:04:04.079048 1881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd20ce-3e7f-4e33-92df-8518aed953d7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bebd20ce-3e7f-4e33-92df-8518aed953d7" (UID: "bebd20ce-3e7f-4e33-92df-8518aed953d7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:04.079777 systemd[1]: var-lib-kubelet-pods-bebd20ce\x2d3e7f\x2d4e33\x2d92df\x2d8518aed953d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxpsq.mount: Deactivated successfully. Dec 13 04:04:04.079835 systemd[1]: var-lib-kubelet-pods-bebd20ce\x2d3e7f\x2d4e33\x2d92df\x2d8518aed953d7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:04:04.079870 systemd[1]: var-lib-kubelet-pods-bebd20ce\x2d3e7f\x2d4e33\x2d92df\x2d8518aed953d7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 04:04:04.079903 systemd[1]: var-lib-kubelet-pods-bebd20ce\x2d3e7f\x2d4e33\x2d92df\x2d8518aed953d7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:04:04.082983 env[1562]: time="2024-12-13T04:04:04.082950977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:04:04.082983 env[1562]: time="2024-12-13T04:04:04.082974549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:04:04.083067 env[1562]: time="2024-12-13T04:04:04.082986626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:04:04.083111 env[1562]: time="2024-12-13T04:04:04.083092467Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b636e79d84572a91efa825c4f9f7a920afb74a37bc38ef17e797a7d40f895b6 pid=3644 runtime=io.containerd.runc.v2 Dec 13 04:04:04.089293 systemd[1]: Started cri-containerd-3b636e79d84572a91efa825c4f9f7a920afb74a37bc38ef17e797a7d40f895b6.scope. Dec 13 04:04:04.114448 env[1562]: time="2024-12-13T04:04:04.114419329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-q6snw,Uid:cbe86116-3924-4b49-943c-489e1311127b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b636e79d84572a91efa825c4f9f7a920afb74a37bc38ef17e797a7d40f895b6\"" Dec 13 04:04:04.115162 env[1562]: time="2024-12-13T04:04:04.115146263Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 04:04:04.175481 kubelet[1881]: I1213 04:04:04.175333 1881 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-config-path\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.175481 kubelet[1881]: I1213 04:04:04.175404 1881 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vxpsq\" (UniqueName: \"kubernetes.io/projected/bebd20ce-3e7f-4e33-92df-8518aed953d7-kube-api-access-vxpsq\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 
04:04:04.175481 kubelet[1881]: I1213 04:04:04.175478 1881 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bebd20ce-3e7f-4e33-92df-8518aed953d7-cilium-ipsec-secrets\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.175986 kubelet[1881]: I1213 04:04:04.175517 1881 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bebd20ce-3e7f-4e33-92df-8518aed953d7-hubble-tls\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.175986 kubelet[1881]: I1213 04:04:04.175543 1881 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bebd20ce-3e7f-4e33-92df-8518aed953d7-clustermesh-secrets\") on node \"10.67.80.35\" DevicePath \"\"" Dec 13 04:04:04.712902 kubelet[1881]: E1213 04:04:04.712774 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:04.795989 kubelet[1881]: I1213 04:04:04.795907 1881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e35ec7-2ef7-4916-948d-14630af205e3" path="/var/lib/kubelet/pods/b1e35ec7-2ef7-4916-948d-14630af205e3/volumes" Dec 13 04:04:04.801733 systemd[1]: Removed slice kubepods-burstable-podbebd20ce_3e7f_4e33_92df_8518aed953d7.slice. Dec 13 04:04:05.018765 systemd[1]: Created slice kubepods-burstable-podb4837016_ab84_4212_80d3_967174e34f7f.slice. 
Dec 13 04:04:05.182552 kubelet[1881]: I1213 04:04:05.182504 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-bpf-maps\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182552 kubelet[1881]: I1213 04:04:05.182524 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-xtables-lock\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182552 kubelet[1881]: I1213 04:04:05.182536 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4837016-ab84-4212-80d3-967174e34f7f-clustermesh-secrets\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182552 kubelet[1881]: I1213 04:04:05.182545 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-cilium-cgroup\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182552 kubelet[1881]: I1213 04:04:05.182553 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-etc-cni-netd\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182739 kubelet[1881]: I1213 04:04:05.182561 1881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-lib-modules\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182739 kubelet[1881]: I1213 04:04:05.182569 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4837016-ab84-4212-80d3-967174e34f7f-cilium-config-path\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182739 kubelet[1881]: I1213 04:04:05.182592 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-host-proc-sys-net\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182739 kubelet[1881]: I1213 04:04:05.182610 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-host-proc-sys-kernel\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182739 kubelet[1881]: I1213 04:04:05.182620 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-cilium-run\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182739 kubelet[1881]: I1213 04:04:05.182630 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-hostproc\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182840 kubelet[1881]: I1213 04:04:05.182639 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4837016-ab84-4212-80d3-967174e34f7f-cni-path\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182840 kubelet[1881]: I1213 04:04:05.182650 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b4837016-ab84-4212-80d3-967174e34f7f-cilium-ipsec-secrets\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182840 kubelet[1881]: I1213 04:04:05.182660 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4837016-ab84-4212-80d3-967174e34f7f-hubble-tls\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.182840 kubelet[1881]: I1213 04:04:05.182669 1881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckw5r\" (UniqueName: \"kubernetes.io/projected/b4837016-ab84-4212-80d3-967174e34f7f-kube-api-access-ckw5r\") pod \"cilium-ww4mq\" (UID: \"b4837016-ab84-4212-80d3-967174e34f7f\") " pod="kube-system/cilium-ww4mq" Dec 13 04:04:05.338037 env[1562]: time="2024-12-13T04:04:05.337905145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ww4mq,Uid:b4837016-ab84-4212-80d3-967174e34f7f,Namespace:kube-system,Attempt:0,}" Dec 13 04:04:05.348462 env[1562]: time="2024-12-13T04:04:05.348340352Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:04:05.348462 env[1562]: time="2024-12-13T04:04:05.348390816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:04:05.348462 env[1562]: time="2024-12-13T04:04:05.348409752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:04:05.348764 env[1562]: time="2024-12-13T04:04:05.348677967Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8 pid=3687 runtime=io.containerd.runc.v2 Dec 13 04:04:05.365235 systemd[1]: Started cri-containerd-dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8.scope. Dec 13 04:04:05.395286 env[1562]: time="2024-12-13T04:04:05.395258046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ww4mq,Uid:b4837016-ab84-4212-80d3-967174e34f7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\"" Dec 13 04:04:05.396831 env[1562]: time="2024-12-13T04:04:05.396792114Z" level=info msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:04:05.401406 env[1562]: time="2024-12-13T04:04:05.401388814Z" level=info msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d50bea174f539b791d93be1e63477ecda92ed25acb8193a65484faf0619518e6\"" Dec 13 04:04:05.401745 env[1562]: time="2024-12-13T04:04:05.401706675Z" level=info msg="StartContainer for \"d50bea174f539b791d93be1e63477ecda92ed25acb8193a65484faf0619518e6\"" 
Dec 13 04:04:05.409088 systemd[1]: Started cri-containerd-d50bea174f539b791d93be1e63477ecda92ed25acb8193a65484faf0619518e6.scope. Dec 13 04:04:05.421042 env[1562]: time="2024-12-13T04:04:05.421016513Z" level=info msg="StartContainer for \"d50bea174f539b791d93be1e63477ecda92ed25acb8193a65484faf0619518e6\" returns successfully" Dec 13 04:04:05.425879 systemd[1]: cri-containerd-d50bea174f539b791d93be1e63477ecda92ed25acb8193a65484faf0619518e6.scope: Deactivated successfully. Dec 13 04:04:05.467151 env[1562]: time="2024-12-13T04:04:05.467084874Z" level=info msg="shim disconnected" id=d50bea174f539b791d93be1e63477ecda92ed25acb8193a65484faf0619518e6 Dec 13 04:04:05.467151 env[1562]: time="2024-12-13T04:04:05.467122602Z" level=warning msg="cleaning up after shim disconnected" id=d50bea174f539b791d93be1e63477ecda92ed25acb8193a65484faf0619518e6 namespace=k8s.io Dec 13 04:04:05.467151 env[1562]: time="2024-12-13T04:04:05.467131524Z" level=info msg="cleaning up dead shim" Dec 13 04:04:05.472378 env[1562]: time="2024-12-13T04:04:05.472356550Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3768 runtime=io.containerd.runc.v2\n" Dec 13 04:04:05.714027 kubelet[1881]: E1213 04:04:05.713948 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:05.742827 kubelet[1881]: E1213 04:04:05.742696 1881 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:04:05.964425 env[1562]: time="2024-12-13T04:04:05.964170619Z" level=info msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:04:05.978612 env[1562]: time="2024-12-13T04:04:05.978486505Z" level=info 
msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c0065136af61ca19932bcfbb45d6313c526b663bd2affd3c50084000a1a3b1b2\"" Dec 13 04:04:05.979805 env[1562]: time="2024-12-13T04:04:05.979693146Z" level=info msg="StartContainer for \"c0065136af61ca19932bcfbb45d6313c526b663bd2affd3c50084000a1a3b1b2\"" Dec 13 04:04:05.992867 systemd[1]: Started cri-containerd-c0065136af61ca19932bcfbb45d6313c526b663bd2affd3c50084000a1a3b1b2.scope. Dec 13 04:04:06.005906 env[1562]: time="2024-12-13T04:04:06.005880447Z" level=info msg="StartContainer for \"c0065136af61ca19932bcfbb45d6313c526b663bd2affd3c50084000a1a3b1b2\" returns successfully" Dec 13 04:04:06.009306 systemd[1]: cri-containerd-c0065136af61ca19932bcfbb45d6313c526b663bd2affd3c50084000a1a3b1b2.scope: Deactivated successfully. Dec 13 04:04:06.019147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0065136af61ca19932bcfbb45d6313c526b663bd2affd3c50084000a1a3b1b2-rootfs.mount: Deactivated successfully. 
Dec 13 04:04:06.035455 env[1562]: time="2024-12-13T04:04:06.035406290Z" level=info msg="shim disconnected" id=c0065136af61ca19932bcfbb45d6313c526b663bd2affd3c50084000a1a3b1b2 Dec 13 04:04:06.035582 env[1562]: time="2024-12-13T04:04:06.035460991Z" level=warning msg="cleaning up after shim disconnected" id=c0065136af61ca19932bcfbb45d6313c526b663bd2affd3c50084000a1a3b1b2 namespace=k8s.io Dec 13 04:04:06.035582 env[1562]: time="2024-12-13T04:04:06.035472937Z" level=info msg="cleaning up dead shim" Dec 13 04:04:06.042042 env[1562]: time="2024-12-13T04:04:06.041981291Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3829 runtime=io.containerd.runc.v2\n" Dec 13 04:04:06.492063 env[1562]: time="2024-12-13T04:04:06.492014856Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:04:06.492602 env[1562]: time="2024-12-13T04:04:06.492588951Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:04:06.493269 env[1562]: time="2024-12-13T04:04:06.493235717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:04:06.493611 env[1562]: time="2024-12-13T04:04:06.493569574Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 04:04:06.494799 env[1562]: 
time="2024-12-13T04:04:06.494766452Z" level=info msg="CreateContainer within sandbox \"3b636e79d84572a91efa825c4f9f7a920afb74a37bc38ef17e797a7d40f895b6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 04:04:06.499355 env[1562]: time="2024-12-13T04:04:06.499319028Z" level=info msg="CreateContainer within sandbox \"3b636e79d84572a91efa825c4f9f7a920afb74a37bc38ef17e797a7d40f895b6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3346dc704646a613c27cbe66d631db8a049f9a5b785365c58b7dbf0ab6f1f39b\"" Dec 13 04:04:06.499693 env[1562]: time="2024-12-13T04:04:06.499646977Z" level=info msg="StartContainer for \"3346dc704646a613c27cbe66d631db8a049f9a5b785365c58b7dbf0ab6f1f39b\"" Dec 13 04:04:06.507993 systemd[1]: Started cri-containerd-3346dc704646a613c27cbe66d631db8a049f9a5b785365c58b7dbf0ab6f1f39b.scope. Dec 13 04:04:06.520429 env[1562]: time="2024-12-13T04:04:06.520404616Z" level=info msg="StartContainer for \"3346dc704646a613c27cbe66d631db8a049f9a5b785365c58b7dbf0ab6f1f39b\" returns successfully" Dec 13 04:04:06.714742 kubelet[1881]: E1213 04:04:06.714673 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:06.795771 kubelet[1881]: I1213 04:04:06.795547 1881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebd20ce-3e7f-4e33-92df-8518aed953d7" path="/var/lib/kubelet/pods/bebd20ce-3e7f-4e33-92df-8518aed953d7/volumes" Dec 13 04:04:06.974109 env[1562]: time="2024-12-13T04:04:06.973973511Z" level=info msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:04:06.985062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2074817095.mount: Deactivated successfully. 
Dec 13 04:04:06.987437 env[1562]: time="2024-12-13T04:04:06.987420986Z" level=info msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3cc30e4f32163ecc0ab88f1ce9d92d988fa145bf598eecc89611e8dd7ecac04b\"" Dec 13 04:04:06.987823 env[1562]: time="2024-12-13T04:04:06.987751780Z" level=info msg="StartContainer for \"3cc30e4f32163ecc0ab88f1ce9d92d988fa145bf598eecc89611e8dd7ecac04b\"" Dec 13 04:04:06.991689 kubelet[1881]: I1213 04:04:06.991635 1881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-q6snw" podStartSLOduration=1.6125424929999999 podStartE2EDuration="3.991621656s" podCreationTimestamp="2024-12-13 04:04:03 +0000 UTC" firstStartedPulling="2024-12-13 04:04:04.114982274 +0000 UTC m=+53.703602389" lastFinishedPulling="2024-12-13 04:04:06.49406144 +0000 UTC m=+56.082681552" observedRunningTime="2024-12-13 04:04:06.979093757 +0000 UTC m=+56.567713940" watchObservedRunningTime="2024-12-13 04:04:06.991621656 +0000 UTC m=+56.580241773" Dec 13 04:04:06.999185 systemd[1]: Started cri-containerd-3cc30e4f32163ecc0ab88f1ce9d92d988fa145bf598eecc89611e8dd7ecac04b.scope. Dec 13 04:04:07.011469 env[1562]: time="2024-12-13T04:04:07.011411784Z" level=info msg="StartContainer for \"3cc30e4f32163ecc0ab88f1ce9d92d988fa145bf598eecc89611e8dd7ecac04b\" returns successfully" Dec 13 04:04:07.012639 systemd[1]: cri-containerd-3cc30e4f32163ecc0ab88f1ce9d92d988fa145bf598eecc89611e8dd7ecac04b.scope: Deactivated successfully. 
Dec 13 04:04:07.180210 env[1562]: time="2024-12-13T04:04:07.180079276Z" level=info msg="shim disconnected" id=3cc30e4f32163ecc0ab88f1ce9d92d988fa145bf598eecc89611e8dd7ecac04b Dec 13 04:04:07.180210 env[1562]: time="2024-12-13T04:04:07.180207314Z" level=warning msg="cleaning up after shim disconnected" id=3cc30e4f32163ecc0ab88f1ce9d92d988fa145bf598eecc89611e8dd7ecac04b namespace=k8s.io Dec 13 04:04:07.180818 env[1562]: time="2024-12-13T04:04:07.180237838Z" level=info msg="cleaning up dead shim" Dec 13 04:04:07.196542 env[1562]: time="2024-12-13T04:04:07.196461026Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3937 runtime=io.containerd.runc.v2\n" Dec 13 04:04:07.714971 kubelet[1881]: E1213 04:04:07.714852 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:07.981928 env[1562]: time="2024-12-13T04:04:07.981739883Z" level=info msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 04:04:07.985158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cc30e4f32163ecc0ab88f1ce9d92d988fa145bf598eecc89611e8dd7ecac04b-rootfs.mount: Deactivated successfully. 
Dec 13 04:04:07.989019 env[1562]: time="2024-12-13T04:04:07.988982410Z" level=info msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c0e9720ba5c66e90f96426296e890448dba6eea5e9c1d98c69b14d6808dd055\"" Dec 13 04:04:07.989322 env[1562]: time="2024-12-13T04:04:07.989309749Z" level=info msg="StartContainer for \"6c0e9720ba5c66e90f96426296e890448dba6eea5e9c1d98c69b14d6808dd055\"" Dec 13 04:04:07.998457 systemd[1]: Started cri-containerd-6c0e9720ba5c66e90f96426296e890448dba6eea5e9c1d98c69b14d6808dd055.scope. Dec 13 04:04:08.010885 systemd[1]: cri-containerd-6c0e9720ba5c66e90f96426296e890448dba6eea5e9c1d98c69b14d6808dd055.scope: Deactivated successfully. Dec 13 04:04:08.011145 env[1562]: time="2024-12-13T04:04:08.010863008Z" level=info msg="StartContainer for \"6c0e9720ba5c66e90f96426296e890448dba6eea5e9c1d98c69b14d6808dd055\" returns successfully" Dec 13 04:04:08.037262 env[1562]: time="2024-12-13T04:04:08.037228291Z" level=info msg="shim disconnected" id=6c0e9720ba5c66e90f96426296e890448dba6eea5e9c1d98c69b14d6808dd055 Dec 13 04:04:08.037398 env[1562]: time="2024-12-13T04:04:08.037262594Z" level=warning msg="cleaning up after shim disconnected" id=6c0e9720ba5c66e90f96426296e890448dba6eea5e9c1d98c69b14d6808dd055 namespace=k8s.io Dec 13 04:04:08.037398 env[1562]: time="2024-12-13T04:04:08.037274940Z" level=info msg="cleaning up dead shim" Dec 13 04:04:08.042487 env[1562]: time="2024-12-13T04:04:08.042461287Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3991 runtime=io.containerd.runc.v2\n" Dec 13 04:04:08.715133 kubelet[1881]: E1213 04:04:08.715017 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:08.984677 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-6c0e9720ba5c66e90f96426296e890448dba6eea5e9c1d98c69b14d6808dd055-rootfs.mount: Deactivated successfully. Dec 13 04:04:08.986100 env[1562]: time="2024-12-13T04:04:08.986067023Z" level=info msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 04:04:08.991838 env[1562]: time="2024-12-13T04:04:08.991787605Z" level=info msg="CreateContainer within sandbox \"dfab12d249a92a0b735b20261d8e5cabd32f10c1e3fefaa0e2fc4bf74d0ffcd8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34c907489c339029511e96bffc45c7993d5552b6ec3f44025848015b29c80351\"" Dec 13 04:04:08.992076 env[1562]: time="2024-12-13T04:04:08.992030503Z" level=info msg="StartContainer for \"34c907489c339029511e96bffc45c7993d5552b6ec3f44025848015b29c80351\"" Dec 13 04:04:09.001200 systemd[1]: Started cri-containerd-34c907489c339029511e96bffc45c7993d5552b6ec3f44025848015b29c80351.scope. 
Dec 13 04:04:09.015286 env[1562]: time="2024-12-13T04:04:09.015226292Z" level=info msg="StartContainer for \"34c907489c339029511e96bffc45c7993d5552b6ec3f44025848015b29c80351\" returns successfully" Dec 13 04:04:09.176504 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 04:04:09.715513 kubelet[1881]: E1213 04:04:09.715401 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:10.006145 kubelet[1881]: I1213 04:04:10.005742 1881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ww4mq" podStartSLOduration=5.005724008 podStartE2EDuration="5.005724008s" podCreationTimestamp="2024-12-13 04:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:04:10.005540766 +0000 UTC m=+59.594160891" watchObservedRunningTime="2024-12-13 04:04:10.005724008 +0000 UTC m=+59.594344122" Dec 13 04:04:10.676148 kubelet[1881]: E1213 04:04:10.676100 1881 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:10.684429 env[1562]: time="2024-12-13T04:04:10.684409241Z" level=info msg="StopPodSandbox for \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\"" Dec 13 04:04:10.684683 env[1562]: time="2024-12-13T04:04:10.684504545Z" level=info msg="TearDown network for sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" successfully" Dec 13 04:04:10.684683 env[1562]: time="2024-12-13T04:04:10.684538020Z" level=info msg="StopPodSandbox for \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" returns successfully" Dec 13 04:04:10.684731 env[1562]: time="2024-12-13T04:04:10.684717388Z" level=info msg="RemovePodSandbox for \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\"" Dec 13 04:04:10.684757 env[1562]: 
time="2024-12-13T04:04:10.684732376Z" level=info msg="Forcibly stopping sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\"" Dec 13 04:04:10.684777 env[1562]: time="2024-12-13T04:04:10.684767343Z" level=info msg="TearDown network for sandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" successfully" Dec 13 04:04:10.686742 env[1562]: time="2024-12-13T04:04:10.686731296Z" level=info msg="RemovePodSandbox \"6986644e443a47d54d8db8d69cddfc505e58c27e1bfb3d36869a62b2d98d4925\" returns successfully" Dec 13 04:04:10.716174 kubelet[1881]: E1213 04:04:10.716123 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:11.716982 kubelet[1881]: E1213 04:04:11.716868 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:12.229558 systemd-networkd[1323]: lxc_health: Link UP Dec 13 04:04:12.254634 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 04:04:12.255502 systemd-networkd[1323]: lxc_health: Gained carrier Dec 13 04:04:12.717559 kubelet[1881]: E1213 04:04:12.717535 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:13.672589 systemd-networkd[1323]: lxc_health: Gained IPv6LL Dec 13 04:04:13.718408 kubelet[1881]: E1213 04:04:13.718348 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:14.719480 kubelet[1881]: E1213 04:04:14.719345 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:15.719981 kubelet[1881]: E1213 04:04:15.719870 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:16.720194 kubelet[1881]: E1213 04:04:16.720078 
1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:17.721506 kubelet[1881]: E1213 04:04:17.721315 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:18.722704 kubelet[1881]: E1213 04:04:18.722584 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:19.723260 kubelet[1881]: E1213 04:04:19.723177 1881 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"