Dec 13 14:41:34.561057 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:41:34.561070 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:41:34.561077 kernel: BIOS-provided physical RAM map:
Dec 13 14:41:34.561081 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 14:41:34.561084 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 14:41:34.561088 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 14:41:34.561092 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 14:41:34.561096 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 14:41:34.561100 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819e0fff] usable
Dec 13 14:41:34.561104 kernel: BIOS-e820: [mem 0x00000000819e1000-0x00000000819e1fff] ACPI NVS
Dec 13 14:41:34.561109 kernel: BIOS-e820: [mem 0x00000000819e2000-0x00000000819e2fff] reserved
Dec 13 14:41:34.561112 kernel: BIOS-e820: [mem 0x00000000819e3000-0x000000008afccfff] usable
Dec 13 14:41:34.561116 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Dec 13 14:41:34.561120 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Dec 13 14:41:34.561125 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Dec 13 14:41:34.561130 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Dec 13 14:41:34.561134 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Dec 13 14:41:34.561139 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Dec 13 14:41:34.561143 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 14:41:34.561147 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 14:41:34.561151 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 14:41:34.561156 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 14:41:34.561160 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 14:41:34.561164 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Dec 13 14:41:34.561168 kernel: NX (Execute Disable) protection: active
Dec 13 14:41:34.561172 kernel: SMBIOS 3.2.1 present.
Dec 13 14:41:34.561177 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022
Dec 13 14:41:34.561181 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 14:41:34.561186 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 14:41:34.561190 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:41:34.561195 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:41:34.561199 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Dec 13 14:41:34.561203 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:41:34.561208 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Dec 13 14:41:34.561212 kernel: Using GB pages for direct mapping
Dec 13 14:41:34.561217 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:41:34.561222 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 14:41:34.561226 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Dec 13 14:41:34.561230 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Dec 13 14:41:34.561235 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 14:41:34.561241 kernel: ACPI: FACS 0x000000008C66CF80 000040
Dec 13 14:41:34.561246 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Dec 13 14:41:34.561251 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Dec 13 14:41:34.561256 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Dec 13 14:41:34.561261 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 14:41:34.561265 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 14:41:34.561270 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Dec 13 14:41:34.561275 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Dec 13 14:41:34.561279 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Dec 13 14:41:34.561284 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 14:41:34.561289 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 14:41:34.561294 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Dec 13 14:41:34.561299 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 14:41:34.561304 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 14:41:34.561308 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 14:41:34.561313 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 14:41:34.561318 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 14:41:34.561322 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Dec 13 14:41:34.561328 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 14:41:34.561333 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Dec 13 14:41:34.561337 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Dec 13 14:41:34.561342 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Dec 13 14:41:34.561347 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Dec 13 14:41:34.561352 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Dec 13 14:41:34.561356 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Dec 13 14:41:34.561361 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Dec 13 14:41:34.561366 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Dec 13 14:41:34.561371 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Dec 13 14:41:34.561376 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Dec 13 14:41:34.561381 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Dec 13 14:41:34.561385 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Dec 13 14:41:34.561390 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Dec 13 14:41:34.561395 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Dec 13 14:41:34.561399 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Dec 13 14:41:34.561404 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Dec 13 14:41:34.561408 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Dec 13 14:41:34.561414 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Dec 13 14:41:34.561419 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Dec 13 14:41:34.561423 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Dec 13 14:41:34.561428 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Dec 13 14:41:34.561433 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Dec 13 14:41:34.561437 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Dec 13 14:41:34.561442 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Dec 13 14:41:34.561446 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Dec 13 14:41:34.561451 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Dec 13 14:41:34.561460 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Dec 13 14:41:34.561481 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Dec 13 14:41:34.561486 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Dec 13 14:41:34.561491 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Dec 13 14:41:34.561496 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Dec 13 14:41:34.561500 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Dec 13 14:41:34.561521 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Dec 13 14:41:34.561526 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Dec 13 14:41:34.561532 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Dec 13 14:41:34.561536 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Dec 13 14:41:34.561541 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Dec 13 14:41:34.561546 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Dec 13 14:41:34.561550 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Dec 13 14:41:34.561555 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Dec 13 14:41:34.561559 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Dec 13 14:41:34.561564 kernel: No NUMA configuration found
Dec 13 14:41:34.561569 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Dec 13 14:41:34.561574 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Dec 13 14:41:34.561579 kernel: Zone ranges:
Dec 13 14:41:34.561584 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:41:34.561588 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 14:41:34.561593 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Dec 13 14:41:34.561598 kernel: Movable zone start for each node
Dec 13 14:41:34.561602 kernel: Early memory node ranges
Dec 13 14:41:34.561607 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 14:41:34.561612 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 14:41:34.561616 kernel: node 0: [mem 0x0000000040400000-0x00000000819e0fff]
Dec 13 14:41:34.561622 kernel: node 0: [mem 0x00000000819e3000-0x000000008afccfff]
Dec 13 14:41:34.561626 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Dec 13 14:41:34.561631 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Dec 13 14:41:34.561636 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Dec 13 14:41:34.561640 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Dec 13 14:41:34.561645 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:41:34.561653 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 14:41:34.561658 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 14:41:34.561663 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 14:41:34.561668 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Dec 13 14:41:34.561674 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Dec 13 14:41:34.561679 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Dec 13 14:41:34.561684 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Dec 13 14:41:34.561689 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 14:41:34.561694 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 14:41:34.561699 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 14:41:34.561704 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 14:41:34.561710 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 14:41:34.561715 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 14:41:34.561720 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 14:41:34.561725 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 14:41:34.561730 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 14:41:34.561735 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 14:41:34.561740 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 14:41:34.561744 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 14:41:34.561749 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 14:41:34.561755 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 14:41:34.561760 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 14:41:34.561765 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 14:41:34.561770 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 14:41:34.561775 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 14:41:34.561780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 14:41:34.561785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:41:34.561790 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:41:34.561795 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:41:34.561801 kernel: TSC deadline timer available
Dec 13 14:41:34.561806 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 14:41:34.561811 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Dec 13 14:41:34.561816 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 14:41:34.561821 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:41:34.561826 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 14:41:34.561831 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 14:41:34.561836 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 14:41:34.561841 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 14:41:34.561846 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Dec 13 14:41:34.561851 kernel: Policy zone: Normal
Dec 13 14:41:34.561857 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:41:34.561862 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:41:34.561867 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 14:41:34.561872 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 14:41:34.561877 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:41:34.561883 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 730116K reserved, 0K cma-reserved)
Dec 13 14:41:34.561888 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 14:41:34.561893 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:41:34.561898 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:41:34.561903 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:41:34.561909 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:41:34.561914 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 14:41:34.561919 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:41:34.561924 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:41:34.561930 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:41:34.561935 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 14:41:34.561940 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 14:41:34.561945 kernel: random: crng init done
Dec 13 14:41:34.561950 kernel: Console: colour dummy device 80x25
Dec 13 14:41:34.561955 kernel: printk: console [tty0] enabled
Dec 13 14:41:34.561960 kernel: printk: console [ttyS1] enabled
Dec 13 14:41:34.561965 kernel: ACPI: Core revision 20210730
Dec 13 14:41:34.561970 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Dec 13 14:41:34.561975 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:41:34.561980 kernel: DMAR: Host address width 39
Dec 13 14:41:34.561985 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 14:41:34.561990 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 14:41:34.561995 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Dec 13 14:41:34.562000 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Dec 13 14:41:34.562005 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 14:41:34.562010 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 14:41:34.562015 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 14:41:34.562020 kernel: x2apic enabled
Dec 13 14:41:34.562026 kernel: Switched APIC routing to cluster x2apic.
Dec 13 14:41:34.562031 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 14:41:34.562036 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 14:41:34.562041 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 14:41:34.562046 kernel: process: using mwait in idle threads
Dec 13 14:41:34.562051 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:41:34.562056 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:41:34.562061 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:41:34.562066 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 14:41:34.562072 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 14:41:34.562077 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 14:41:34.562082 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 14:41:34.562087 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:41:34.562092 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 14:41:34.562097 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 14:41:34.562101 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:41:34.562106 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:41:34.562111 kernel: TAA: Mitigation: TSX disabled
Dec 13 14:41:34.562116 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 14:41:34.562121 kernel: SRBDS: Mitigation: Microcode
Dec 13 14:41:34.562127 kernel: GDS: Vulnerable: No microcode
Dec 13 14:41:34.562132 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:41:34.562137 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:41:34.562142 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:41:34.562147 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 14:41:34.562152 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 14:41:34.562157 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:41:34.562162 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 14:41:34.562166 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 14:41:34.562171 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 14:41:34.562176 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:41:34.562182 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:41:34.562187 kernel: LSM: Security Framework initializing
Dec 13 14:41:34.562192 kernel: SELinux: Initializing.
Dec 13 14:41:34.562197 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:41:34.562202 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:41:34.562207 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 14:41:34.562212 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 14:41:34.562217 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 14:41:34.562222 kernel: ... version: 4
Dec 13 14:41:34.562227 kernel: ... bit width: 48
Dec 13 14:41:34.562232 kernel: ... generic registers: 4
Dec 13 14:41:34.562237 kernel: ... value mask: 0000ffffffffffff
Dec 13 14:41:34.562242 kernel: ... max period: 00007fffffffffff
Dec 13 14:41:34.562247 kernel: ... fixed-purpose events: 3
Dec 13 14:41:34.562252 kernel: ... event mask: 000000070000000f
Dec 13 14:41:34.562257 kernel: signal: max sigframe size: 2032
Dec 13 14:41:34.562262 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:41:34.562267 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 14:41:34.562272 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:41:34.562277 kernel: x86: Booting SMP configuration:
Dec 13 14:41:34.562283 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Dec 13 14:41:34.562288 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:41:34.562293 kernel: #9 #10 #11 #12 #13 #14 #15
Dec 13 14:41:34.562298 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 14:41:34.562303 kernel: smpboot: Max logical packages: 1
Dec 13 14:41:34.562308 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 14:41:34.562313 kernel: devtmpfs: initialized
Dec 13 14:41:34.562318 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:41:34.562323 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819e1000-0x819e1fff] (4096 bytes)
Dec 13 14:41:34.562329 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Dec 13 14:41:34.562334 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:41:34.562339 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 14:41:34.562344 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:41:34.562349 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:41:34.562354 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:41:34.562359 kernel: audit: type=2000 audit(1734100889.041:1): state=initialized audit_enabled=0 res=1
Dec 13 14:41:34.562364 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:41:34.562370 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:41:34.562375 kernel: cpuidle: using governor menu
Dec 13 14:41:34.562380 kernel: ACPI: bus type PCI registered
Dec 13 14:41:34.562385 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:41:34.562390 kernel: dca service started, version 1.12.1
Dec 13 14:41:34.562395 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 14:41:34.562400 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Dec 13 14:41:34.562405 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:41:34.562410 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 14:41:34.562415 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:41:34.562420 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:41:34.562425 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:41:34.562430 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:41:34.562435 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:41:34.562440 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:41:34.562445 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:41:34.562450 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:41:34.562455 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:41:34.562463 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:41:34.562483 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 14:41:34.562489 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 14:41:34.562494 kernel: ACPI: SSDT 0xFFFF89E680218800 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Dec 13 14:41:34.562499 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Dec 13 14:41:34.562504 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 14:41:34.562523 kernel: ACPI: SSDT 0xFFFF89E681AE0800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Dec 13 14:41:34.562528 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 14:41:34.562533 kernel: ACPI: SSDT 0xFFFF89E681A53000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Dec 13 14:41:34.562538 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 14:41:34.562544 kernel: ACPI: SSDT 0xFFFF89E681B4B000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Dec 13 14:41:34.562549 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 14:41:34.562553 kernel: ACPI: SSDT 0xFFFF89E68014F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Dec 13 14:41:34.562558 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 14:41:34.562563 kernel: ACPI: SSDT 0xFFFF89E681AE3000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Dec 13 14:41:34.562568 kernel: ACPI: Interpreter enabled
Dec 13 14:41:34.562573 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:41:34.562578 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:41:34.562583 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 14:41:34.562589 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 14:41:34.562594 kernel: HEST: Table parsing has been initialized.
Dec 13 14:41:34.562599 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 14:41:34.562604 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:41:34.562609 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 14:41:34.562614 kernel: ACPI: PM: Power Resource [USBC]
Dec 13 14:41:34.562619 kernel: ACPI: PM: Power Resource [V0PR]
Dec 13 14:41:34.562624 kernel: ACPI: PM: Power Resource [V1PR]
Dec 13 14:41:34.562629 kernel: ACPI: PM: Power Resource [V2PR]
Dec 13 14:41:34.562635 kernel: ACPI: PM: Power Resource [WRST]
Dec 13 14:41:34.562640 kernel: ACPI: PM: Power Resource [FN00]
Dec 13 14:41:34.562644 kernel: ACPI: PM: Power Resource [FN01]
Dec 13 14:41:34.562649 kernel: ACPI: PM: Power Resource [FN02]
Dec 13 14:41:34.562654 kernel: ACPI: PM: Power Resource [FN03]
Dec 13 14:41:34.562659 kernel: ACPI: PM: Power Resource [FN04]
Dec 13 14:41:34.562664 kernel: ACPI: PM: Power Resource [PIN]
Dec 13 14:41:34.562669 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 14:41:34.562736 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:41:34.562783 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 14:41:34.562824 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 14:41:34.562831 kernel: PCI host bridge to bus 0000:00
Dec 13 14:41:34.562875 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:41:34.562912 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:41:34.562948 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:41:34.562985 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Dec 13 14:41:34.563023 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 14:41:34.563059 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 14:41:34.563108 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 14:41:34.563157 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 14:41:34.563200 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 14:41:34.563245 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 14:41:34.563288 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Dec 13 14:41:34.563335 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 14:41:34.563378 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Dec 13 14:41:34.563423 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 14:41:34.563485 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Dec 13 14:41:34.563545 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 14:41:34.563593 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 14:41:34.563634 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Dec 13 14:41:34.563675 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Dec 13 14:41:34.563719 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 14:41:34.563761 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 14:41:34.563807 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 14:41:34.563850 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 14:41:34.563895 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 14:41:34.563936 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Dec 13 14:41:34.563978 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 14:41:34.564022 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 14:41:34.564064 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Dec 13 14:41:34.564104 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 14:41:34.564151 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 14:41:34.564191 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Dec 13 14:41:34.564231 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 14:41:34.564276 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 14:41:34.564317 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Dec 13 14:41:34.564360 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Dec 13 14:41:34.564406 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Dec 13 14:41:34.564449 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Dec 13 14:41:34.564509 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Dec 13 14:41:34.564550 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Dec 13 14:41:34.564592 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 14:41:34.564637 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 14:41:34.564681 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 14:41:34.564726 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 14:41:34.564771 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 14:41:34.564819 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 14:41:34.564862 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 14:41:34.564910 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 14:41:34.564952 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 14:41:34.565001 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Dec 13 14:41:34.565045 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Dec 13 14:41:34.565093 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 14:41:34.565134 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 14:41:34.565182 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 14:41:34.565228 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 14:41:34.565269 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Dec 13 14:41:34.565312 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Dec 13 14:41:34.565359 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 14:41:34.565402 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 14:41:34.565452 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 14:41:34.565500 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 14:41:34.565557 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Dec 13 14:41:34.565600 kernel: pci 0000:01:00.0: PME# supported from D3cold
Dec 13 14:41:34.565644 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 14:41:34.565687 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 14:41:34.565735 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13
14:41:34.565782 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Dec 13 14:41:34.565824 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Dec 13 14:41:34.565866 kernel: pci 0000:01:00.1: PME# supported from D3cold Dec 13 14:41:34.565909 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Dec 13 14:41:34.565951 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Dec 13 14:41:34.565994 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 14:41:34.566035 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Dec 13 14:41:34.566079 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 14:41:34.566121 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Dec 13 14:41:34.566168 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Dec 13 14:41:34.566212 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Dec 13 14:41:34.566255 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Dec 13 14:41:34.566298 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Dec 13 14:41:34.566341 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Dec 13 14:41:34.566383 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 14:41:34.566427 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Dec 13 14:41:34.566491 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Dec 13 14:41:34.566553 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Dec 13 14:41:34.566600 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Dec 13 14:41:34.566643 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Dec 13 14:41:34.566688 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Dec 13 14:41:34.566730 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Dec 13 14:41:34.566775 kernel: pci 0000:04:00.0: reg 0x1c: [mem 
0x95380000-0x95383fff] Dec 13 14:41:34.566818 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Dec 13 14:41:34.566860 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Dec 13 14:41:34.566901 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Dec 13 14:41:34.566942 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Dec 13 14:41:34.566985 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Dec 13 14:41:34.567032 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Dec 13 14:41:34.567078 kernel: pci 0000:06:00.0: enabling Extended Tags Dec 13 14:41:34.567121 kernel: pci 0000:06:00.0: supports D1 D2 Dec 13 14:41:34.567164 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 14:41:34.567205 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Dec 13 14:41:34.567248 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Dec 13 14:41:34.567341 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Dec 13 14:41:34.567392 kernel: pci_bus 0000:07: extended config space not accessible Dec 13 14:41:34.567444 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Dec 13 14:41:34.567495 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Dec 13 14:41:34.567561 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Dec 13 14:41:34.567606 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Dec 13 14:41:34.567652 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:41:34.567697 kernel: pci 0000:07:00.0: supports D1 D2 Dec 13 14:41:34.567742 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 14:41:34.567785 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Dec 13 14:41:34.567830 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Dec 13 14:41:34.567874 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Dec 13 14:41:34.567881 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 
Dec 13 14:41:34.567887 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Dec 13 14:41:34.567893 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Dec 13 14:41:34.567898 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Dec 13 14:41:34.567903 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Dec 13 14:41:34.567909 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Dec 13 14:41:34.567914 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Dec 13 14:41:34.567921 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Dec 13 14:41:34.567927 kernel: iommu: Default domain type: Translated Dec 13 14:41:34.567932 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:41:34.567976 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Dec 13 14:41:34.568021 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:41:34.568067 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Dec 13 14:41:34.568074 kernel: vgaarb: loaded Dec 13 14:41:34.568080 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:41:34.568086 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:41:34.568092 kernel: PTP clock support registered Dec 13 14:41:34.568097 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:41:34.568103 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:41:34.568108 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Dec 13 14:41:34.568113 kernel: e820: reserve RAM buffer [mem 0x819e1000-0x83ffffff] Dec 13 14:41:34.568118 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Dec 13 14:41:34.568124 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Dec 13 14:41:34.568129 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Dec 13 14:41:34.568135 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Dec 13 14:41:34.568140 kernel: clocksource: Switched to clocksource tsc-early Dec 13 14:41:34.568145 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:41:34.568151 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:41:34.568156 kernel: pnp: PnP ACPI init Dec 13 14:41:34.568199 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Dec 13 14:41:34.568243 kernel: pnp 00:02: [dma 0 disabled] Dec 13 14:41:34.568284 kernel: pnp 00:03: [dma 0 disabled] Dec 13 14:41:34.568329 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Dec 13 14:41:34.568366 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Dec 13 14:41:34.568407 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Dec 13 14:41:34.568447 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Dec 13 14:41:34.568512 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Dec 13 14:41:34.568569 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Dec 13 14:41:34.568608 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Dec 13 14:41:34.568646 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Dec 13 14:41:34.568682 kernel: system 00:06: [mem 
0xfed90000-0xfed93fff] could not be reserved Dec 13 14:41:34.568720 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Dec 13 14:41:34.568756 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Dec 13 14:41:34.568796 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Dec 13 14:41:34.568834 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Dec 13 14:41:34.568872 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Dec 13 14:41:34.568910 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Dec 13 14:41:34.568946 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Dec 13 14:41:34.568985 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Dec 13 14:41:34.569022 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Dec 13 14:41:34.569062 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Dec 13 14:41:34.569070 kernel: pnp: PnP ACPI: found 10 devices Dec 13 14:41:34.569077 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:41:34.569082 kernel: NET: Registered PF_INET protocol family Dec 13 14:41:34.569087 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:41:34.569093 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 14:41:34.569098 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:41:34.569105 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:41:34.569110 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:41:34.569115 kernel: TCP: Hash tables configured (established 262144 bind 65536) Dec 13 14:41:34.569121 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 14:41:34.569127 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 
bytes, linear) Dec 13 14:41:34.569132 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:41:34.569138 kernel: NET: Registered PF_XDP protocol family Dec 13 14:41:34.569180 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Dec 13 14:41:34.569222 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Dec 13 14:41:34.569265 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Dec 13 14:41:34.569309 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Dec 13 14:41:34.569352 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Dec 13 14:41:34.569397 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Dec 13 14:41:34.569441 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Dec 13 14:41:34.569526 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 14:41:34.569569 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Dec 13 14:41:34.569610 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 14:41:34.569651 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Dec 13 14:41:34.569695 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Dec 13 14:41:34.569736 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Dec 13 14:41:34.569779 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Dec 13 14:41:34.569820 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Dec 13 14:41:34.569862 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Dec 13 14:41:34.569904 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Dec 13 14:41:34.569946 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Dec 13 14:41:34.569990 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Dec 13 14:41:34.570032 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Dec 13 14:41:34.570076 kernel: pci 0000:06:00.0: bridge window [mem 
0x94000000-0x950fffff] Dec 13 14:41:34.570117 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Dec 13 14:41:34.570160 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Dec 13 14:41:34.570201 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Dec 13 14:41:34.570241 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Dec 13 14:41:34.570278 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:41:34.570314 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:41:34.570353 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:41:34.570389 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Dec 13 14:41:34.570426 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Dec 13 14:41:34.570494 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Dec 13 14:41:34.570549 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 14:41:34.570594 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Dec 13 14:41:34.570634 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Dec 13 14:41:34.570677 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Dec 13 14:41:34.570716 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Dec 13 14:41:34.570759 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Dec 13 14:41:34.570797 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Dec 13 14:41:34.570838 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Dec 13 14:41:34.570879 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Dec 13 14:41:34.570887 kernel: PCI: CLS 64 bytes, default 64 Dec 13 14:41:34.570893 kernel: DMAR: No ATSR found Dec 13 14:41:34.570899 kernel: DMAR: No SATC found Dec 13 14:41:34.570904 kernel: DMAR: dmar0: Using Queued invalidation Dec 13 14:41:34.570946 kernel: pci 0000:00:00.0: Adding to iommu group 0 Dec 13 
14:41:34.570989 kernel: pci 0000:00:01.0: Adding to iommu group 1 Dec 13 14:41:34.571031 kernel: pci 0000:00:08.0: Adding to iommu group 2 Dec 13 14:41:34.571073 kernel: pci 0000:00:12.0: Adding to iommu group 3 Dec 13 14:41:34.571116 kernel: pci 0000:00:14.0: Adding to iommu group 4 Dec 13 14:41:34.571157 kernel: pci 0000:00:14.2: Adding to iommu group 4 Dec 13 14:41:34.571198 kernel: pci 0000:00:15.0: Adding to iommu group 5 Dec 13 14:41:34.571239 kernel: pci 0000:00:15.1: Adding to iommu group 5 Dec 13 14:41:34.571279 kernel: pci 0000:00:16.0: Adding to iommu group 6 Dec 13 14:41:34.571320 kernel: pci 0000:00:16.1: Adding to iommu group 6 Dec 13 14:41:34.571361 kernel: pci 0000:00:16.4: Adding to iommu group 6 Dec 13 14:41:34.571401 kernel: pci 0000:00:17.0: Adding to iommu group 7 Dec 13 14:41:34.571442 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Dec 13 14:41:34.571530 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Dec 13 14:41:34.571574 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Dec 13 14:41:34.571615 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Dec 13 14:41:34.571657 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Dec 13 14:41:34.571698 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Dec 13 14:41:34.571740 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Dec 13 14:41:34.571782 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Dec 13 14:41:34.571823 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Dec 13 14:41:34.571868 kernel: pci 0000:01:00.0: Adding to iommu group 1 Dec 13 14:41:34.571911 kernel: pci 0000:01:00.1: Adding to iommu group 1 Dec 13 14:41:34.571954 kernel: pci 0000:03:00.0: Adding to iommu group 15 Dec 13 14:41:34.571997 kernel: pci 0000:04:00.0: Adding to iommu group 16 Dec 13 14:41:34.572041 kernel: pci 0000:06:00.0: Adding to iommu group 17 Dec 13 14:41:34.572086 kernel: pci 0000:07:00.0: Adding to iommu group 17 Dec 13 14:41:34.572094 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Dec 13 
14:41:34.572099 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:41:34.572106 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Dec 13 14:41:34.572111 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Dec 13 14:41:34.572117 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Dec 13 14:41:34.572122 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Dec 13 14:41:34.572128 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Dec 13 14:41:34.572174 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Dec 13 14:41:34.572182 kernel: Initialise system trusted keyrings Dec 13 14:41:34.572187 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Dec 13 14:41:34.572194 kernel: Key type asymmetric registered Dec 13 14:41:34.572199 kernel: Asymmetric key parser 'x509' registered Dec 13 14:41:34.572204 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:41:34.572210 kernel: io scheduler mq-deadline registered Dec 13 14:41:34.572215 kernel: io scheduler kyber registered Dec 13 14:41:34.572220 kernel: io scheduler bfq registered Dec 13 14:41:34.572262 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Dec 13 14:41:34.572303 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Dec 13 14:41:34.572345 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Dec 13 14:41:34.572388 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Dec 13 14:41:34.572430 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Dec 13 14:41:34.572497 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Dec 13 14:41:34.572564 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Dec 13 14:41:34.572572 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Dec 13 14:41:34.572578 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Dec 13 14:41:34.572583 kernel: pstore: Registered erst as persistent store backend Dec 13 14:41:34.572590 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:41:34.572595 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:41:34.572601 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:41:34.572606 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 14:41:34.572612 kernel: hpet_acpi_add: no address or irqs in _CRS Dec 13 14:41:34.572655 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Dec 13 14:41:34.572663 kernel: i8042: PNP: No PS/2 controller found. Dec 13 14:41:34.572701 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Dec 13 14:41:34.572740 kernel: rtc_cmos rtc_cmos: registered as rtc0 Dec 13 14:41:34.572778 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T14:41:33 UTC (1734100893) Dec 13 14:41:34.572817 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Dec 13 14:41:34.572824 kernel: fail to initialize ptp_kvm Dec 13 14:41:34.572829 kernel: intel_pstate: Intel P-state driver initializing Dec 13 14:41:34.572835 kernel: intel_pstate: Disabling energy efficiency optimization Dec 13 14:41:34.572840 kernel: intel_pstate: HWP enabled Dec 13 14:41:34.572846 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Dec 13 14:41:34.572851 kernel: vesafb: scrolling: redraw Dec 13 14:41:34.572857 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Dec 13 14:41:34.572863 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000d087c9c9, using 768k, total 768k Dec 13 14:41:34.572868 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:41:34.572874 kernel: fb0: VESA VGA frame buffer device Dec 13 14:41:34.572879 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:41:34.572884 kernel: Segment Routing with IPv6 Dec 13 14:41:34.572890 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 
14:41:34.572895 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:41:34.572900 kernel: Key type dns_resolver registered Dec 13 14:41:34.572906 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Dec 13 14:41:34.572912 kernel: microcode: Microcode Update Driver: v2.2. Dec 13 14:41:34.572917 kernel: IPI shorthand broadcast: enabled Dec 13 14:41:34.572923 kernel: sched_clock: Marking stable (1734349493, 1339414604)->(4519038797, -1445274700) Dec 13 14:41:34.572928 kernel: registered taskstats version 1 Dec 13 14:41:34.572933 kernel: Loading compiled-in X.509 certificates Dec 13 14:41:34.572939 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:41:34.572944 kernel: Key type .fscrypt registered Dec 13 14:41:34.572949 kernel: Key type fscrypt-provisioning registered Dec 13 14:41:34.572955 kernel: pstore: Using crash dump compression: deflate Dec 13 14:41:34.572961 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:41:34.572966 kernel: ima: No architecture policies found Dec 13 14:41:34.572971 kernel: clk: Disabling unused clocks Dec 13 14:41:34.572977 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:41:34.572982 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:41:34.572988 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:41:34.572993 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:41:34.572998 kernel: Run /init as init process Dec 13 14:41:34.573004 kernel: with arguments: Dec 13 14:41:34.573010 kernel: /init Dec 13 14:41:34.573015 kernel: with environment: Dec 13 14:41:34.573020 kernel: HOME=/ Dec 13 14:41:34.573025 kernel: TERM=linux Dec 13 14:41:34.573030 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:41:34.573037 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 
+IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:41:34.573044 systemd[1]: Detected architecture x86-64. Dec 13 14:41:34.573050 systemd[1]: Running in initrd. Dec 13 14:41:34.573055 systemd[1]: No hostname configured, using default hostname. Dec 13 14:41:34.573061 systemd[1]: Hostname set to . Dec 13 14:41:34.573066 systemd[1]: Initializing machine ID from random generator. Dec 13 14:41:34.573072 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:41:34.573077 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:41:34.573083 systemd[1]: Reached target cryptsetup.target. Dec 13 14:41:34.573088 systemd[1]: Reached target paths.target. Dec 13 14:41:34.573094 systemd[1]: Reached target slices.target. Dec 13 14:41:34.573099 systemd[1]: Reached target swap.target. Dec 13 14:41:34.573105 systemd[1]: Reached target timers.target. Dec 13 14:41:34.573110 systemd[1]: Listening on iscsid.socket. Dec 13 14:41:34.573116 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:41:34.573121 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:41:34.573127 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:41:34.573133 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:41:34.573138 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Dec 13 14:41:34.573144 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Dec 13 14:41:34.573149 kernel: clocksource: Switched to clocksource tsc Dec 13 14:41:34.573155 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:41:34.573160 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:41:34.573166 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:41:34.573171 systemd[1]: Reached target sockets.target. 
Dec 13 14:41:34.573177 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:41:34.573183 systemd[1]: Finished network-cleanup.service. Dec 13 14:41:34.573189 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:41:34.573194 systemd[1]: Starting systemd-journald.service... Dec 13 14:41:34.573200 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:41:34.573207 systemd-journald[268]: Journal started Dec 13 14:41:34.573233 systemd-journald[268]: Runtime Journal (/run/log/journal/f7bee3eff4e344aca904dc718b1c6803) is 8.0M, max 640.1M, 632.1M free. Dec 13 14:41:34.575804 systemd-modules-load[269]: Inserted module 'overlay' Dec 13 14:41:34.581000 audit: BPF prog-id=6 op=LOAD Dec 13 14:41:34.599502 kernel: audit: type=1334 audit(1734100894.581:2): prog-id=6 op=LOAD Dec 13 14:41:34.599517 systemd[1]: Starting systemd-resolved.service... Dec 13 14:41:34.648498 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:41:34.648515 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:41:34.681501 kernel: Bridge firewalling registered Dec 13 14:41:34.681550 systemd[1]: Started systemd-journald.service. Dec 13 14:41:34.695415 systemd-modules-load[269]: Inserted module 'br_netfilter' Dec 13 14:41:34.744823 kernel: audit: type=1130 audit(1734100894.703:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:34.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:41:34.698029 systemd-resolved[271]: Positive Trust Anchors: Dec 13 14:41:34.820294 kernel: SCSI subsystem initialized Dec 13 14:41:34.820306 kernel: audit: type=1130 audit(1734100894.755:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:34.820314 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:41:34.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:34.698034 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:41:34.921417 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:41:34.921449 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:41:34.921480 kernel: audit: type=1130 audit(1734100894.877:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:34.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:41:34.698053 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:41:34.995663 kernel: audit: type=1130 audit(1734100894.930:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:34.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:34.699564 systemd-resolved[271]: Defaulting to hostname 'linux'. Dec 13 14:41:35.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:34.703757 systemd[1]: Started systemd-resolved.service. Dec 13 14:41:35.103255 kernel: audit: type=1130 audit(1734100895.004:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:35.103267 kernel: audit: type=1130 audit(1734100895.057:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:41:35.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:34.755626 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:41:34.877930 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:41:34.921955 systemd-modules-load[269]: Inserted module 'dm_multipath' Dec 13 14:41:34.930759 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:41:35.004743 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:41:35.057727 systemd[1]: Reached target nss-lookup.target. Dec 13 14:41:35.112073 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:41:35.119100 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:41:35.132124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:41:35.132822 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:41:35.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:35.134981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:41:35.180657 kernel: audit: type=1130 audit(1734100895.132:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:35.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:35.195783 systemd[1]: Finished dracut-cmdline-ask.service. 
Dec 13 14:41:35.262540 kernel: audit: type=1130 audit(1734100895.195:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:35.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:35.254132 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:41:35.276580 dracut-cmdline[294]: dracut-dracut-053 Dec 13 14:41:35.276580 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 14:41:35.276580 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:41:35.346523 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:41:35.346536 kernel: iscsi: registered transport (tcp) Dec 13 14:41:35.399952 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:41:35.399999 kernel: QLogic iSCSI HBA Driver Dec 13 14:41:35.416570 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:41:35.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:35.426279 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 14:41:35.483534 kernel: raid6: avx2x4 gen() 41917 MB/s
Dec 13 14:41:35.517532 kernel: raid6: avx2x4 xor() 14516 MB/s
Dec 13 14:41:35.552528 kernel: raid6: avx2x2 gen() 51713 MB/s
Dec 13 14:41:35.587528 kernel: raid6: avx2x2 xor() 32084 MB/s
Dec 13 14:41:35.622501 kernel: raid6: avx2x1 gen() 44463 MB/s
Dec 13 14:41:35.655489 kernel: raid6: avx2x1 xor() 27907 MB/s
Dec 13 14:41:35.689488 kernel: raid6: sse2x4 gen() 21358 MB/s
Dec 13 14:41:35.723489 kernel: raid6: sse2x4 xor() 11838 MB/s
Dec 13 14:41:35.757489 kernel: raid6: sse2x2 gen() 21574 MB/s
Dec 13 14:41:35.791493 kernel: raid6: sse2x2 xor() 13420 MB/s
Dec 13 14:41:35.825532 kernel: raid6: sse2x1 gen() 18265 MB/s
Dec 13 14:41:35.877046 kernel: raid6: sse2x1 xor() 8930 MB/s
Dec 13 14:41:35.877062 kernel: raid6: using algorithm avx2x2 gen() 51713 MB/s
Dec 13 14:41:35.877070 kernel: raid6: .... xor() 32084 MB/s, rmw enabled
Dec 13 14:41:35.895077 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 14:41:35.940487 kernel: xor: automatically using best checksumming function avx
Dec 13 14:41:36.020513 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:41:36.024978 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:41:36.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:36.024000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:41:36.024000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:41:36.025656 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:41:36.033408 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Dec 13 14:41:36.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:36.047827 systemd[1]: Started systemd-udevd.service.
Dec 13 14:41:36.088587 dracut-pre-trigger[489]: rd.md=0: removing MD RAID activation
Dec 13 14:41:36.064154 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:41:36.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:36.094944 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:41:36.106645 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:41:36.157428 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:41:36.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:36.184490 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:41:36.220845 kernel: ACPI: bus type USB registered
Dec 13 14:41:36.220882 kernel: usbcore: registered new interface driver usbfs
Dec 13 14:41:36.238548 kernel: usbcore: registered new interface driver hub
Dec 13 14:41:36.238582 kernel: usbcore: registered new device driver usb
Dec 13 14:41:36.267501 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:41:36.267537 kernel: libata version 3.00 loaded.
Dec 13 14:41:36.267545 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Dec 13 14:41:36.753838 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Dec 13 14:41:36.753921 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:41:36.753932 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Dec 13 14:41:36.753940 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Dec 13 14:41:36.753948 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec 13 14:41:36.754010 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Dec 13 14:41:36.754070 kernel: ahci 0000:00:17.0: version 3.0
Dec 13 14:41:36.754132 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Dec 13 14:41:36.754190 kernel: pps pps0: new PPS source ptp0
Dec 13 14:41:36.754261 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Dec 13 14:41:36.754320 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Dec 13 14:41:36.754378 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec 13 14:41:36.754445 kernel: scsi host0: ahci
Dec 13 14:41:36.754514 kernel: scsi host1: ahci
Dec 13 14:41:36.754575 kernel: scsi host2: ahci
Dec 13 14:41:36.754635 kernel: scsi host3: ahci
Dec 13 14:41:36.754697 kernel: scsi host4: ahci
Dec 13 14:41:36.754763 kernel: scsi host5: ahci
Dec 13 14:41:36.754825 kernel: scsi host6: ahci
Dec 13 14:41:36.754884 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132
Dec 13 14:41:36.754893 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132
Dec 13 14:41:36.754901 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132
Dec 13 14:41:36.754909 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132
Dec 13 14:41:36.754918 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132
Dec 13 14:41:36.754926 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132
Dec 13 14:41:36.754934 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132
Dec 13 14:41:36.754942 kernel: igb 0000:03:00.0: added PHC on eth0
Dec 13 14:41:36.755004 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Dec 13 14:41:36.755060 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 14:41:36.755118 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Dec 13 14:41:36.755173 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:24
Dec 13 14:41:36.755233 kernel: hub 1-0:1.0: USB hub found
Dec 13 14:41:36.755301 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Dec 13 14:41:36.755362 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Dec 13 14:41:36.755421 kernel: hub 1-0:1.0: 16 ports detected
Dec 13 14:41:36.755486 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Dec 13 14:41:36.755545 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Dec 13 14:41:36.755603 kernel: hub 2-0:1.0: USB hub found
Dec 13 14:41:36.755670 kernel: pps pps1: new PPS source ptp2
Dec 13 14:41:36.755734 kernel: hub 2-0:1.0: 10 ports detected
Dec 13 14:41:36.755797 kernel: igb 0000:04:00.0: added PHC on eth1
Dec 13 14:41:36.818223 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
Dec 13 14:41:36.818284 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 14:41:36.818340 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Dec 13 14:41:36.818350 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec 13 14:41:36.818358 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 14:41:36.818364 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 14:41:36.818370 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 14:41:36.818377 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 14:41:36.818383 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec 13 14:41:36.818390 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU004, max UDMA/133
Dec 13 14:41:36.818396 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
Dec 13 14:41:36.818402 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dec 13 14:41:36.818410 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Dec 13 14:41:37.542543 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Dec 13 14:41:37.542751 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:25
Dec 13 14:41:37.542952 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Dec 13 14:41:37.543148 kernel: ata1.00: Features: NCQ-prio
Dec 13 14:41:37.543167 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Dec 13 14:41:37.543371 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dec 13 14:41:37.543391 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Dec 13 14:41:37.543594 kernel: ata2.00: Features: NCQ-prio
Dec 13 14:41:37.543615 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Dec 13 14:41:37.735737 kernel: ata1.00: configured for UDMA/133
Dec 13 14:41:37.735749 kernel: ata2.00: configured for UDMA/133
Dec 13 14:41:37.735756 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U004 PQ: 0 ANSI: 5
Dec 13 14:41:37.735831 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5
Dec 13 14:41:37.735891 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Dec 13 14:41:37.735948 kernel: hub 1-14:1.0: USB hub found
Dec 13 14:41:37.736011 kernel: hub 1-14:1.0: 4 ports detected
Dec 13 14:41:37.736066 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Dec 13 14:41:37.736120 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Dec 13 14:41:37.736173 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 14:41:37.736181 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 14:41:37.736188 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Dec 13 14:41:37.736244 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Dec 13 14:41:37.736300 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Dec 13 14:41:37.736355 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 14:41:37.736407 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Dec 13 14:41:37.736465 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 14:41:37.736526 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Dec 13 14:41:37.736581 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Dec 13 14:41:37.736634 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 14:41:37.736688 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 14:41:37.736745 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 14:41:37.736752 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 14:41:37.736759 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 14:41:37.736765 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
Dec 13 14:41:37.736815 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Dec 13 14:41:37.736869 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:41:37.736877 kernel: GPT:9289727 != 937703087
Dec 13 14:41:37.736884 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:41:37.736891 kernel: GPT:9289727 != 937703087
Dec 13 14:41:37.736897 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:41:37.736903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:41:37.736909 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 14:41:37.736915 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 14:41:37.736968 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Dec 13 14:41:37.737063 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0
Dec 13 14:41:37.737119 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (548)
Dec 13 14:41:37.737129 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2
Dec 13 14:41:37.656991 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:41:37.771559 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:41:37.771571 kernel: usbcore: registered new interface driver usbhid
Dec 13 14:41:37.716589 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:41:37.821230 kernel: usbhid: USB HID core driver
Dec 13 14:41:37.821244 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Dec 13 14:41:37.736687 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:41:37.843549 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 14:41:37.754277 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:41:37.881553 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:41:37.881566 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 14:41:37.881573 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Dec 13 14:41:37.915213 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:41:37.915222 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Dec 13 14:41:37.790930 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:41:38.034512 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 14:41:38.034539 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Dec 13 14:41:38.034616 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:41:38.034627 disk-uuid[689]: Primary Header is updated.
Dec 13 14:41:38.034627 disk-uuid[689]: Secondary Entries is updated.
Dec 13 14:41:38.034627 disk-uuid[689]: Secondary Header is updated.
Dec 13 14:41:37.832044 systemd[1]: Starting disk-uuid.service...
Dec 13 14:41:38.946626 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 14:41:38.966502 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:41:38.967065 disk-uuid[690]: The operation has completed successfully.
Dec 13 14:41:39.006230 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:41:39.115025 kernel: kauditd_printk_skb: 10 callbacks suppressed
Dec 13 14:41:39.115039 kernel: audit: type=1130 audit(1734100899.013:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.115050 kernel: audit: type=1131 audit(1734100899.013:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.006277 systemd[1]: Finished disk-uuid.service.
Dec 13 14:41:39.144500 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:41:39.014154 systemd[1]: Starting verity-setup.service...
Dec 13 14:41:39.172789 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:41:39.181477 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:41:39.198773 systemd[1]: Finished verity-setup.service.
Dec 13 14:41:39.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.251493 kernel: audit: type=1130 audit(1734100899.206:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.279462 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:41:39.279807 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:41:39.287782 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:41:39.288175 systemd[1]: Starting ignition-setup.service...
Dec 13 14:41:39.377669 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:41:39.377683 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:41:39.377694 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:41:39.377701 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:41:39.327034 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:41:39.385880 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:41:39.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.402811 systemd[1]: Finished ignition-setup.service.
Dec 13 14:41:39.506787 kernel: audit: type=1130 audit(1734100899.402:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.506803 kernel: audit: type=1130 audit(1734100899.458:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.459110 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:41:39.536949 kernel: audit: type=1334 audit(1734100899.514:24): prog-id=9 op=LOAD
Dec 13 14:41:39.514000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:41:39.515411 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:41:39.550841 systemd-networkd[875]: lo: Link UP
Dec 13 14:41:39.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.599634 ignition[865]: Ignition 2.14.0
Dec 13 14:41:39.623598 kernel: audit: type=1130 audit(1734100899.559:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.550844 systemd-networkd[875]: lo: Gained carrier
Dec 13 14:41:39.599638 ignition[865]: Stage: fetch-offline
Dec 13 14:41:39.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.551139 systemd-networkd[875]: Enumeration completed
Dec 13 14:41:39.748868 kernel: audit: type=1130 audit(1734100899.637:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.748882 kernel: audit: type=1130 audit(1734100899.696:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.599662 ignition[865]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:41:39.551189 systemd[1]: Started systemd-networkd.service.
Dec 13 14:41:39.599678 ignition[865]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 14:41:39.551732 systemd-networkd[875]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:41:39.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.610686 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:41:39.859652 kernel: audit: type=1130 audit(1734100899.789:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.559650 systemd[1]: Reached target network.target.
Dec 13 14:41:39.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.872661 iscsid[898]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:41:39.872661 iscsid[898]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 14:41:39.872661 iscsid[898]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 14:41:39.872661 iscsid[898]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:41:39.872661 iscsid[898]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:41:39.872661 iscsid[898]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:41:39.872661 iscsid[898]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:41:40.034557 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Dec 13 14:41:40.034645 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready
Dec 13 14:41:40.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:39.610754 ignition[865]: parsed url from cmdline: ""
Dec 13 14:41:39.612901 unknown[865]: fetched base config from "system"
Dec 13 14:41:39.610756 ignition[865]: no config URL provided
Dec 13 14:41:39.612905 unknown[865]: fetched user config from "system"
Dec 13 14:41:39.610759 ignition[865]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:41:39.619072 systemd[1]: Starting iscsiuio.service...
Dec 13 14:41:39.610774 ignition[865]: parsing config with SHA512: 89b6b7af348accfdeb61259e3040eafdfe388250134df7028b338475a9ce49d539cca276643725401f9cfe98c54956c8259bbf34a1c2c5d375c0741c9c47692b
Dec 13 14:41:39.630736 systemd[1]: Started iscsiuio.service.
Dec 13 14:41:39.613088 ignition[865]: fetch-offline: fetch-offline passed
Dec 13 14:41:39.637773 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:41:39.613091 ignition[865]: POST message to Packet Timeline
Dec 13 14:41:39.696580 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 14:41:39.613095 ignition[865]: POST Status error: resource requires networking
Dec 13 14:41:39.697013 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:41:39.613133 ignition[865]: Ignition finished successfully
Dec 13 14:41:39.756132 systemd[1]: Starting iscsid.service...
Dec 13 14:41:39.753629 ignition[888]: Ignition 2.14.0
Dec 13 14:41:39.773756 systemd[1]: Started iscsid.service.
Dec 13 14:41:39.753633 ignition[888]: Stage: kargs
Dec 13 14:41:40.182559 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Dec 13 14:41:39.790117 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:41:39.753689 ignition[888]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:41:39.849761 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:41:39.753698 ignition[888]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 14:41:39.859739 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:41:39.755012 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:41:39.880614 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:41:39.756613 ignition[888]: kargs: kargs passed
Dec 13 14:41:39.888604 systemd[1]: Reached target remote-fs.target.
Dec 13 14:41:39.756616 ignition[888]: POST message to Packet Timeline
Dec 13 14:41:39.889079 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:41:39.756628 ignition[888]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:41:39.943984 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:41:39.759931 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47342->[::1]:53: read: connection refused
Dec 13 14:41:39.959288 systemd-networkd[875]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:41:39.960249 ignition[888]: GET https://metadata.packet.net/metadata: attempt #2
Dec 13 14:41:40.178120 systemd-networkd[875]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:41:39.960917 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54322->[::1]:53: read: connection refused
Dec 13 14:41:40.206934 systemd-networkd[875]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:41:40.235208 systemd-networkd[875]: enp1s0f1np1: Link UP
Dec 13 14:41:40.361078 ignition[888]: GET https://metadata.packet.net/metadata: attempt #3
Dec 13 14:41:40.235430 systemd-networkd[875]: enp1s0f1np1: Gained carrier
Dec 13 14:41:40.362236 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36833->[::1]:53: read: connection refused
Dec 13 14:41:40.248866 systemd-networkd[875]: enp1s0f0np0: Link UP
Dec 13 14:41:40.249168 systemd-networkd[875]: eno2: Link UP
Dec 13 14:41:40.249433 systemd-networkd[875]: eno1: Link UP
Dec 13 14:41:40.992392 systemd-networkd[875]: enp1s0f0np0: Gained carrier
Dec 13 14:41:41.001681 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready
Dec 13 14:41:41.020680 systemd-networkd[875]: enp1s0f0np0: DHCPv4 address 145.40.90.151/31, gateway 145.40.90.150 acquired from 145.40.83.140
Dec 13 14:41:41.162741 ignition[888]: GET https://metadata.packet.net/metadata: attempt #4
Dec 13 14:41:41.163271 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55024->[::1]:53: read: connection refused
Dec 13 14:41:41.353069 systemd-networkd[875]: enp1s0f1np1: Gained IPv6LL
Dec 13 14:41:42.764741 ignition[888]: GET https://metadata.packet.net/metadata: attempt #5
Dec 13 14:41:42.766148 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44001->[::1]:53: read: connection refused
Dec 13 14:41:42.825063 systemd-networkd[875]: enp1s0f0np0: Gained IPv6LL
Dec 13 14:41:45.969494 ignition[888]: GET https://metadata.packet.net/metadata: attempt #6
Dec 13 14:41:47.113771 ignition[888]: GET result: OK
Dec 13 14:41:47.432196 ignition[888]: Ignition finished successfully
Dec 13 14:41:47.434901 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:41:47.526883 kernel: kauditd_printk_skb: 2 callbacks suppressed
Dec 13 14:41:47.526899 kernel: audit: type=1130 audit(1734100907.450:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:47.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:47.459641 ignition[917]: Ignition 2.14.0
Dec 13 14:41:47.452999 systemd[1]: Starting ignition-disks.service...
Dec 13 14:41:47.459645 ignition[917]: Stage: disks
Dec 13 14:41:47.459702 ignition[917]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:41:47.459711 ignition[917]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 14:41:47.461071 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:41:47.462243 ignition[917]: disks: disks passed
Dec 13 14:41:47.462247 ignition[917]: POST message to Packet Timeline
Dec 13 14:41:47.462257 ignition[917]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:41:48.535652 ignition[917]: GET result: OK
Dec 13 14:41:48.855104 ignition[917]: Ignition finished successfully
Dec 13 14:41:48.858436 systemd[1]: Finished ignition-disks.service.
Dec 13 14:41:48.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:48.872027 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:41:48.950652 kernel: audit: type=1130 audit(1734100908.871:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:48.936687 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:41:48.936721 systemd[1]: Reached target local-fs.target.
Dec 13 14:41:48.950718 systemd[1]: Reached target sysinit.target.
Dec 13 14:41:48.977664 systemd[1]: Reached target basic.target.
Dec 13 14:41:48.991387 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:41:49.010981 systemd-fsck[932]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:41:49.022850 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:41:49.115830 kernel: audit: type=1130 audit(1734100909.031:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:49.115846 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:41:49.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:49.037374 systemd[1]: Mounting sysroot.mount...
Dec 13 14:41:49.123088 systemd[1]: Mounted sysroot.mount.
Dec 13 14:41:49.136710 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:41:49.144351 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:41:49.169274 systemd[1]: Starting flatcar-metadata-hostname.service...
Dec 13 14:41:49.177970 systemd[1]: Starting flatcar-static-network.service...
Dec 13 14:41:49.194568 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:41:49.194601 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:41:49.213853 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:41:49.237916 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:41:49.374345 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943)
Dec 13 14:41:49.374363 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:41:49.374371 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:41:49.374378 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:41:49.374385 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:41:49.248891 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:41:49.438062 kernel: audit: type=1130 audit(1734100909.382:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:49.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:49.438099 coreos-metadata[939]: Dec 13 14:41:49.343 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 14:41:49.460698 coreos-metadata[940]: Dec 13 14:41:49.342 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 14:41:49.481577 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:41:49.337358 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:41:49.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:49.532668 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:41:49.572658 kernel: audit: type=1130 audit(1734100909.506:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:41:49.383995 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:41:49.581625 initrd-setup-root[976]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:41:49.447082 systemd[1]: Starting ignition-mount.service...
Dec 13 14:41:49.598691 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:41:49.469024 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:41:49.615622 ignition[1016]: INFO : Ignition 2.14.0
Dec 13 14:41:49.615622 ignition[1016]: INFO : Stage: mount
Dec 13 14:41:49.615622 ignition[1016]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:41:49.615622 ignition[1016]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 14:41:49.615622 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:41:49.615622 ignition[1016]: INFO : mount: mount passed
Dec 13 14:41:49.615622 ignition[1016]: INFO : POST message to Packet Timeline
Dec 13 14:41:49.615622 ignition[1016]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:41:49.489510 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:41:49.706822 ignition[1016]: INFO : GET result: OK
Dec 13 14:41:49.489561 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:41:49.490355 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:41:49.994122 ignition[1016]: INFO : Ignition finished successfully
Dec 13 14:41:49.996906 systemd[1]: Finished ignition-mount.service.
Dec 13 14:41:50.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:50.069508 kernel: audit: type=1130 audit(1734100910.012:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:50.300387 coreos-metadata[939]: Dec 13 14:41:50.300 INFO Fetch successful Dec 13 14:41:50.314952 coreos-metadata[940]: Dec 13 14:41:50.314 INFO Fetch successful Dec 13 14:41:50.328095 coreos-metadata[939]: Dec 13 14:41:50.328 INFO wrote hostname ci-3510.3.6-a-c5d7845087 to /sysroot/etc/hostname Dec 13 14:41:50.328608 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 14:41:50.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:50.341783 systemd[1]: flatcar-static-network.service: Deactivated successfully. Dec 13 14:41:50.523243 kernel: audit: type=1130 audit(1734100910.341:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:50.523257 kernel: audit: type=1130 audit(1734100910.409:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:50.523265 kernel: audit: type=1131 audit(1734100910.409:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:41:50.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:50.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:50.341822 systemd[1]: Finished flatcar-static-network.service. Dec 13 14:41:50.410127 systemd[1]: Starting ignition-files.service... Dec 13 14:41:50.582545 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1030) Dec 13 14:41:50.582560 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:41:50.532289 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:41:50.655539 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:41:50.655550 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:41:50.655557 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:41:50.669444 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:41:50.686605 ignition[1049]: INFO : Ignition 2.14.0 Dec 13 14:41:50.686605 ignition[1049]: INFO : Stage: files Dec 13 14:41:50.686605 ignition[1049]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:41:50.686605 ignition[1049]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 14:41:50.686605 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 14:41:50.686605 ignition[1049]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:41:50.686605 ignition[1049]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:41:50.686605 ignition[1049]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:41:50.802690 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1057) Dec 13 14:41:50.689064 unknown[1049]: wrote ssh authorized keys file for user: core Dec 13 14:41:50.811710 ignition[1049]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1155969620" Dec 13 14:41:50.811710 ignition[1049]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1155969620": device or resource busy Dec 13 14:41:50.811710 ignition[1049]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1155969620", trying btrfs: device or resource busy Dec 13 14:41:50.811710 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1155969620" Dec 13 14:41:51.072805 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1155969620" Dec 13 14:41:51.072805 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem1155969620" Dec 13 14:41:51.072805 ignition[1049]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem1155969620" Dec 13 14:41:51.072805 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 14:41:51.072805 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:41:51.072805 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 14:41:51.252497 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 14:41:51.402021 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:41:51.402021 ignition[1049]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:41:51.402021 ignition[1049]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:41:51.402021 ignition[1049]: INFO : files: op(c): [started] processing unit "packet-phone-home.service" Dec 13 14:41:51.402021 ignition[1049]: INFO : files: op(c): [finished] processing unit "packet-phone-home.service" Dec 13 14:41:51.402021 ignition[1049]: INFO : files: op(d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:41:51.485751 ignition[1049]: INFO : files: op(d): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:41:51.485751 ignition[1049]: INFO : files: op(e): [started] setting preset to enabled for "packet-phone-home.service" Dec 13 14:41:51.485751 ignition[1049]: INFO : files: op(e): [finished] setting preset to enabled for "packet-phone-home.service" Dec 13 14:41:51.485751 
ignition[1049]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:41:51.485751 ignition[1049]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:41:51.485751 ignition[1049]: INFO : files: files passed Dec 13 14:41:51.485751 ignition[1049]: INFO : POST message to Packet Timeline Dec 13 14:41:51.485751 ignition[1049]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 14:41:52.260733 ignition[1049]: INFO : GET result: OK Dec 13 14:41:52.595372 ignition[1049]: INFO : Ignition finished successfully Dec 13 14:41:52.598368 systemd[1]: Finished ignition-files.service. Dec 13 14:41:52.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.671528 kernel: audit: type=1130 audit(1734100912.612:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.618550 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:41:52.679719 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:41:52.710700 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:41:52.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.680161 systemd[1]: Starting ignition-quench.service... 
Dec 13 14:41:52.846418 kernel: audit: type=1130 audit(1734100912.721:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.846435 kernel: audit: type=1130 audit(1734100912.789:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.689941 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:41:52.913690 kernel: audit: type=1131 audit(1734100912.789:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.721940 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:41:52.722004 systemd[1]: Finished ignition-quench.service. Dec 13 14:41:53.051630 kernel: audit: type=1130 audit(1734100912.937:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.051643 kernel: audit: type=1131 audit(1734100912.937:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:41:52.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.789736 systemd[1]: Reached target ignition-complete.target. Dec 13 14:41:52.874944 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:41:52.925642 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:41:53.156591 kernel: audit: type=1130 audit(1734100913.097:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:52.925683 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:41:52.937762 systemd[1]: Reached target initrd-fs.target. Dec 13 14:41:53.059675 systemd[1]: Reached target initrd.target. Dec 13 14:41:53.059809 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:41:53.060160 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:41:53.080863 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:41:53.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.098349 systemd[1]: Starting initrd-cleanup.service... 
Dec 13 14:41:53.316794 kernel: audit: type=1131 audit(1734100913.237:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.166688 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:41:53.179794 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:41:53.195767 systemd[1]: Stopped target timers.target. Dec 13 14:41:53.220868 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:41:53.220991 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:41:53.238045 systemd[1]: Stopped target initrd.target. Dec 13 14:41:53.309824 systemd[1]: Stopped target basic.target. Dec 13 14:41:53.324832 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:41:53.339830 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:41:53.356844 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:41:53.372898 systemd[1]: Stopped target remote-fs.target. Dec 13 14:41:53.556486 kernel: audit: type=1131 audit(1734100913.485:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.390036 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:41:53.406154 systemd[1]: Stopped target sysinit.target. Dec 13 14:41:53.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.422166 systemd[1]: Stopped target local-fs.target. 
Dec 13 14:41:53.658709 kernel: audit: type=1131 audit(1734100913.581:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.438152 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:41:53.455136 systemd[1]: Stopped target swap.target. Dec 13 14:41:53.470017 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:41:53.470380 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:41:53.486356 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:41:53.564811 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:41:53.564894 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:41:53.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.581890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:41:53.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.581967 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:41:53.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.651915 systemd[1]: Stopped target paths.target. 
Dec 13 14:41:53.813576 ignition[1098]: INFO : Ignition 2.14.0 Dec 13 14:41:53.813576 ignition[1098]: INFO : Stage: umount Dec 13 14:41:53.813576 ignition[1098]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:41:53.813576 ignition[1098]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 14:41:53.813576 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 14:41:53.813576 ignition[1098]: INFO : umount: umount passed Dec 13 14:41:53.813576 ignition[1098]: INFO : POST message to Packet Timeline Dec 13 14:41:53.813576 ignition[1098]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 14:41:53.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.665690 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Dec 13 14:41:53.669676 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:41:53.672849 systemd[1]: Stopped target slices.target. Dec 13 14:41:53.694818 systemd[1]: Stopped target sockets.target. Dec 13 14:41:53.711860 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:41:53.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:53.711950 systemd[1]: Closed iscsid.socket. Dec 13 14:41:53.725921 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:41:53.726035 systemd[1]: Closed iscsiuio.socket. Dec 13 14:41:53.740133 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:41:53.740439 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:41:53.758216 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:41:53.758587 systemd[1]: Stopped ignition-files.service. Dec 13 14:41:53.773235 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:41:53.773615 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 14:41:53.794380 systemd[1]: Stopping ignition-mount.service... Dec 13 14:41:53.806677 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:41:53.806773 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:41:53.822234 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:41:53.841608 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:41:53.841750 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:41:53.862177 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:41:53.862452 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:41:53.895387 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:41:53.896267 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Dec 13 14:41:53.896310 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:41:53.961628 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:41:53.961840 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:41:54.850816 ignition[1098]: INFO : GET result: OK Dec 13 14:41:55.163936 ignition[1098]: INFO : Ignition finished successfully Dec 13 14:41:55.165284 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:41:55.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.165527 systemd[1]: Stopped ignition-mount.service. Dec 13 14:41:55.183016 systemd[1]: Stopped target network.target. Dec 13 14:41:55.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.198765 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:41:55.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.198925 systemd[1]: Stopped ignition-disks.service. Dec 13 14:41:55.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.213808 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:41:55.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.213938 systemd[1]: Stopped ignition-kargs.service. 
Dec 13 14:41:55.229876 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:41:55.230031 systemd[1]: Stopped ignition-setup.service. Dec 13 14:41:55.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.246863 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:41:55.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.327000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:41:55.247012 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:41:55.263157 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:41:55.269618 systemd-networkd[875]: enp1s0f0np0: DHCPv6 lease lost Dec 13 14:41:55.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.277922 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:41:55.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.283681 systemd-networkd[875]: enp1s0f1np1: DHCPv6 lease lost Dec 13 14:41:55.396000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:41:55.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.293363 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:41:55.293630 systemd[1]: Stopped systemd-resolved.service. 
Dec 13 14:41:55.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.310571 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:41:55.310809 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:41:55.326814 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:41:55.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.326833 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:41:55.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.346124 systemd[1]: Stopping network-cleanup.service... Dec 13 14:41:55.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.352737 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:41:55.352774 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:41:55.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.372848 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:41:55.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:41:55.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.372921 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:41:55.389049 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:41:55.389166 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:41:55.406196 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:41:55.424782 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:41:55.426402 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:41:55.426744 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:41:55.439202 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:41:55.439325 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:41:55.451799 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:41:55.451904 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:41:55.467713 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:41:55.467839 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:41:55.482846 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:41:55.482964 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:41:55.498822 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:41:55.498959 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:41:55.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:55.515728 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:41:55.528638 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 14:41:55.528793 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:41:55.547660 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:41:55.547898 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:41:55.796781 iscsid[898]: iscsid shutting down. Dec 13 14:41:55.698649 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:41:55.698882 systemd[1]: Stopped network-cleanup.service. Dec 13 14:41:55.714112 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:41:55.731474 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:41:55.753356 systemd[1]: Switching root. Dec 13 14:41:55.796932 systemd-journald[268]: Journal stopped Dec 13 14:41:59.836823 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Dec 13 14:41:59.836837 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:41:59.836845 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:41:59.836851 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:41:59.836856 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:41:59.836861 kernel: SELinux: policy capability open_perms=1 Dec 13 14:41:59.836867 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:41:59.836872 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:41:59.836877 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:41:59.836884 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:41:59.836889 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:41:59.836894 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:41:59.836899 systemd[1]: Successfully loaded SELinux policy in 321.099ms. Dec 13 14:41:59.836906 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.211ms. 
Dec 13 14:41:59.836914 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:41:59.836923 systemd[1]: Detected architecture x86-64. Dec 13 14:41:59.836929 systemd[1]: Detected first boot. Dec 13 14:41:59.836934 systemd[1]: Hostname set to . Dec 13 14:41:59.836941 systemd[1]: Initializing machine ID from random generator. Dec 13 14:41:59.836946 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:41:59.836952 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:41:59.836959 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:41:59.836966 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:41:59.836972 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:41:59.836978 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 13 14:41:59.836984 kernel: audit: type=1334 audit(1734100918.094:90): prog-id=12 op=LOAD Dec 13 14:41:59.836989 kernel: audit: type=1334 audit(1734100918.094:91): prog-id=3 op=UNLOAD Dec 13 14:41:59.836996 kernel: audit: type=1334 audit(1734100918.163:92): prog-id=13 op=LOAD Dec 13 14:41:59.837001 kernel: audit: type=1334 audit(1734100918.185:93): prog-id=14 op=LOAD Dec 13 14:41:59.837007 systemd[1]: iscsiuio.service: Deactivated successfully. 
Dec 13 14:41:59.837013 kernel: audit: type=1334 audit(1734100918.185:94): prog-id=4 op=UNLOAD Dec 13 14:41:59.837018 systemd[1]: Stopped iscsiuio.service. Dec 13 14:41:59.837024 kernel: audit: type=1334 audit(1734100918.185:95): prog-id=5 op=UNLOAD Dec 13 14:41:59.837030 kernel: audit: type=1131 audit(1734100918.186:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:59.837035 kernel: audit: type=1334 audit(1734100918.346:97): prog-id=12 op=UNLOAD Dec 13 14:41:59.837041 kernel: audit: type=1131 audit(1734100918.353:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:59.837048 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:41:59.837054 systemd[1]: Stopped iscsid.service. Dec 13 14:41:59.837060 kernel: audit: type=1131 audit(1734100918.461:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:59.837066 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:41:59.837074 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:41:59.837080 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:41:59.837086 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:41:59.837093 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:41:59.837100 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:41:59.837106 systemd[1]: Created slice system-getty.slice. Dec 13 14:41:59.837112 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:41:59.837119 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Dec 13 14:41:59.837125 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:41:59.837131 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:41:59.837138 systemd[1]: Created slice user.slice. Dec 13 14:41:59.837145 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:41:59.837151 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:41:59.837157 systemd[1]: Set up automount boot.automount. Dec 13 14:41:59.837163 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:41:59.837170 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:41:59.837176 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:41:59.837182 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:41:59.837188 systemd[1]: Reached target integritysetup.target. Dec 13 14:41:59.837195 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:41:59.837201 systemd[1]: Reached target remote-fs.target. Dec 13 14:41:59.837207 systemd[1]: Reached target slices.target. Dec 13 14:41:59.837214 systemd[1]: Reached target swap.target. Dec 13 14:41:59.837220 systemd[1]: Reached target torcx.target. Dec 13 14:41:59.837226 systemd[1]: Reached target veritysetup.target. Dec 13 14:41:59.837234 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:41:59.837240 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:41:59.837246 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:41:59.837253 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:41:59.837259 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:41:59.837266 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:41:59.837272 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:41:59.837279 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:41:59.837286 systemd[1]: Mounting media.mount... Dec 13 14:41:59.837293 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 14:41:59.837299 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:41:59.837305 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:41:59.837312 systemd[1]: Mounting tmp.mount... Dec 13 14:41:59.837318 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:41:59.837324 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:41:59.837331 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:41:59.837337 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:41:59.837344 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:41:59.837350 systemd[1]: Starting modprobe@drm.service... Dec 13 14:41:59.837357 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:41:59.837363 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:41:59.837369 kernel: fuse: init (API version 7.34) Dec 13 14:41:59.837375 systemd[1]: Starting modprobe@loop.service... Dec 13 14:41:59.837382 kernel: loop: module loaded Dec 13 14:41:59.837388 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:41:59.837395 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:41:59.837402 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:41:59.837408 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:41:59.837415 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:41:59.837421 systemd[1]: Stopped systemd-journald.service. Dec 13 14:41:59.837427 systemd[1]: Starting systemd-journald.service... Dec 13 14:41:59.837434 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:41:59.837442 systemd-journald[1250]: Journal started Dec 13 14:41:59.837489 systemd-journald[1250]: Runtime Journal (/run/log/journal/72bf372d76a64b138384cb84ae458e7a) is 8.0M, max 640.1M, 632.1M free. 
Dec 13 14:41:56.210000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:41:56.479000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:41:56.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:41:56.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:41:56.481000 audit: BPF prog-id=10 op=LOAD Dec 13 14:41:56.481000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:41:56.481000 audit: BPF prog-id=11 op=LOAD Dec 13 14:41:56.481000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:41:56.544000 audit[1139]: AVC avc: denied { associate } for pid=1139 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:41:56.544000 audit[1139]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1122 pid=1139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:41:56.544000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:41:56.570000 audit[1139]: AVC 
avc: denied { associate } for pid=1139 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:41:56.570000 audit[1139]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79b9 a2=1ed a3=0 items=2 ppid=1122 pid=1139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:41:56.570000 audit: CWD cwd="/" Dec 13 14:41:56.570000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:41:56.570000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:41:56.570000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:41:58.094000 audit: BPF prog-id=12 op=LOAD Dec 13 14:41:58.094000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:41:58.163000 audit: BPF prog-id=13 op=LOAD Dec 13 14:41:58.185000 audit: BPF prog-id=14 op=LOAD Dec 13 14:41:58.185000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:41:58.185000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:41:58.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:41:58.346000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:41:58.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:58.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:58.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:58.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:59.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:59.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:59.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:41:59.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:41:59.809000 audit: BPF prog-id=15 op=LOAD Dec 13 14:41:59.810000 audit: BPF prog-id=16 op=LOAD Dec 13 14:41:59.810000 audit: BPF prog-id=17 op=LOAD Dec 13 14:41:59.810000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:41:59.810000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:41:59.834000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:41:59.834000 audit[1250]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd9b23b300 a2=4000 a3=7ffd9b23b39c items=0 ppid=1 pid=1250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:41:59.834000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:41:58.093325 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:41:56.543050 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:41:58.186965 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 14:41:56.543634 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:41:56.543651 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:41:56.543674 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:41:56.543681 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:41:56.543704 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:41:56.543713 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:41:56.543848 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:41:56.543875 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:41:56.543884 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:41:56.544470 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:41:56.544495 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:41:56.544509 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:41:56.544520 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:41:56.544531 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:41:56.544541 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:41:57.737959 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:41:57.738102 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:57Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:41:57.738158 
/usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:41:57.738253 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:41:57.738283 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:41:57.738316 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T14:41:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:41:59.868673 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:41:59.890475 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:41:59.911520 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:41:59.945012 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:41:59.945043 systemd[1]: Stopped verity-setup.service. Dec 13 14:41:59.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:41:59.979505 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:41:59.993530 systemd[1]: Started systemd-journald.service. Dec 13 14:42:00.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.002006 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:42:00.008713 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:42:00.015710 systemd[1]: Mounted media.mount. Dec 13 14:42:00.022711 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:42:00.031698 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:42:00.039680 systemd[1]: Mounted tmp.mount. Dec 13 14:42:00.046782 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:42:00.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.054825 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:42:00.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.063847 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:42:00.063956 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:42:00.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:42:00.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.072887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:42:00.073027 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:42:00.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.082032 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:42:00.082232 systemd[1]: Finished modprobe@drm.service. Dec 13 14:42:00.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.091138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:42:00.091386 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:42:00.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:42:00.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.100423 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:42:00.100844 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:42:00.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.111310 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:42:00.111671 systemd[1]: Finished modprobe@loop.service. Dec 13 14:42:00.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.120427 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:42:00.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.129277 systemd[1]: Finished systemd-network-generator.service. 
Dec 13 14:42:00.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.139283 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:42:00.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.149336 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:42:00.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.159006 systemd[1]: Reached target network-pre.target. Dec 13 14:42:00.170426 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:42:00.181142 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:42:00.187715 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:42:00.191800 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:42:00.199273 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:42:00.202233 systemd-journald[1250]: Time spent on flushing to /var/log/journal/72bf372d76a64b138384cb84ae458e7a is 14.762ms for 1563 entries. Dec 13 14:42:00.202233 systemd-journald[1250]: System Journal (/var/log/journal/72bf372d76a64b138384cb84ae458e7a) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:42:00.243199 systemd-journald[1250]: Received client request to flush runtime journal. Dec 13 14:42:00.216616 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 14:42:00.217933 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:42:00.227524 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:42:00.228224 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:42:00.236327 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:42:00.244071 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:42:00.251677 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:42:00.259637 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:42:00.267691 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:42:00.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.276691 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:42:00.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.285699 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:42:00.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.293679 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:42:00.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.302692 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:42:00.310785 udevadm[1266]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:42:00.492035 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:42:00.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.500000 audit: BPF prog-id=18 op=LOAD Dec 13 14:42:00.501000 audit: BPF prog-id=19 op=LOAD Dec 13 14:42:00.501000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:42:00.501000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:42:00.501951 systemd[1]: Starting systemd-udevd.service... Dec 13 14:42:00.513431 systemd-udevd[1267]: Using default interface naming scheme 'v252'. Dec 13 14:42:00.528627 systemd[1]: Started systemd-udevd.service. Dec 13 14:42:00.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.538679 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Dec 13 14:42:00.538000 audit: BPF prog-id=20 op=LOAD Dec 13 14:42:00.539760 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:42:00.561000 audit: BPF prog-id=21 op=LOAD Dec 13 14:42:00.575080 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Dec 13 14:42:00.575173 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 14:42:00.575200 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:42:00.575222 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1274) Dec 13 14:42:00.575242 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 14:42:00.609000 audit: BPF prog-id=22 op=LOAD Dec 13 14:42:00.626000 audit: BPF prog-id=23 op=LOAD Dec 13 14:42:00.627158 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:42:00.627467 kernel: IPMI message handler: version 39.2 Dec 13 14:42:00.641463 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:42:00.573000 audit[1284]: AVC avc: denied { confidentiality } for pid=1284 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:42:00.573000 audit[1284]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fd435e86010 a1=4d98c a2=7fd437b39bc5 a3=5 items=42 ppid=1267 pid=1284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:42:00.573000 audit: CWD cwd="/" Dec 13 14:42:00.573000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=1 name=(null) inode=12661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=2 name=(null) inode=12661 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=3 name=(null) inode=12662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=4 name=(null) inode=12661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=5 name=(null) inode=12663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=6 name=(null) inode=12661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=7 name=(null) inode=12664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=8 name=(null) inode=12664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=9 name=(null) inode=12665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=10 name=(null) inode=12664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=11 name=(null) inode=12666 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=12 name=(null) inode=12664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=13 name=(null) inode=12667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=14 name=(null) inode=12664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=15 name=(null) inode=12668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=16 name=(null) inode=12664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=17 name=(null) inode=12669 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=18 name=(null) inode=12661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=19 name=(null) inode=12670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=20 name=(null) inode=12670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=21 name=(null) inode=12671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=22 name=(null) inode=12670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=23 name=(null) inode=12672 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=24 name=(null) inode=12670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=25 name=(null) inode=12673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=26 name=(null) inode=12670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=27 name=(null) inode=12674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=28 name=(null) inode=12670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=29 name=(null) inode=12675 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=30 name=(null) inode=12661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=31 name=(null) inode=12676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=32 name=(null) inode=12676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=33 name=(null) inode=12677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=34 name=(null) inode=12676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=35 name=(null) inode=12678 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=36 name=(null) inode=12676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=37 name=(null) inode=12679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=38 name=(null) inode=12676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:42:00.573000 audit: PATH item=39 name=(null) inode=12680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=40 name=(null) inode=12676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PATH item=41 name=(null) inode=12681 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:42:00.573000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:42:00.672471 kernel: ipmi device interface Dec 13 14:42:00.695471 kernel: ipmi_si: IPMI System Interface driver Dec 13 14:42:00.695537 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Dec 13 14:42:00.774990 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Dec 13 14:42:00.775087 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Dec 13 14:42:00.775163 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Dec 13 14:42:00.775177 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Dec 13 14:42:00.775189 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Dec 13 14:42:00.910682 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Dec 13 14:42:00.910761 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Dec 13 14:42:00.910838 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Dec 13 14:42:00.964321 kernel: ipmi_si: Adding ACPI-specified kcs state machine Dec 13 14:42:00.964346 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Dec 13 14:42:00.964474 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Dec 13 14:42:00.964498 kernel: i2c i2c-0: 
1/4 memory slots populated (from DMI) Dec 13 14:42:00.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:00.698483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:42:00.818474 systemd[1]: Started systemd-userdbd.service. Dec 13 14:42:00.896194 systemd-networkd[1297]: bond0: netdev ready Dec 13 14:42:00.898592 systemd-networkd[1297]: lo: Link UP Dec 13 14:42:00.898595 systemd-networkd[1297]: lo: Gained carrier Dec 13 14:42:00.899155 systemd-networkd[1297]: Enumeration completed Dec 13 14:42:00.899218 systemd[1]: Started systemd-networkd.service. Dec 13 14:42:00.899487 systemd-networkd[1297]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 14:42:00.910590 systemd-networkd[1297]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:bd.network. Dec 13 14:42:00.982462 kernel: iTCO_vendor_support: vendor-support=0 Dec 13 14:42:00.982495 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Dec 13 14:42:00.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:42:01.057148 kernel: intel_rapl_common: Found RAPL domain package Dec 13 14:42:01.057223 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Dec 13 14:42:01.057319 kernel: intel_rapl_common: Found RAPL domain core Dec 13 14:42:01.074471 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Dec 13 14:42:01.091463 kernel: intel_rapl_common: Found RAPL domain dram Dec 13 14:42:01.108463 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 14:42:01.146123 systemd-networkd[1297]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:bc.network. Dec 13 14:42:01.146518 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Dec 13 14:42:01.176462 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 14:42:01.176501 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Dec 13 14:42:01.213499 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Dec 13 14:42:01.213564 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 14:42:01.311490 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 14:42:01.333534 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Dec 13 14:42:01.353503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Dec 13 14:42:01.353550 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 14:42:01.374230 systemd-networkd[1297]: bond0: Link UP Dec 13 14:42:01.374514 systemd-networkd[1297]: enp1s0f1np1: Link UP Dec 13 14:42:01.374703 systemd-networkd[1297]: enp1s0f1np1: Gained carrier Dec 13 14:42:01.376071 systemd-networkd[1297]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:84:bc.network. 
Dec 13 14:42:01.416535 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Dec 13 14:42:01.416559 kernel: bond0: active interface up! Dec 13 14:42:01.438094 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Dec 13 14:42:01.454755 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:42:01.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:01.463218 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:42:01.478957 lvm[1371]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:42:01.503460 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 14:42:01.520926 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:42:01.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:01.529589 systemd[1]: Reached target cryptsetup.target. Dec 13 14:42:01.538105 systemd[1]: Starting lvm2-activation.service... Dec 13 14:42:01.540179 lvm[1372]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:42:01.567465 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.591459 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.614461 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.637460 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.637923 systemd[1]: Finished lvm2-activation.service. 
Dec 13 14:42:01.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:01.654605 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:42:01.661460 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.677541 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:42:01.677555 systemd[1]: Reached target local-fs.target. Dec 13 14:42:01.684460 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.700557 systemd[1]: Reached target machines.target. Dec 13 14:42:01.706460 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.723174 systemd[1]: Starting ldconfig.service... Dec 13 14:42:01.728460 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.743893 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:42:01.743915 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:42:01.744476 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:42:01.750461 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.765988 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:42:01.771461 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.771855 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:42:01.772394 systemd[1]: Starting systemd-sysext.service... 
Dec 13 14:42:01.772591 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1374 (bootctl) Dec 13 14:42:01.773141 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:42:01.785956 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:42:01.793501 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.815529 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.835505 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.835630 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:42:01.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:01.835803 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:42:01.835883 systemd[1]: Unmounted usr-share-oem.mount. 
Dec 13 14:42:01.856508 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.856539 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 14:42:01.871463 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.890704 systemd-networkd[1297]: enp1s0f0np0: Link UP Dec 13 14:42:01.890906 systemd-networkd[1297]: bond0: Gained carrier Dec 13 14:42:01.891009 systemd-networkd[1297]: enp1s0f0np0: Gained carrier Dec 13 14:42:01.925461 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 14:42:01.925519 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Dec 13 14:42:01.926463 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:42:01.929753 systemd-networkd[1297]: enp1s0f1np1: Link DOWN Dec 13 14:42:01.929757 systemd-networkd[1297]: enp1s0f1np1: Lost carrier Dec 13 14:42:01.941668 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:42:01.941995 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:42:01.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:01.953221 systemd-fsck[1383]: fsck.fat 4.2 (2021-01-31) Dec 13 14:42:01.953221 systemd-fsck[1383]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:42:01.953901 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:42:01.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:01.965365 systemd[1]: Mounting boot.mount... 
Dec 13 14:42:01.988463 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 14:42:01.990251 systemd[1]: Mounted boot.mount. Dec 13 14:42:02.004771 (sd-sysext)[1388]: Using extensions 'kubernetes'. Dec 13 14:42:02.004953 (sd-sysext)[1388]: Merged extensions into '/usr'. Dec 13 14:42:02.008635 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:42:02.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.023554 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:42:02.024481 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:42:02.031741 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:42:02.032729 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:42:02.040477 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:42:02.048769 systemd[1]: Starting modprobe@loop.service... Dec 13 14:42:02.055683 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:42:02.055905 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:42:02.056123 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:42:02.061621 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:42:02.076325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:42:02.076625 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 14:42:02.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.085624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:42:02.085899 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:42:02.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.095615 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:42:02.095897 systemd[1]: Finished modprobe@loop.service. Dec 13 14:42:02.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.105678 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 14:42:02.105942 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:42:02.108011 systemd[1]: Finished systemd-sysext.service. Dec 13 14:42:02.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.124018 systemd[1]: Starting ensure-sysext.service... Dec 13 14:42:02.134496 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 14:42:02.148917 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:42:02.154468 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Dec 13 14:42:02.154524 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Dec 13 14:42:02.155823 systemd-networkd[1297]: enp1s0f1np1: Link UP Dec 13 14:42:02.156116 systemd-networkd[1297]: enp1s0f1np1: Gained carrier Dec 13 14:42:02.180808 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:42:02.183843 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:42:02.184911 ldconfig[1373]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:42:02.187754 systemd[1]: Reloading. Dec 13 14:42:02.189479 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Dec 13 14:42:02.191242 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 14:42:02.207088 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:42:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:42:02.207114 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:42:02Z" level=info msg="torcx already run" Dec 13 14:42:02.257847 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:42:02.257855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:42:02.269030 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:42:02.310000 audit: BPF prog-id=24 op=LOAD Dec 13 14:42:02.310000 audit: BPF prog-id=25 op=LOAD Dec 13 14:42:02.310000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:42:02.310000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:42:02.310000 audit: BPF prog-id=26 op=LOAD Dec 13 14:42:02.310000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:42:02.311000 audit: BPF prog-id=27 op=LOAD Dec 13 14:42:02.311000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:42:02.311000 audit: BPF prog-id=28 op=LOAD Dec 13 14:42:02.311000 audit: BPF prog-id=29 op=LOAD Dec 13 14:42:02.311000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:42:02.311000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:42:02.312000 audit: BPF prog-id=30 op=LOAD Dec 13 14:42:02.312000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:42:02.313000 audit: BPF prog-id=31 op=LOAD Dec 13 14:42:02.313000 audit: BPF prog-id=32 op=LOAD Dec 13 14:42:02.313000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:42:02.313000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:42:02.315153 systemd[1]: Finished ldconfig.service. Dec 13 14:42:02.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.322097 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:42:02.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:42:02.332929 systemd[1]: Starting audit-rules.service... Dec 13 14:42:02.341112 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:42:02.350175 systemd[1]: Starting systemd-journal-catalog-update.service... 
Dec 13 14:42:02.350000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:42:02.350000 audit[1492]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff64e6e180 a2=420 a3=0 items=0 ppid=1476 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:42:02.350000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:42:02.351088 augenrules[1492]: No rules
Dec 13 14:42:02.359534 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:42:02.368539 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:42:02.376081 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:42:02.383130 systemd[1]: Finished audit-rules.service.
Dec 13 14:42:02.389773 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:42:02.397767 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:42:02.410043 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:42:02.410674 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:42:02.418075 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:42:02.425072 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:42:02.431517 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:42:02.431600 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:42:02.432278 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:42:02.438516 systemd-resolved[1498]: Positive Trust Anchors:
Dec 13 14:42:02.438523 systemd-resolved[1498]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:42:02.438543 systemd-resolved[1498]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:42:02.438553 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:42:02.439215 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:42:02.442486 systemd-resolved[1498]: Using system hostname 'ci-3510.3.6-a-c5d7845087'.
Dec 13 14:42:02.447814 systemd[1]: Started systemd-resolved.service.
Dec 13 14:42:02.455795 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:42:02.463777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:42:02.463840 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:42:02.472789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:42:02.472852 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:42:02.480762 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:42:02.480822 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:42:02.488750 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:42:02.498729 systemd[1]: Reached target network.target.
Dec 13 14:42:02.506593 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:42:02.514593 systemd[1]: Reached target time-set.target.
Dec 13 14:42:02.522684 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:42:02.523334 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:42:02.531096 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:42:02.538048 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:42:02.544563 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:42:02.544629 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:42:02.544688 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:42:02.545203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:42:02.545268 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:42:02.553736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:42:02.553795 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:42:02.561722 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:42:02.561782 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:42:02.569720 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:42:02.569794 systemd[1]: Reached target sysinit.target.
Dec 13 14:42:02.577621 systemd[1]: Started motdgen.path.
Dec 13 14:42:02.584600 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:42:02.594666 systemd[1]: Started logrotate.timer.
Dec 13 14:42:02.601624 systemd[1]: Started mdadm.timer.
Dec 13 14:42:02.608598 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:42:02.616556 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:42:02.616619 systemd[1]: Reached target paths.target.
Dec 13 14:42:02.623574 systemd[1]: Reached target timers.target.
Dec 13 14:42:02.630709 systemd[1]: Listening on dbus.socket.
Dec 13 14:42:02.638042 systemd[1]: Starting docker.socket...
Dec 13 14:42:02.645973 systemd[1]: Listening on sshd.socket.
Dec 13 14:42:02.652618 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:42:02.652685 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:42:02.653383 systemd[1]: Listening on docker.socket.
Dec 13 14:42:02.661396 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:42:02.661460 systemd[1]: Reached target sockets.target.
Dec 13 14:42:02.669596 systemd[1]: Reached target basic.target.
Dec 13 14:42:02.676571 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:42:02.676625 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:42:02.677161 systemd[1]: Starting containerd.service...
Dec 13 14:42:02.684014 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:42:02.693159 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:42:02.700107 systemd[1]: Starting dbus.service...
Dec 13 14:42:02.706098 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:42:02.710120 jq[1519]: false
Dec 13 14:42:02.712926 coreos-metadata[1512]: Dec 13 14:42:02.712 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 14:42:02.713247 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:42:02.718377 dbus-daemon[1518]: [system] SELinux support is enabled
Dec 13 14:42:02.719584 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:42:02.720324 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:42:02.721142 extend-filesystems[1521]: Found loop1
Dec 13 14:42:02.743642 extend-filesystems[1521]: Found sda
Dec 13 14:42:02.743642 extend-filesystems[1521]: Found sda1
Dec 13 14:42:02.743642 extend-filesystems[1521]: Found sda2
Dec 13 14:42:02.743642 extend-filesystems[1521]: Found sda3
Dec 13 14:42:02.743642 extend-filesystems[1521]: Found usr
Dec 13 14:42:02.743642 extend-filesystems[1521]: Found sda4
Dec 13 14:42:02.743642 extend-filesystems[1521]: Found sda6
Dec 13 14:42:02.743642 extend-filesystems[1521]: Found sda7
Dec 13 14:42:02.743642 extend-filesystems[1521]: Found sda9
Dec 13 14:42:02.743642 extend-filesystems[1521]: Checking size of /dev/sda9
Dec 13 14:42:02.743642 extend-filesystems[1521]: Resized partition /dev/sda9
Dec 13 14:42:02.858579 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Dec 13 14:42:02.858601 coreos-metadata[1515]: Dec 13 14:42:02.723 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 14:42:02.729335 systemd[1]: Starting motdgen.service...
Dec 13 14:42:02.858758 extend-filesystems[1530]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:42:02.755349 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:42:02.762224 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:42:02.781204 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:42:02.799544 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:42:02.800159 systemd[1]: Starting tcsd.service...
Dec 13 14:42:02.878867 update_engine[1549]: I1213 14:42:02.863184 1549 main.cc:92] Flatcar Update Engine starting
Dec 13 14:42:02.878867 update_engine[1549]: I1213 14:42:02.866431 1549 update_check_scheduler.cc:74] Next update check in 4m48s
Dec 13 14:42:02.817851 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:42:02.879036 jq[1550]: true
Dec 13 14:42:02.818223 systemd[1]: Starting update-engine.service...
Dec 13 14:42:02.836104 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:42:02.852460 systemd[1]: Started dbus.service.
Dec 13 14:42:02.872265 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:42:02.872350 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:42:02.872567 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:42:02.872628 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:42:02.886855 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:42:02.886933 systemd[1]: Finished motdgen.service.
Dec 13 14:42:02.893724 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:42:02.893799 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:42:02.904211 jq[1552]: true
Dec 13 14:42:02.904625 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:42:02.912742 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Dec 13 14:42:02.912851 systemd[1]: Condition check resulted in tcsd.service being skipped.
Dec 13 14:42:02.913554 env[1553]: time="2024-12-13T14:42:02.913526455Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:42:02.920540 systemd-networkd[1297]: bond0: Gained IPv6LL
Dec 13 14:42:02.921869 env[1553]: time="2024-12-13T14:42:02.921849688Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:42:02.921936 env[1553]: time="2024-12-13T14:42:02.921925721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:42:02.922514 env[1553]: time="2024-12-13T14:42:02.922497702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:42:02.922514 env[1553]: time="2024-12-13T14:42:02.922511490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:42:02.922629 env[1553]: time="2024-12-13T14:42:02.922617799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:42:02.922629 env[1553]: time="2024-12-13T14:42:02.922628137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:42:02.922694 env[1553]: time="2024-12-13T14:42:02.922635402Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:42:02.922694 env[1553]: time="2024-12-13T14:42:02.922641113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:42:02.922694 env[1553]: time="2024-12-13T14:42:02.922679227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:42:02.922812 env[1553]: time="2024-12-13T14:42:02.922802007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:42:02.922883 env[1553]: time="2024-12-13T14:42:02.922872305Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:42:02.922883 env[1553]: time="2024-12-13T14:42:02.922881954Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:42:02.922940 env[1553]: time="2024-12-13T14:42:02.922907412Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:42:02.922940 env[1553]: time="2024-12-13T14:42:02.922916234Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:42:02.924588 systemd[1]: Started update-engine.service.
Dec 13 14:42:02.932757 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:42:02.933761 systemd[1]: Started locksmithd.service.
Dec 13 14:42:02.940577 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:42:02.940599 systemd[1]: Reached target system-config.target.
Dec 13 14:42:02.947621 env[1553]: time="2024-12-13T14:42:02.947608354Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:42:02.947650 env[1553]: time="2024-12-13T14:42:02.947625634Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:42:02.947650 env[1553]: time="2024-12-13T14:42:02.947633655Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:42:02.947681 env[1553]: time="2024-12-13T14:42:02.947649092Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:42:02.947681 env[1553]: time="2024-12-13T14:42:02.947657075Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:42:02.947681 env[1553]: time="2024-12-13T14:42:02.947665429Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:42:02.947681 env[1553]: time="2024-12-13T14:42:02.947672797Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:42:02.953398 env[1553]: time="2024-12-13T14:42:02.947681310Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:42:02.953398 env[1553]: time="2024-12-13T14:42:02.947688899Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:42:02.953398 env[1553]: time="2024-12-13T14:42:02.947698066Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:42:02.953398 env[1553]: time="2024-12-13T14:42:02.947705438Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:42:02.953398 env[1553]: time="2024-12-13T14:42:02.947715168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:42:02.953398 env[1553]: time="2024-12-13T14:42:02.953336978Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:42:02.953398 env[1553]: time="2024-12-13T14:42:02.953388453Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:42:02.953559 env[1553]: time="2024-12-13T14:42:02.953535382Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:42:02.953559 env[1553]: time="2024-12-13T14:42:02.953551788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953610 env[1553]: time="2024-12-13T14:42:02.953559959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:42:02.953610 env[1553]: time="2024-12-13T14:42:02.953588105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953610 env[1553]: time="2024-12-13T14:42:02.953595693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953610 env[1553]: time="2024-12-13T14:42:02.953602480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953610 env[1553]: time="2024-12-13T14:42:02.953610316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953617726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953625155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953631570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953637566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953645826Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953713847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953723266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953729966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953735927Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:42:02.953746 env[1553]: time="2024-12-13T14:42:02.953744015Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:42:02.953703 systemd[1]: Starting systemd-logind.service...
Dec 13 14:42:02.953944 env[1553]: time="2024-12-13T14:42:02.953750261Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:42:02.953944 env[1553]: time="2024-12-13T14:42:02.953761431Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:42:02.953944 env[1553]: time="2024-12-13T14:42:02.953782214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:42:02.954028 env[1553]: time="2024-12-13T14:42:02.953896159Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:42:02.954028 env[1553]: time="2024-12-13T14:42:02.953929433Z" level=info msg="Connect containerd service"
Dec 13 14:42:02.954028 env[1553]: time="2024-12-13T14:42:02.953947083Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954216419Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954312880Z" level=info msg="Start subscribing containerd event"
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954346305Z" level=info msg="Start recovering state"
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954347843Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954379999Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954390565Z" level=info msg="Start event monitor"
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954403918Z" level=info msg="Start snapshots syncer"
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954412587Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954420782Z" level=info msg="Start streaming server"
Dec 13 14:42:02.960334 env[1553]: time="2024-12-13T14:42:02.954413634Z" level=info msg="containerd successfully booted in 0.041281s"
Dec 13 14:42:02.960552 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:42:02.960581 systemd[1]: Reached target user-config.target.
Dec 13 14:42:02.963879 bash[1584]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:42:02.968501 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:42:02.968676 systemd[1]: Started containerd.service.
Dec 13 14:42:02.975716 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:42:02.976413 systemd-logind[1588]: Watching system buttons on /dev/input/event3 (Power Button)
Dec 13 14:42:02.976424 systemd-logind[1588]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 14:42:02.976434 systemd-logind[1588]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Dec 13 14:42:02.976584 systemd-logind[1588]: New seat seat0.
Dec 13 14:42:02.985785 systemd[1]: Started systemd-logind.service.
Dec 13 14:42:03.000609 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:42:03.021772 sshd_keygen[1546]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:42:03.033391 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:42:03.042383 systemd[1]: Starting issuegen.service...
Dec 13 14:42:03.049692 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:42:03.049796 systemd[1]: Finished issuegen.service.
Dec 13 14:42:03.058266 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:42:03.066724 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:42:03.076227 systemd[1]: Started getty@tty1.service.
Dec 13 14:42:03.084175 systemd[1]: Started serial-getty@ttyS1.service.
Dec 13 14:42:03.092603 systemd[1]: Reached target getty.target.
Dec 13 14:42:03.477497 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0
Dec 13 14:42:03.557525 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:1
Dec 13 14:42:03.754989 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:42:03.764904 systemd[1]: Reached target network-online.target.
Dec 13 14:42:03.774734 systemd[1]: Starting kubelet.service...
Dec 13 14:42:03.835501 kernel: EXT4-fs (sda9): resized filesystem to 116605649
Dec 13 14:42:03.867524 extend-filesystems[1530]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 14:42:03.867524 extend-filesystems[1530]: old_desc_blocks = 1, new_desc_blocks = 56
Dec 13 14:42:03.867524 extend-filesystems[1530]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long.
Dec 13 14:42:03.907553 extend-filesystems[1521]: Resized filesystem in /dev/sda9
Dec 13 14:42:03.907553 extend-filesystems[1521]: Found sdb
Dec 13 14:42:03.868071 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:42:03.868160 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:42:04.392498 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2
Dec 13 14:42:04.614292 systemd[1]: Started kubelet.service.
Dec 13 14:42:05.171438 kubelet[1623]: E1213 14:42:05.171384 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:42:05.172547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:42:05.172634 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:42:08.103738 login[1612]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 14:42:08.112021 systemd-logind[1588]: New session 1 of user core.
Dec 13 14:42:08.112390 login[1611]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 14:42:08.112498 systemd[1]: Created slice user-500.slice.
Dec 13 14:42:08.113126 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:42:08.115167 systemd-logind[1588]: New session 2 of user core.
Dec 13 14:42:08.118446 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:42:08.119124 systemd[1]: Starting user@500.service...
Dec 13 14:42:08.121118 (systemd)[1640]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:42:08.188093 systemd[1640]: Queued start job for default target default.target.
Dec 13 14:42:08.188374 systemd[1640]: Reached target paths.target.
Dec 13 14:42:08.188385 systemd[1640]: Reached target sockets.target.
Dec 13 14:42:08.188393 systemd[1640]: Reached target timers.target.
Dec 13 14:42:08.188399 systemd[1640]: Reached target basic.target.
Dec 13 14:42:08.188435 systemd[1640]: Reached target default.target.
Dec 13 14:42:08.188449 systemd[1640]: Startup finished in 64ms.
Dec 13 14:42:08.188481 systemd[1]: Started user@500.service.
Dec 13 14:42:08.189036 systemd[1]: Started session-1.scope.
Dec 13 14:42:08.189434 systemd[1]: Started session-2.scope.
Dec 13 14:42:08.617844 coreos-metadata[1515]: Dec 13 14:42:08.617 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Dec 13 14:42:08.618660 coreos-metadata[1512]: Dec 13 14:42:08.617 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Dec 13 14:42:09.583496 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2
Dec 13 14:42:09.583666 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2
Dec 13 14:42:09.618068 coreos-metadata[1512]: Dec 13 14:42:09.618 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Dec 13 14:42:09.618186 coreos-metadata[1515]: Dec 13 14:42:09.618 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Dec 13 14:42:10.164181 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:42:10.164908 systemd[1]: Started sshd@0-145.40.90.151:22-139.178.89.65:34710.service.
Dec 13 14:42:10.213011 sshd[1661]: Accepted publickey for core from 139.178.89.65 port 34710 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ
Dec 13 14:42:10.214532 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:42:10.219374 systemd-logind[1588]: New session 3 of user core.
Dec 13 14:42:10.220616 systemd[1]: Started session-3.scope.
Dec 13 14:42:10.280121 systemd[1]: Started sshd@1-145.40.90.151:22-139.178.89.65:34714.service.
Dec 13 14:42:10.310257 sshd[1666]: Accepted publickey for core from 139.178.89.65 port 34714 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ
Dec 13 14:42:10.310983 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:42:10.313090 systemd-logind[1588]: New session 4 of user core.
Dec 13 14:42:10.313607 systemd[1]: Started session-4.scope.
Dec 13 14:42:10.363836 sshd[1666]: pam_unix(sshd:session): session closed for user core
Dec 13 14:42:10.366035 systemd[1]: sshd@1-145.40.90.151:22-139.178.89.65:34714.service: Deactivated successfully.
Dec 13 14:42:10.366495 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:42:10.366966 systemd-logind[1588]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:42:10.367786 systemd[1]: Started sshd@2-145.40.90.151:22-139.178.89.65:34724.service.
Dec 13 14:42:10.368433 systemd-logind[1588]: Removed session 4.
Dec 13 14:42:10.400463 sshd[1672]: Accepted publickey for core from 139.178.89.65 port 34724 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ
Dec 13 14:42:10.401332 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:42:10.404297 systemd-logind[1588]: New session 5 of user core.
Dec 13 14:42:10.404951 systemd[1]: Started session-5.scope.
Dec 13 14:42:10.460205 sshd[1672]: pam_unix(sshd:session): session closed for user core
Dec 13 14:42:10.461413 systemd[1]: sshd@2-145.40.90.151:22-139.178.89.65:34724.service: Deactivated successfully.
Dec 13 14:42:10.461827 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:42:10.462187 systemd-logind[1588]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:42:10.462875 systemd-logind[1588]: Removed session 5.
Dec 13 14:42:10.721143 coreos-metadata[1515]: Dec 13 14:42:10.720 INFO Fetch successful
Dec 13 14:42:10.752407 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:42:10.753219 systemd[1]: Started packet-phone-home.service.
Dec 13 14:42:10.759026 curl[1680]: % Total % Received % Xferd Average Speed Time Time Time Current
Dec 13 14:42:10.759161 curl[1680]: Dload Upload Total Spent Left Speed
Dec 13 14:42:11.072688 curl[1680]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Dec 13 14:42:11.075108 systemd[1]: packet-phone-home.service: Deactivated successfully.
Dec 13 14:42:11.098186 coreos-metadata[1512]: Dec 13 14:42:11.098 INFO Fetch successful
Dec 13 14:42:11.178874 unknown[1512]: wrote ssh authorized keys file for user: core
Dec 13 14:42:11.190713 update-ssh-keys[1681]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:42:11.190929 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 14:42:11.191111 systemd[1]: Reached target multi-user.target.
Dec 13 14:42:11.191782 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:42:11.195823 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:42:11.195896 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:42:11.196043 systemd[1]: Startup finished in 1.914s (kernel) + 22.029s (initrd) + 15.324s (userspace) = 39.268s.
Dec 13 14:42:15.082249 systemd[1]: Started sshd@3-145.40.90.151:22-218.92.0.157:54820.service.
Dec 13 14:42:15.298995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:42:15.299561 systemd[1]: Stopped kubelet.service.
Dec 13 14:42:15.302919 systemd[1]: Starting kubelet.service...
Dec 13 14:42:15.510910 systemd[1]: Started kubelet.service.
Dec 13 14:42:15.535001 kubelet[1690]: E1213 14:42:15.534935 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:42:15.536850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:42:15.536922 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:42:16.070808 sshd[1684]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 14:42:18.318377 sshd[1684]: Failed password for root from 218.92.0.157 port 54820 ssh2 Dec 13 14:42:20.297600 systemd-timesyncd[1499]: Timed out waiting for reply from 173.208.172.164:123 (0.flatcar.pool.ntp.org). Dec 13 14:42:20.365351 systemd-timesyncd[1499]: Contacted time server 108.61.73.244:123 (0.flatcar.pool.ntp.org). Dec 13 14:42:20.365525 systemd-timesyncd[1499]: Initial clock synchronization to Fri 2024-12-13 14:42:20.748160 UTC. Dec 13 14:42:20.470230 systemd[1]: Started sshd@4-145.40.90.151:22-139.178.89.65:50796.service. Dec 13 14:42:20.500867 sshd[1708]: Accepted publickey for core from 139.178.89.65 port 50796 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 14:42:20.501794 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:42:20.504698 systemd-logind[1588]: New session 6 of user core. Dec 13 14:42:20.505381 systemd[1]: Started session-6.scope. Dec 13 14:42:20.560074 sshd[1708]: pam_unix(sshd:session): session closed for user core Dec 13 14:42:20.561816 systemd[1]: sshd@4-145.40.90.151:22-139.178.89.65:50796.service: Deactivated successfully. Dec 13 14:42:20.562150 systemd[1]: session-6.scope: Deactivated successfully. 
Dec 13 14:42:20.562418 systemd-logind[1588]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:42:20.562940 systemd[1]: Started sshd@5-145.40.90.151:22-139.178.89.65:50802.service. Dec 13 14:42:20.563355 systemd-logind[1588]: Removed session 6. Dec 13 14:42:20.593349 sshd[1714]: Accepted publickey for core from 139.178.89.65 port 50802 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 14:42:20.594378 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:42:20.597419 systemd-logind[1588]: New session 7 of user core. Dec 13 14:42:20.598196 systemd[1]: Started session-7.scope. Dec 13 14:42:20.651723 sshd[1714]: pam_unix(sshd:session): session closed for user core Dec 13 14:42:20.653390 systemd[1]: sshd@5-145.40.90.151:22-139.178.89.65:50802.service: Deactivated successfully. Dec 13 14:42:20.653731 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:42:20.654040 systemd-logind[1588]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:42:20.654594 systemd[1]: Started sshd@6-145.40.90.151:22-139.178.89.65:50810.service. Dec 13 14:42:20.655007 systemd-logind[1588]: Removed session 7. Dec 13 14:42:20.685568 sshd[1721]: Accepted publickey for core from 139.178.89.65 port 50810 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 14:42:20.686603 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:42:20.689959 systemd-logind[1588]: New session 8 of user core. Dec 13 14:42:20.690771 systemd[1]: Started session-8.scope. Dec 13 14:42:20.758880 sshd[1721]: pam_unix(sshd:session): session closed for user core Dec 13 14:42:20.765495 systemd[1]: sshd@6-145.40.90.151:22-139.178.89.65:50810.service: Deactivated successfully. Dec 13 14:42:20.767085 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:42:20.768892 systemd-logind[1588]: Session 8 logged out. Waiting for processes to exit. 
Dec 13 14:42:20.771546 systemd[1]: Started sshd@7-145.40.90.151:22-139.178.89.65:50826.service. Dec 13 14:42:20.774032 systemd-logind[1588]: Removed session 8. Dec 13 14:42:20.805615 sshd[1727]: Accepted publickey for core from 139.178.89.65 port 50826 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 14:42:20.806443 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:42:20.809059 systemd-logind[1588]: New session 9 of user core. Dec 13 14:42:20.809679 systemd[1]: Started session-9.scope. Dec 13 14:42:20.874242 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:42:20.874382 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:42:21.502864 systemd[1]: Stopped kubelet.service. Dec 13 14:42:21.504213 systemd[1]: Starting kubelet.service... Dec 13 14:42:21.518287 systemd[1]: Reloading. Dec 13 14:42:21.548371 /usr/lib/systemd/system-generators/torcx-generator[1809]: time="2024-12-13T14:42:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:42:21.548393 /usr/lib/systemd/system-generators/torcx-generator[1809]: time="2024-12-13T14:42:21Z" level=info msg="torcx already run" Dec 13 14:42:21.604377 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:42:21.604385 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 14:42:21.617797 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:42:21.710932 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:42:21.710973 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 14:42:21.711074 systemd[1]: Stopped kubelet.service. Dec 13 14:42:21.711936 systemd[1]: Starting kubelet.service... Dec 13 14:42:21.920689 systemd[1]: Started kubelet.service. Dec 13 14:42:21.948402 kubelet[1874]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:42:21.948402 kubelet[1874]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:42:21.948402 kubelet[1874]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:42:21.948680 kubelet[1874]: I1213 14:42:21.948414 1874 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:42:22.281393 kubelet[1874]: I1213 14:42:22.281319 1874 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 14:42:22.281393 kubelet[1874]: I1213 14:42:22.281334 1874 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:42:22.281601 kubelet[1874]: I1213 14:42:22.281594 1874 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 14:42:22.303661 kubelet[1874]: I1213 14:42:22.303629 1874 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:42:22.309191 kubelet[1874]: E1213 14:42:22.309155 1874 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 14:42:22.309191 kubelet[1874]: I1213 14:42:22.309170 1874 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 14:42:22.332727 kubelet[1874]: I1213 14:42:22.332677 1874 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:42:22.333907 kubelet[1874]: I1213 14:42:22.333862 1874 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 14:42:22.333974 kubelet[1874]: I1213 14:42:22.333956 1874 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:42:22.334142 kubelet[1874]: I1213 14:42:22.333977 1874 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.67.80.13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolic
yOptions":null,"CgroupVersion":2} Dec 13 14:42:22.334142 kubelet[1874]: I1213 14:42:22.334120 1874 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:42:22.334142 kubelet[1874]: I1213 14:42:22.334128 1874 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 14:42:22.334311 kubelet[1874]: I1213 14:42:22.334191 1874 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:42:22.338247 kubelet[1874]: I1213 14:42:22.338202 1874 kubelet.go:408] "Attempting to sync node with API server" Dec 13 14:42:22.338247 kubelet[1874]: I1213 14:42:22.338222 1874 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:42:22.338247 kubelet[1874]: I1213 14:42:22.338248 1874 kubelet.go:314] "Adding apiserver pod source" Dec 13 14:42:22.338343 kubelet[1874]: I1213 14:42:22.338260 1874 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:42:22.338343 kubelet[1874]: E1213 14:42:22.338272 1874 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:22.338343 kubelet[1874]: E1213 14:42:22.338329 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:22.350844 kubelet[1874]: I1213 14:42:22.350795 1874 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:42:22.353119 kubelet[1874]: I1213 14:42:22.353098 1874 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:42:22.353184 kubelet[1874]: W1213 14:42:22.353152 1874 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:42:22.353659 kubelet[1874]: I1213 14:42:22.353619 1874 server.go:1269] "Started kubelet" Dec 13 14:42:22.353855 kubelet[1874]: I1213 14:42:22.353777 1874 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:42:22.353855 kubelet[1874]: I1213 14:42:22.353790 1874 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:42:22.354151 kubelet[1874]: I1213 14:42:22.354104 1874 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:42:22.355178 kubelet[1874]: I1213 14:42:22.355162 1874 server.go:460] "Adding debug handlers to kubelet server" Dec 13 14:42:22.355260 kubelet[1874]: E1213 14:42:22.355221 1874 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:42:22.364593 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 14:42:22.364677 kubelet[1874]: I1213 14:42:22.364636 1874 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:42:22.364677 kubelet[1874]: I1213 14:42:22.364668 1874 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:42:22.364802 kubelet[1874]: I1213 14:42:22.364726 1874 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:42:22.364802 kubelet[1874]: E1213 14:42:22.364769 1874 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.80.13\" not found" Dec 13 14:42:22.364924 kubelet[1874]: I1213 14:42:22.364804 1874 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:42:22.364924 kubelet[1874]: I1213 14:42:22.364842 1874 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:42:22.365196 kubelet[1874]: I1213 14:42:22.365155 1874 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:42:22.365262 kubelet[1874]: I1213 14:42:22.365248 1874 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:42:22.366163 kubelet[1874]: I1213 14:42:22.366145 1874 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:42:22.371019 kubelet[1874]: E1213 14:42:22.370968 1874 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.13\" not found" node="10.67.80.13" Dec 13 14:42:22.376129 kubelet[1874]: I1213 14:42:22.376083 1874 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:42:22.376129 kubelet[1874]: I1213 14:42:22.376106 1874 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:42:22.376228 kubelet[1874]: I1213 14:42:22.376135 1874 state_mem.go:36] "Initialized new in-memory 
state store" Dec 13 14:42:22.377631 kubelet[1874]: I1213 14:42:22.377621 1874 policy_none.go:49] "None policy: Start" Dec 13 14:42:22.377881 kubelet[1874]: I1213 14:42:22.377871 1874 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:42:22.377918 kubelet[1874]: I1213 14:42:22.377886 1874 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:42:22.381281 systemd[1]: Created slice kubepods.slice. Dec 13 14:42:22.383404 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:42:22.384888 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:42:22.399032 kubelet[1874]: I1213 14:42:22.398976 1874 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:42:22.399083 kubelet[1874]: I1213 14:42:22.399061 1874 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 14:42:22.399083 kubelet[1874]: I1213 14:42:22.399068 1874 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:42:22.399193 kubelet[1874]: I1213 14:42:22.399144 1874 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:42:22.399717 kubelet[1874]: E1213 14:42:22.399709 1874 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.13\" not found" Dec 13 14:42:22.482244 kubelet[1874]: I1213 14:42:22.482216 1874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:42:22.483048 kubelet[1874]: I1213 14:42:22.483033 1874 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:42:22.483088 kubelet[1874]: I1213 14:42:22.483052 1874 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:42:22.483088 kubelet[1874]: I1213 14:42:22.483065 1874 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 14:42:22.483161 kubelet[1874]: E1213 14:42:22.483097 1874 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:42:22.499953 kubelet[1874]: I1213 14:42:22.499938 1874 kubelet_node_status.go:72] "Attempting to register node" node="10.67.80.13" Dec 13 14:42:22.506903 kubelet[1874]: I1213 14:42:22.506862 1874 kubelet_node_status.go:75] "Successfully registered node" node="10.67.80.13" Dec 13 14:42:22.519036 kubelet[1874]: I1213 14:42:22.518956 1874 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:42:22.519823 env[1553]: time="2024-12-13T14:42:22.519707612Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:42:22.520543 kubelet[1874]: I1213 14:42:22.520162 1874 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:42:22.826642 sshd[1684]: Failed password for root from 218.92.0.157 port 54820 ssh2 Dec 13 14:42:22.948086 sudo[1730]: pam_unix(sudo:session): session closed for user root Dec 13 14:42:22.953000 sshd[1727]: pam_unix(sshd:session): session closed for user core Dec 13 14:42:22.959076 systemd[1]: sshd@7-145.40.90.151:22-139.178.89.65:50826.service: Deactivated successfully. Dec 13 14:42:22.960870 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:42:22.962862 systemd-logind[1588]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:42:22.965148 systemd-logind[1588]: Removed session 9. 
Dec 13 14:42:23.284306 kubelet[1874]: I1213 14:42:23.284062 1874 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:42:23.285166 kubelet[1874]: W1213 14:42:23.284518 1874 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:42:23.285166 kubelet[1874]: W1213 14:42:23.284545 1874 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:42:23.285166 kubelet[1874]: W1213 14:42:23.284582 1874 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:42:23.338930 kubelet[1874]: E1213 14:42:23.338771 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:23.338930 kubelet[1874]: I1213 14:42:23.338939 1874 apiserver.go:52] "Watching apiserver" Dec 13 14:42:23.365093 systemd[1]: Created slice kubepods-besteffort-pod1b86c8af_1860_40d7_9f78_0541d7e1784f.slice. 
Dec 13 14:42:23.366314 kubelet[1874]: I1213 14:42:23.366231 1874 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 14:42:23.369899 kubelet[1874]: I1213 14:42:23.369796 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-hubble-tls\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.369899 kubelet[1874]: I1213 14:42:23.369883 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b86c8af-1860-40d7-9f78-0541d7e1784f-kube-proxy\") pod \"kube-proxy-v4w6d\" (UID: \"1b86c8af-1860-40d7-9f78-0541d7e1784f\") " pod="kube-system/kube-proxy-v4w6d" Dec 13 14:42:23.370257 kubelet[1874]: I1213 14:42:23.369940 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b86c8af-1860-40d7-9f78-0541d7e1784f-xtables-lock\") pod \"kube-proxy-v4w6d\" (UID: \"1b86c8af-1860-40d7-9f78-0541d7e1784f\") " pod="kube-system/kube-proxy-v4w6d" Dec 13 14:42:23.370257 kubelet[1874]: I1213 14:42:23.369989 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b86c8af-1860-40d7-9f78-0541d7e1784f-lib-modules\") pod \"kube-proxy-v4w6d\" (UID: \"1b86c8af-1860-40d7-9f78-0541d7e1784f\") " pod="kube-system/kube-proxy-v4w6d" Dec 13 14:42:23.370257 kubelet[1874]: I1213 14:42:23.370036 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-cgroup\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " 
pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.370257 kubelet[1874]: I1213 14:42:23.370179 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-clustermesh-secrets\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.370699 kubelet[1874]: I1213 14:42:23.370356 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-host-proc-sys-kernel\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.370699 kubelet[1874]: I1213 14:42:23.370417 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-etc-cni-netd\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.370699 kubelet[1874]: I1213 14:42:23.370467 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-xtables-lock\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.370699 kubelet[1874]: I1213 14:42:23.370551 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-config-path\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.370699 kubelet[1874]: I1213 14:42:23.370601 1874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s92tp\" (UniqueName: \"kubernetes.io/projected/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-kube-api-access-s92tp\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.371180 kubelet[1874]: I1213 14:42:23.370667 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz6hp\" (UniqueName: \"kubernetes.io/projected/1b86c8af-1860-40d7-9f78-0541d7e1784f-kube-api-access-fz6hp\") pod \"kube-proxy-v4w6d\" (UID: \"1b86c8af-1860-40d7-9f78-0541d7e1784f\") " pod="kube-system/kube-proxy-v4w6d" Dec 13 14:42:23.371180 kubelet[1874]: I1213 14:42:23.370715 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-bpf-maps\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.371180 kubelet[1874]: I1213 14:42:23.370782 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-hostproc\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.371180 kubelet[1874]: I1213 14:42:23.370867 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cni-path\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.371180 kubelet[1874]: I1213 14:42:23.370936 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-run\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.371180 kubelet[1874]: I1213 14:42:23.371010 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-lib-modules\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.371773 kubelet[1874]: I1213 14:42:23.371096 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-host-proc-sys-net\") pod \"cilium-mb5sk\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " pod="kube-system/cilium-mb5sk" Dec 13 14:42:23.393544 systemd[1]: Created slice kubepods-burstable-pod86cb27c5_4db1_4a9a_ac10_7a4cc652124c.slice. Dec 13 14:42:23.473364 kubelet[1874]: I1213 14:42:23.473244 1874 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 14:42:23.689424 env[1553]: time="2024-12-13T14:42:23.689329371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4w6d,Uid:1b86c8af-1860-40d7-9f78-0541d7e1784f,Namespace:kube-system,Attempt:0,}" Dec 13 14:42:23.717906 env[1553]: time="2024-12-13T14:42:23.717806811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mb5sk,Uid:86cb27c5-4db1-4a9a-ac10-7a4cc652124c,Namespace:kube-system,Attempt:0,}" Dec 13 14:42:24.340071 kubelet[1874]: E1213 14:42:24.339946 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:24.379565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222673397.mount: Deactivated successfully. Dec 13 14:42:24.380944 env[1553]: time="2024-12-13T14:42:24.380890293Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:24.381906 env[1553]: time="2024-12-13T14:42:24.381867901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:24.382596 env[1553]: time="2024-12-13T14:42:24.382527706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:24.383012 env[1553]: time="2024-12-13T14:42:24.382960818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:24.384338 env[1553]: time="2024-12-13T14:42:24.384291967Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:24.384730 env[1553]: time="2024-12-13T14:42:24.384687722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:24.385108 env[1553]: time="2024-12-13T14:42:24.385067807Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:24.406773 env[1553]: time="2024-12-13T14:42:24.406617228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:24.427939 env[1553]: time="2024-12-13T14:42:24.427905989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:42:24.428005 env[1553]: time="2024-12-13T14:42:24.427935381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:42:24.428005 env[1553]: time="2024-12-13T14:42:24.427945330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:42:24.428054 env[1553]: time="2024-12-13T14:42:24.428014161Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984 pid=1942 runtime=io.containerd.runc.v2 Dec 13 14:42:24.428735 env[1553]: time="2024-12-13T14:42:24.428658903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:42:24.428735 env[1553]: time="2024-12-13T14:42:24.428692741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:42:24.428735 env[1553]: time="2024-12-13T14:42:24.428699959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:42:24.428901 env[1553]: time="2024-12-13T14:42:24.428777899Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c19f162fc59b335fd66c763cf3ee9c187fe043b79fffcc5feb45a28f03a12553 pid=1950 runtime=io.containerd.runc.v2 Dec 13 14:42:24.436103 systemd[1]: Started cri-containerd-c19f162fc59b335fd66c763cf3ee9c187fe043b79fffcc5feb45a28f03a12553.scope. Dec 13 14:42:24.437584 systemd[1]: Started cri-containerd-cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984.scope. 
Dec 13 14:42:24.449302 env[1553]: time="2024-12-13T14:42:24.449264553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mb5sk,Uid:86cb27c5-4db1-4a9a-ac10-7a4cc652124c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\"" Dec 13 14:42:24.449412 env[1553]: time="2024-12-13T14:42:24.449266249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4w6d,Uid:1b86c8af-1860-40d7-9f78-0541d7e1784f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c19f162fc59b335fd66c763cf3ee9c187fe043b79fffcc5feb45a28f03a12553\"" Dec 13 14:42:24.450373 env[1553]: time="2024-12-13T14:42:24.450359455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 14:42:25.340127 kubelet[1874]: E1213 14:42:25.340106 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:25.451281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936328851.mount: Deactivated successfully. 
Dec 13 14:42:25.853797 env[1553]: time="2024-12-13T14:42:25.853774218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:25.854262 env[1553]: time="2024-12-13T14:42:25.854250602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:25.855155 env[1553]: time="2024-12-13T14:42:25.855143680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:25.855697 env[1553]: time="2024-12-13T14:42:25.855675750Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:25.856340 env[1553]: time="2024-12-13T14:42:25.856323573Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 14:42:25.856951 env[1553]: time="2024-12-13T14:42:25.856938492Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:42:25.857719 env[1553]: time="2024-12-13T14:42:25.857705966Z" level=info msg="CreateContainer within sandbox \"c19f162fc59b335fd66c763cf3ee9c187fe043b79fffcc5feb45a28f03a12553\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:42:25.863700 env[1553]: time="2024-12-13T14:42:25.863662493Z" level=info msg="CreateContainer within sandbox \"c19f162fc59b335fd66c763cf3ee9c187fe043b79fffcc5feb45a28f03a12553\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d57ca637afcc0e6dc7898d639d281c1d166c355143cd57f30a8b8fd5727fa1d\"" Dec 13 14:42:25.864151 env[1553]: time="2024-12-13T14:42:25.864105131Z" level=info msg="StartContainer for \"1d57ca637afcc0e6dc7898d639d281c1d166c355143cd57f30a8b8fd5727fa1d\"" Dec 13 14:42:25.875917 systemd[1]: Started cri-containerd-1d57ca637afcc0e6dc7898d639d281c1d166c355143cd57f30a8b8fd5727fa1d.scope. Dec 13 14:42:25.888138 env[1553]: time="2024-12-13T14:42:25.888109009Z" level=info msg="StartContainer for \"1d57ca637afcc0e6dc7898d639d281c1d166c355143cd57f30a8b8fd5727fa1d\" returns successfully" Dec 13 14:42:26.335633 sshd[1684]: Failed password for root from 218.92.0.157 port 54820 ssh2 Dec 13 14:42:26.341006 kubelet[1874]: E1213 14:42:26.340928 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:26.510569 kubelet[1874]: I1213 14:42:26.510426 1874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v4w6d" podStartSLOduration=3.103732827 podStartE2EDuration="4.510393554s" podCreationTimestamp="2024-12-13 14:42:22 +0000 UTC" firstStartedPulling="2024-12-13 14:42:24.450153666 +0000 UTC m=+2.526821260" lastFinishedPulling="2024-12-13 14:42:25.856814394 +0000 UTC m=+3.933481987" observedRunningTime="2024-12-13 14:42:26.510024696 +0000 UTC m=+4.586692356" watchObservedRunningTime="2024-12-13 14:42:26.510393554 +0000 UTC m=+4.587061217" Dec 13 14:42:26.878638 sshd[1684]: Received disconnect from 218.92.0.157 port 54820:11: [preauth] Dec 13 14:42:26.878638 sshd[1684]: Disconnected from authenticating user root 218.92.0.157 port 54820 [preauth] Dec 13 14:42:26.879279 sshd[1684]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Dec 13 14:42:26.881843 systemd[1]: sshd@3-145.40.90.151:22-218.92.0.157:54820.service: Deactivated successfully. 
Dec 13 14:42:27.341695 kubelet[1874]: E1213 14:42:27.341676 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:28.342138 kubelet[1874]: E1213 14:42:28.342057 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:29.342563 kubelet[1874]: E1213 14:42:29.342542 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:30.141079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592482771.mount: Deactivated successfully. Dec 13 14:42:30.343476 kubelet[1874]: E1213 14:42:30.343454 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:31.343996 kubelet[1874]: E1213 14:42:31.343979 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:31.818522 env[1553]: time="2024-12-13T14:42:31.818500809Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:31.819208 env[1553]: time="2024-12-13T14:42:31.819144397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:31.820543 env[1553]: time="2024-12-13T14:42:31.820473802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:31.821310 env[1553]: time="2024-12-13T14:42:31.821274974Z" level=info 
msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:42:31.823249 env[1553]: time="2024-12-13T14:42:31.823204918Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:42:31.827880 env[1553]: time="2024-12-13T14:42:31.827837432Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d\"" Dec 13 14:42:31.828147 env[1553]: time="2024-12-13T14:42:31.828083761Z" level=info msg="StartContainer for \"744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d\"" Dec 13 14:42:31.838845 systemd[1]: Started cri-containerd-744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d.scope. Dec 13 14:42:31.850721 env[1553]: time="2024-12-13T14:42:31.850697289Z" level=info msg="StartContainer for \"744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d\" returns successfully" Dec 13 14:42:31.855913 systemd[1]: cri-containerd-744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d.scope: Deactivated successfully. Dec 13 14:42:32.345113 kubelet[1874]: E1213 14:42:32.345014 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:32.830840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d-rootfs.mount: Deactivated successfully. 
Dec 13 14:42:33.227097 env[1553]: time="2024-12-13T14:42:33.226866211Z" level=info msg="shim disconnected" id=744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d Dec 13 14:42:33.227097 env[1553]: time="2024-12-13T14:42:33.226979715Z" level=warning msg="cleaning up after shim disconnected" id=744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d namespace=k8s.io Dec 13 14:42:33.227097 env[1553]: time="2024-12-13T14:42:33.227011125Z" level=info msg="cleaning up dead shim" Dec 13 14:42:33.239828 env[1553]: time="2024-12-13T14:42:33.239799480Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:42:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2242 runtime=io.containerd.runc.v2\n" Dec 13 14:42:33.345262 kubelet[1874]: E1213 14:42:33.345158 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:33.514337 env[1553]: time="2024-12-13T14:42:33.514150393Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:42:33.546371 env[1553]: time="2024-12-13T14:42:33.546334800Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb\"" Dec 13 14:42:33.546727 env[1553]: time="2024-12-13T14:42:33.546707446Z" level=info msg="StartContainer for \"d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb\"" Dec 13 14:42:33.555763 systemd[1]: Started cri-containerd-d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb.scope. 
Dec 13 14:42:33.569227 env[1553]: time="2024-12-13T14:42:33.569194830Z" level=info msg="StartContainer for \"d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb\" returns successfully" Dec 13 14:42:33.579551 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:42:33.579768 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:42:33.579951 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:42:33.581229 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:42:33.581534 systemd[1]: cri-containerd-d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb.scope: Deactivated successfully. Dec 13 14:42:33.587978 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:42:33.614128 env[1553]: time="2024-12-13T14:42:33.614068324Z" level=info msg="shim disconnected" id=d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb Dec 13 14:42:33.614313 env[1553]: time="2024-12-13T14:42:33.614126960Z" level=warning msg="cleaning up after shim disconnected" id=d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb namespace=k8s.io Dec 13 14:42:33.614313 env[1553]: time="2024-12-13T14:42:33.614144450Z" level=info msg="cleaning up dead shim" Dec 13 14:42:33.625209 env[1553]: time="2024-12-13T14:42:33.625118409Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:42:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2306 runtime=io.containerd.runc.v2\n" Dec 13 14:42:33.832070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb-rootfs.mount: Deactivated successfully. 
Dec 13 14:42:34.346017 kubelet[1874]: E1213 14:42:34.345933 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:34.520843 env[1553]: time="2024-12-13T14:42:34.520703748Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:42:34.540669 env[1553]: time="2024-12-13T14:42:34.540618479Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb\"" Dec 13 14:42:34.540975 env[1553]: time="2024-12-13T14:42:34.540931445Z" level=info msg="StartContainer for \"bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb\"" Dec 13 14:42:34.550260 systemd[1]: Started cri-containerd-bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb.scope. Dec 13 14:42:34.563448 env[1553]: time="2024-12-13T14:42:34.563395533Z" level=info msg="StartContainer for \"bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb\" returns successfully" Dec 13 14:42:34.565057 systemd[1]: cri-containerd-bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb.scope: Deactivated successfully. 
Dec 13 14:42:34.576042 env[1553]: time="2024-12-13T14:42:34.576015861Z" level=info msg="shim disconnected" id=bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb Dec 13 14:42:34.576042 env[1553]: time="2024-12-13T14:42:34.576041217Z" level=warning msg="cleaning up after shim disconnected" id=bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb namespace=k8s.io Dec 13 14:42:34.576179 env[1553]: time="2024-12-13T14:42:34.576048333Z" level=info msg="cleaning up dead shim" Dec 13 14:42:34.579423 env[1553]: time="2024-12-13T14:42:34.579406356Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:42:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2360 runtime=io.containerd.runc.v2\n" Dec 13 14:42:34.833902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb-rootfs.mount: Deactivated successfully. Dec 13 14:42:35.347365 kubelet[1874]: E1213 14:42:35.347241 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:35.529008 env[1553]: time="2024-12-13T14:42:35.528840327Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:42:35.544023 env[1553]: time="2024-12-13T14:42:35.544003889Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447\"" Dec 13 14:42:35.544251 env[1553]: time="2024-12-13T14:42:35.544237135Z" level=info msg="StartContainer for \"5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447\"" Dec 13 14:42:35.553723 systemd[1]: Started 
cri-containerd-5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447.scope. Dec 13 14:42:35.564992 env[1553]: time="2024-12-13T14:42:35.564959302Z" level=info msg="StartContainer for \"5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447\" returns successfully" Dec 13 14:42:35.565191 systemd[1]: cri-containerd-5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447.scope: Deactivated successfully. Dec 13 14:42:35.574501 env[1553]: time="2024-12-13T14:42:35.574454359Z" level=info msg="shim disconnected" id=5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447 Dec 13 14:42:35.574610 env[1553]: time="2024-12-13T14:42:35.574502573Z" level=warning msg="cleaning up after shim disconnected" id=5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447 namespace=k8s.io Dec 13 14:42:35.574610 env[1553]: time="2024-12-13T14:42:35.574509922Z" level=info msg="cleaning up dead shim" Dec 13 14:42:35.578344 env[1553]: time="2024-12-13T14:42:35.578326648Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:42:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2412 runtime=io.containerd.runc.v2\n" Dec 13 14:42:35.834707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447-rootfs.mount: Deactivated successfully. 
Dec 13 14:42:36.348036 kubelet[1874]: E1213 14:42:36.347926 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:36.527648 env[1553]: time="2024-12-13T14:42:36.527596574Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:42:36.534638 env[1553]: time="2024-12-13T14:42:36.534604158Z" level=info msg="CreateContainer within sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b\"" Dec 13 14:42:36.534931 env[1553]: time="2024-12-13T14:42:36.534904342Z" level=info msg="StartContainer for \"51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b\"" Dec 13 14:42:36.548832 systemd[1]: Started cri-containerd-51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b.scope. Dec 13 14:42:36.561767 env[1553]: time="2024-12-13T14:42:36.561737320Z" level=info msg="StartContainer for \"51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b\" returns successfully" Dec 13 14:42:36.618518 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:42:36.633174 kubelet[1874]: I1213 14:42:36.633162 1874 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 14:42:36.775472 kernel: Initializing XFRM netlink socket Dec 13 14:42:36.788524 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 14:42:37.348401 kubelet[1874]: E1213 14:42:37.348280 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:38.348684 kubelet[1874]: E1213 14:42:38.348556 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:38.400639 systemd-networkd[1297]: cilium_host: Link UP Dec 13 14:42:38.400722 systemd-networkd[1297]: cilium_net: Link UP Dec 13 14:42:38.407983 systemd-networkd[1297]: cilium_net: Gained carrier Dec 13 14:42:38.415225 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:42:38.415315 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:42:38.415331 systemd-networkd[1297]: cilium_host: Gained carrier Dec 13 14:42:38.462341 systemd-networkd[1297]: cilium_vxlan: Link UP Dec 13 14:42:38.462343 systemd-networkd[1297]: cilium_vxlan: Gained carrier Dec 13 14:42:38.536741 systemd-networkd[1297]: cilium_host: Gained IPv6LL Dec 13 14:42:38.597533 kernel: NET: Registered PF_ALG protocol family Dec 13 14:42:38.632657 systemd-networkd[1297]: cilium_net: Gained IPv6LL Dec 13 14:42:39.056356 systemd-networkd[1297]: lxc_health: Link UP Dec 13 14:42:39.077307 kubelet[1874]: I1213 14:42:39.077245 1874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mb5sk" podStartSLOduration=9.704976583 podStartE2EDuration="17.077229761s" podCreationTimestamp="2024-12-13 14:42:22 +0000 UTC" firstStartedPulling="2024-12-13 14:42:24.450183288 +0000 UTC m=+2.526850882" lastFinishedPulling="2024-12-13 14:42:31.822436469 +0000 UTC m=+9.899104060" observedRunningTime="2024-12-13 14:42:37.560558617 +0000 UTC m=+15.637226282" watchObservedRunningTime="2024-12-13 14:42:39.077229761 +0000 UTC m=+17.153897352" Dec 13 14:42:39.078355 systemd-networkd[1297]: lxc_health: Gained carrier Dec 13 14:42:39.078466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
lxc_health: link becomes ready Dec 13 14:42:39.080477 systemd[1]: Created slice kubepods-besteffort-pode54d1f5c_5b73_4967_adac_a3266feaa212.slice. Dec 13 14:42:39.084409 kubelet[1874]: I1213 14:42:39.084390 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snl7g\" (UniqueName: \"kubernetes.io/projected/e54d1f5c-5b73-4967-adac-a3266feaa212-kube-api-access-snl7g\") pod \"nginx-deployment-8587fbcb89-v8hn2\" (UID: \"e54d1f5c-5b73-4967-adac-a3266feaa212\") " pod="default/nginx-deployment-8587fbcb89-v8hn2" Dec 13 14:42:39.349569 kubelet[1874]: E1213 14:42:39.349460 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:39.382349 env[1553]: time="2024-12-13T14:42:39.382298377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-v8hn2,Uid:e54d1f5c-5b73-4967-adac-a3266feaa212,Namespace:default,Attempt:0,}" Dec 13 14:42:39.397515 systemd-networkd[1297]: lxc8d03416d5f77: Link UP Dec 13 14:42:39.418523 kernel: eth0: renamed from tmp12111 Dec 13 14:42:39.441999 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:42:39.442046 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8d03416d5f77: link becomes ready Dec 13 14:42:39.442256 systemd-networkd[1297]: lxc8d03416d5f77: Gained carrier Dec 13 14:42:39.785257 systemd-networkd[1297]: cilium_vxlan: Gained IPv6LL Dec 13 14:42:40.350057 kubelet[1874]: E1213 14:42:40.350031 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:40.488630 systemd-networkd[1297]: lxc_health: Gained IPv6LL Dec 13 14:42:41.000909 systemd-networkd[1297]: lxc8d03416d5f77: Gained IPv6LL Dec 13 14:42:41.350598 kubelet[1874]: E1213 14:42:41.350556 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:41.688314 
env[1553]: time="2024-12-13T14:42:41.688229206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:42:41.688314 env[1553]: time="2024-12-13T14:42:41.688248784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:42:41.688314 env[1553]: time="2024-12-13T14:42:41.688255600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:42:41.688544 env[1553]: time="2024-12-13T14:42:41.688343948Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12111b283d73a0b871e22bbece9af38a51f20d0343ead6de9989c49e276d0da2 pid=3064 runtime=io.containerd.runc.v2 Dec 13 14:42:41.695003 systemd[1]: Started cri-containerd-12111b283d73a0b871e22bbece9af38a51f20d0343ead6de9989c49e276d0da2.scope. 
Dec 13 14:42:41.716907 env[1553]: time="2024-12-13T14:42:41.716882056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-v8hn2,Uid:e54d1f5c-5b73-4967-adac-a3266feaa212,Namespace:default,Attempt:0,} returns sandbox id \"12111b283d73a0b871e22bbece9af38a51f20d0343ead6de9989c49e276d0da2\"" Dec 13 14:42:41.717634 env[1553]: time="2024-12-13T14:42:41.717620542Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:42:42.339339 kubelet[1874]: E1213 14:42:42.339208 1874 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:42.351422 kubelet[1874]: E1213 14:42:42.351303 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:43.351940 kubelet[1874]: E1213 14:42:43.351868 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:43.811058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864603447.mount: Deactivated successfully. 
Dec 13 14:42:44.352346 kubelet[1874]: E1213 14:42:44.352291 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:44.656003 env[1553]: time="2024-12-13T14:42:44.655929879Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:44.656718 env[1553]: time="2024-12-13T14:42:44.656706222Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:44.657727 env[1553]: time="2024-12-13T14:42:44.657716447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:44.658654 env[1553]: time="2024-12-13T14:42:44.658603956Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:44.659057 env[1553]: time="2024-12-13T14:42:44.659005536Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:42:44.660269 env[1553]: time="2024-12-13T14:42:44.660232550Z" level=info msg="CreateContainer within sandbox \"12111b283d73a0b871e22bbece9af38a51f20d0343ead6de9989c49e276d0da2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:42:44.664790 env[1553]: time="2024-12-13T14:42:44.664772756Z" level=info msg="CreateContainer within sandbox \"12111b283d73a0b871e22bbece9af38a51f20d0343ead6de9989c49e276d0da2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"1b7d46209f94fa142f4ab4a154c50936f977243ab57ab46a301db20ae3a54c82\"" Dec 13 14:42:44.665075 env[1553]: time="2024-12-13T14:42:44.665060472Z" level=info msg="StartContainer for \"1b7d46209f94fa142f4ab4a154c50936f977243ab57ab46a301db20ae3a54c82\"" Dec 13 14:42:44.673882 systemd[1]: Started cri-containerd-1b7d46209f94fa142f4ab4a154c50936f977243ab57ab46a301db20ae3a54c82.scope. Dec 13 14:42:44.685182 env[1553]: time="2024-12-13T14:42:44.685157189Z" level=info msg="StartContainer for \"1b7d46209f94fa142f4ab4a154c50936f977243ab57ab46a301db20ae3a54c82\" returns successfully" Dec 13 14:42:45.353507 kubelet[1874]: E1213 14:42:45.353352 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:45.568275 kubelet[1874]: I1213 14:42:45.568135 1874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-v8hn2" podStartSLOduration=3.625891929 podStartE2EDuration="6.568098418s" podCreationTimestamp="2024-12-13 14:42:39 +0000 UTC" firstStartedPulling="2024-12-13 14:42:41.717465977 +0000 UTC m=+19.794133579" lastFinishedPulling="2024-12-13 14:42:44.659672477 +0000 UTC m=+22.736340068" observedRunningTime="2024-12-13 14:42:45.567775406 +0000 UTC m=+23.644443059" watchObservedRunningTime="2024-12-13 14:42:45.568098418 +0000 UTC m=+23.644766052" Dec 13 14:42:46.353652 kubelet[1874]: E1213 14:42:46.353530 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:47.353990 kubelet[1874]: E1213 14:42:47.353876 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:47.932358 update_engine[1549]: I1213 14:42:47.932236 1549 update_attempter.cc:509] Updating boot flags... 
Dec 13 14:42:48.355019 kubelet[1874]: E1213 14:42:48.354907 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:49.355321 kubelet[1874]: E1213 14:42:49.355208 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:50.355821 kubelet[1874]: E1213 14:42:50.355712 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:50.957014 systemd[1]: Created slice kubepods-besteffort-pod3e9e9c33_d476_40ad_9d8c_9e027b45a5c2.slice. Dec 13 14:42:50.962549 kubelet[1874]: I1213 14:42:50.962477 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3e9e9c33-d476-40ad-9d8c-9e027b45a5c2-data\") pod \"nfs-server-provisioner-0\" (UID: \"3e9e9c33-d476-40ad-9d8c-9e027b45a5c2\") " pod="default/nfs-server-provisioner-0" Dec 13 14:42:50.962825 kubelet[1874]: I1213 14:42:50.962614 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5s9\" (UniqueName: \"kubernetes.io/projected/3e9e9c33-d476-40ad-9d8c-9e027b45a5c2-kube-api-access-wj5s9\") pod \"nfs-server-provisioner-0\" (UID: \"3e9e9c33-d476-40ad-9d8c-9e027b45a5c2\") " pod="default/nfs-server-provisioner-0" Dec 13 14:42:51.263230 env[1553]: time="2024-12-13T14:42:51.263027071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3e9e9c33-d476-40ad-9d8c-9e027b45a5c2,Namespace:default,Attempt:0,}" Dec 13 14:42:51.291103 systemd-networkd[1297]: lxcaa4360d28e8b: Link UP Dec 13 14:42:51.306471 kernel: eth0: renamed from tmpb35e0 Dec 13 14:42:51.326196 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:42:51.326268 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaa4360d28e8b: link becomes ready Dec 13 
14:42:51.326450 systemd-networkd[1297]: lxcaa4360d28e8b: Gained carrier Dec 13 14:42:51.356199 kubelet[1874]: E1213 14:42:51.356135 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:51.539039 env[1553]: time="2024-12-13T14:42:51.538921496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:42:51.539039 env[1553]: time="2024-12-13T14:42:51.538944580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:42:51.539039 env[1553]: time="2024-12-13T14:42:51.538953259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:42:51.539249 env[1553]: time="2024-12-13T14:42:51.539062743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b35e0c9abc9104f40853304f123cac8dcffccdec8c56b25966af027d6230f851 pid=3246 runtime=io.containerd.runc.v2 Dec 13 14:42:51.545771 systemd[1]: Started cri-containerd-b35e0c9abc9104f40853304f123cac8dcffccdec8c56b25966af027d6230f851.scope. 
Dec 13 14:42:51.567890 env[1553]: time="2024-12-13T14:42:51.567861849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3e9e9c33-d476-40ad-9d8c-9e027b45a5c2,Namespace:default,Attempt:0,} returns sandbox id \"b35e0c9abc9104f40853304f123cac8dcffccdec8c56b25966af027d6230f851\"" Dec 13 14:42:51.568438 env[1553]: time="2024-12-13T14:42:51.568426679Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:42:52.356505 kubelet[1874]: E1213 14:42:52.356472 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:53.032723 systemd-networkd[1297]: lxcaa4360d28e8b: Gained IPv6LL Dec 13 14:42:53.357402 kubelet[1874]: E1213 14:42:53.357350 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:53.491988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2107333100.mount: Deactivated successfully. 
Dec 13 14:42:54.357985 kubelet[1874]: E1213 14:42:54.357940 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:54.657641 env[1553]: time="2024-12-13T14:42:54.657564589Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:54.658206 env[1553]: time="2024-12-13T14:42:54.658194287Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:54.659173 env[1553]: time="2024-12-13T14:42:54.659161451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:54.660127 env[1553]: time="2024-12-13T14:42:54.660113413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:42:54.660597 env[1553]: time="2024-12-13T14:42:54.660567207Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:42:54.662880 env[1553]: time="2024-12-13T14:42:54.662682774Z" level=info msg="CreateContainer within sandbox \"b35e0c9abc9104f40853304f123cac8dcffccdec8c56b25966af027d6230f851\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:42:54.667678 env[1553]: time="2024-12-13T14:42:54.667634737Z" level=info msg="CreateContainer within sandbox 
\"b35e0c9abc9104f40853304f123cac8dcffccdec8c56b25966af027d6230f851\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"12ef7e96f03b6e688a5ae43ea7830c71fa9e03ac0803598e41ead895d34ea504\"" Dec 13 14:42:54.668025 env[1553]: time="2024-12-13T14:42:54.667945868Z" level=info msg="StartContainer for \"12ef7e96f03b6e688a5ae43ea7830c71fa9e03ac0803598e41ead895d34ea504\"" Dec 13 14:42:54.678659 systemd[1]: Started cri-containerd-12ef7e96f03b6e688a5ae43ea7830c71fa9e03ac0803598e41ead895d34ea504.scope. Dec 13 14:42:54.690099 env[1553]: time="2024-12-13T14:42:54.690060925Z" level=info msg="StartContainer for \"12ef7e96f03b6e688a5ae43ea7830c71fa9e03ac0803598e41ead895d34ea504\" returns successfully" Dec 13 14:42:55.358733 kubelet[1874]: E1213 14:42:55.358650 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:55.599548 kubelet[1874]: I1213 14:42:55.599391 1874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.506318615 podStartE2EDuration="5.599354267s" podCreationTimestamp="2024-12-13 14:42:50 +0000 UTC" firstStartedPulling="2024-12-13 14:42:51.568298844 +0000 UTC m=+29.644966434" lastFinishedPulling="2024-12-13 14:42:54.661334496 +0000 UTC m=+32.738002086" observedRunningTime="2024-12-13 14:42:55.599034544 +0000 UTC m=+33.675702196" watchObservedRunningTime="2024-12-13 14:42:55.599354267 +0000 UTC m=+33.676021902" Dec 13 14:42:56.359726 kubelet[1874]: E1213 14:42:56.359600 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:57.360294 kubelet[1874]: E1213 14:42:57.360175 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:58.361178 kubelet[1874]: E1213 14:42:58.361067 1874 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:42:59.361832 kubelet[1874]: E1213 14:42:59.361758 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:00.362565 kubelet[1874]: E1213 14:43:00.362405 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:01.362807 kubelet[1874]: E1213 14:43:01.362721 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:02.338862 kubelet[1874]: E1213 14:43:02.338736 1874 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:02.363628 kubelet[1874]: E1213 14:43:02.363507 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:03.364626 kubelet[1874]: E1213 14:43:03.364534 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:04.365635 kubelet[1874]: E1213 14:43:04.365555 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:04.829884 systemd[1]: Created slice kubepods-besteffort-pod3ad9a2c4_186f_4ea5_a158_ec0a73813706.slice. 
Dec 13 14:43:04.958181 kubelet[1874]: I1213 14:43:04.958044 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d1b9b4cd-76a8-4603-9d25-e28ab593bdf1\" (UniqueName: \"kubernetes.io/nfs/3ad9a2c4-186f-4ea5-a158-ec0a73813706-pvc-d1b9b4cd-76a8-4603-9d25-e28ab593bdf1\") pod \"test-pod-1\" (UID: \"3ad9a2c4-186f-4ea5-a158-ec0a73813706\") " pod="default/test-pod-1" Dec 13 14:43:04.958181 kubelet[1874]: I1213 14:43:04.958167 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tszc7\" (UniqueName: \"kubernetes.io/projected/3ad9a2c4-186f-4ea5-a158-ec0a73813706-kube-api-access-tszc7\") pod \"test-pod-1\" (UID: \"3ad9a2c4-186f-4ea5-a158-ec0a73813706\") " pod="default/test-pod-1" Dec 13 14:43:05.082487 kernel: FS-Cache: Loaded Dec 13 14:43:05.121773 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:43:05.121912 kernel: RPC: Registered udp transport module. Dec 13 14:43:05.121993 kernel: RPC: Registered tcp transport module. Dec 13 14:43:05.126682 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 14:43:05.189464 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:43:05.319089 kernel: NFS: Registering the id_resolver key type Dec 13 14:43:05.319138 kernel: Key type id_resolver registered Dec 13 14:43:05.319154 kernel: Key type id_legacy registered Dec 13 14:43:05.366174 kubelet[1874]: E1213 14:43:05.366101 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:05.541747 nfsidmap[3377]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-c5d7845087' Dec 13 14:43:05.622962 nfsidmap[3378]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-c5d7845087' Dec 13 14:43:05.736584 env[1553]: time="2024-12-13T14:43:05.736428880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3ad9a2c4-186f-4ea5-a158-ec0a73813706,Namespace:default,Attempt:0,}" Dec 13 14:43:05.763971 systemd-networkd[1297]: lxc5b1c24a78df0: Link UP Dec 13 14:43:05.779471 kernel: eth0: renamed from tmpf09e3 Dec 13 14:43:05.803142 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:43:05.803214 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5b1c24a78df0: link becomes ready Dec 13 14:43:05.803234 systemd-networkd[1297]: lxc5b1c24a78df0: Gained carrier Dec 13 14:43:05.918903 env[1553]: time="2024-12-13T14:43:05.918840064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:43:05.918903 env[1553]: time="2024-12-13T14:43:05.918861814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:43:05.918903 env[1553]: time="2024-12-13T14:43:05.918868795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:43:05.919028 env[1553]: time="2024-12-13T14:43:05.918962956Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f09e3281ed632fa4b2b7a2840e99160de1404623c8f9d4d79ea638af06d5cb8d pid=3437 runtime=io.containerd.runc.v2 Dec 13 14:43:05.924514 systemd[1]: Started cri-containerd-f09e3281ed632fa4b2b7a2840e99160de1404623c8f9d4d79ea638af06d5cb8d.scope. Dec 13 14:43:05.947381 env[1553]: time="2024-12-13T14:43:05.947348973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3ad9a2c4-186f-4ea5-a158-ec0a73813706,Namespace:default,Attempt:0,} returns sandbox id \"f09e3281ed632fa4b2b7a2840e99160de1404623c8f9d4d79ea638af06d5cb8d\"" Dec 13 14:43:05.948101 env[1553]: time="2024-12-13T14:43:05.948064119Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:43:06.367322 kubelet[1874]: E1213 14:43:06.367207 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:06.497922 env[1553]: time="2024-12-13T14:43:06.497799074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:43:06.500355 env[1553]: time="2024-12-13T14:43:06.500248595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:43:06.505485 env[1553]: time="2024-12-13T14:43:06.505353651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:43:06.510387 env[1553]: time="2024-12-13T14:43:06.510280681Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:43:06.512962 env[1553]: time="2024-12-13T14:43:06.512850239Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:43:06.518845 env[1553]: time="2024-12-13T14:43:06.518730236Z" level=info msg="CreateContainer within sandbox \"f09e3281ed632fa4b2b7a2840e99160de1404623c8f9d4d79ea638af06d5cb8d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:43:06.536882 env[1553]: time="2024-12-13T14:43:06.536862247Z" level=info msg="CreateContainer within sandbox \"f09e3281ed632fa4b2b7a2840e99160de1404623c8f9d4d79ea638af06d5cb8d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1fedde6c832d4e69636105c8fbef1c45d1b5230248eb7dd3c771ace9eb4a40fd\"" Dec 13 14:43:06.537303 env[1553]: time="2024-12-13T14:43:06.537259116Z" level=info msg="StartContainer for \"1fedde6c832d4e69636105c8fbef1c45d1b5230248eb7dd3c771ace9eb4a40fd\"" Dec 13 14:43:06.546749 systemd[1]: Started cri-containerd-1fedde6c832d4e69636105c8fbef1c45d1b5230248eb7dd3c771ace9eb4a40fd.scope. 
Dec 13 14:43:06.558047 env[1553]: time="2024-12-13T14:43:06.558023086Z" level=info msg="StartContainer for \"1fedde6c832d4e69636105c8fbef1c45d1b5230248eb7dd3c771ace9eb4a40fd\" returns successfully" Dec 13 14:43:06.633764 kubelet[1874]: I1213 14:43:06.633489 1874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.065394367 podStartE2EDuration="15.633430238s" podCreationTimestamp="2024-12-13 14:42:51 +0000 UTC" firstStartedPulling="2024-12-13 14:43:05.947919077 +0000 UTC m=+44.024586668" lastFinishedPulling="2024-12-13 14:43:06.515954898 +0000 UTC m=+44.592622539" observedRunningTime="2024-12-13 14:43:06.632848949 +0000 UTC m=+44.709516612" watchObservedRunningTime="2024-12-13 14:43:06.633430238 +0000 UTC m=+44.710097885" Dec 13 14:43:07.113132 systemd-networkd[1297]: lxc5b1c24a78df0: Gained IPv6LL Dec 13 14:43:07.367817 kubelet[1874]: E1213 14:43:07.367583 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:08.328521 env[1553]: time="2024-12-13T14:43:08.328425676Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:43:08.335016 env[1553]: time="2024-12-13T14:43:08.334971497Z" level=info msg="StopContainer for \"51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b\" with timeout 2 (s)" Dec 13 14:43:08.335401 env[1553]: time="2024-12-13T14:43:08.335355498Z" level=info msg="Stop container \"51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b\" with signal terminated" Dec 13 14:43:08.342553 systemd-networkd[1297]: lxc_health: Link DOWN Dec 13 14:43:08.342557 systemd-networkd[1297]: lxc_health: Lost carrier Dec 13 14:43:08.367950 kubelet[1874]: E1213 14:43:08.367796 1874 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:08.415190 systemd[1]: cri-containerd-51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b.scope: Deactivated successfully. Dec 13 14:43:08.415823 systemd[1]: cri-containerd-51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b.scope: Consumed 4.377s CPU time. Dec 13 14:43:08.458590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b-rootfs.mount: Deactivated successfully. Dec 13 14:43:09.368148 kubelet[1874]: E1213 14:43:09.368034 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:09.580234 env[1553]: time="2024-12-13T14:43:09.580078199Z" level=info msg="shim disconnected" id=51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b Dec 13 14:43:09.580234 env[1553]: time="2024-12-13T14:43:09.580190649Z" level=warning msg="cleaning up after shim disconnected" id=51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b namespace=k8s.io Dec 13 14:43:09.580234 env[1553]: time="2024-12-13T14:43:09.580222120Z" level=info msg="cleaning up dead shim" Dec 13 14:43:09.596957 env[1553]: time="2024-12-13T14:43:09.596848995Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:43:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3578 runtime=io.containerd.runc.v2\n" Dec 13 14:43:09.599842 env[1553]: time="2024-12-13T14:43:09.599731563Z" level=info msg="StopContainer for \"51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b\" returns successfully" Dec 13 14:43:09.600988 env[1553]: time="2024-12-13T14:43:09.600874458Z" level=info msg="StopPodSandbox for \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\"" Dec 13 14:43:09.601205 env[1553]: time="2024-12-13T14:43:09.601015316Z" level=info msg="Container to stop 
\"51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:43:09.601205 env[1553]: time="2024-12-13T14:43:09.601064190Z" level=info msg="Container to stop \"5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:43:09.601205 env[1553]: time="2024-12-13T14:43:09.601097860Z" level=info msg="Container to stop \"744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:43:09.601205 env[1553]: time="2024-12-13T14:43:09.601130529Z" level=info msg="Container to stop \"d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:43:09.601205 env[1553]: time="2024-12-13T14:43:09.601161214Z" level=info msg="Container to stop \"bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:43:09.606870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984-shm.mount: Deactivated successfully. Dec 13 14:43:09.615083 systemd[1]: cri-containerd-cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984.scope: Deactivated successfully. Dec 13 14:43:09.637044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984-rootfs.mount: Deactivated successfully. 
Dec 13 14:43:09.637996 env[1553]: time="2024-12-13T14:43:09.637957321Z" level=info msg="shim disconnected" id=cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984 Dec 13 14:43:09.637996 env[1553]: time="2024-12-13T14:43:09.637991777Z" level=warning msg="cleaning up after shim disconnected" id=cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984 namespace=k8s.io Dec 13 14:43:09.638073 env[1553]: time="2024-12-13T14:43:09.638000021Z" level=info msg="cleaning up dead shim" Dec 13 14:43:09.641393 env[1553]: time="2024-12-13T14:43:09.641377477Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:43:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3607 runtime=io.containerd.runc.v2\n" Dec 13 14:43:09.641549 env[1553]: time="2024-12-13T14:43:09.641537712Z" level=info msg="TearDown network for sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" successfully" Dec 13 14:43:09.641578 env[1553]: time="2024-12-13T14:43:09.641549908Z" level=info msg="StopPodSandbox for \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" returns successfully" Dec 13 14:43:09.795847 kubelet[1874]: I1213 14:43:09.795705 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-hostproc\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.795847 kubelet[1874]: I1213 14:43:09.795821 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-hubble-tls\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.796449 kubelet[1874]: I1213 14:43:09.795873 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-etc-cni-netd\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.796449 kubelet[1874]: I1213 14:43:09.795876 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-hostproc" (OuterVolumeSpecName: "hostproc") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.796449 kubelet[1874]: I1213 14:43:09.795944 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s92tp\" (UniqueName: \"kubernetes.io/projected/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-kube-api-access-s92tp\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.796449 kubelet[1874]: I1213 14:43:09.796035 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-bpf-maps\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.796449 kubelet[1874]: I1213 14:43:09.796020 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.797430 kubelet[1874]: I1213 14:43:09.796104 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.797430 kubelet[1874]: I1213 14:43:09.796114 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-run\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.797430 kubelet[1874]: I1213 14:43:09.796179 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.797430 kubelet[1874]: I1213 14:43:09.796300 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-host-proc-sys-kernel\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.797430 kubelet[1874]: I1213 14:43:09.796381 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.798092 kubelet[1874]: I1213 14:43:09.796417 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-xtables-lock\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.798092 kubelet[1874]: I1213 14:43:09.796521 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.798092 kubelet[1874]: I1213 14:43:09.796541 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-cgroup\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.798092 kubelet[1874]: I1213 14:43:09.796585 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.798092 kubelet[1874]: I1213 14:43:09.796660 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-clustermesh-secrets\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.798636 kubelet[1874]: I1213 14:43:09.796750 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-host-proc-sys-net\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.798636 kubelet[1874]: I1213 14:43:09.796850 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-config-path\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.798636 kubelet[1874]: I1213 14:43:09.796869 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.798636 kubelet[1874]: I1213 14:43:09.796933 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cni-path\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.798636 kubelet[1874]: I1213 14:43:09.797013 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-lib-modules\") pod \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\" (UID: \"86cb27c5-4db1-4a9a-ac10-7a4cc652124c\") " Dec 13 14:43:09.799143 kubelet[1874]: I1213 14:43:09.797049 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cni-path" (OuterVolumeSpecName: "cni-path") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.799143 kubelet[1874]: I1213 14:43:09.797092 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:09.799143 kubelet[1874]: I1213 14:43:09.797132 1874 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-host-proc-sys-kernel\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.799143 kubelet[1874]: I1213 14:43:09.797199 1874 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-xtables-lock\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.799143 kubelet[1874]: I1213 14:43:09.797244 1874 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-bpf-maps\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.799143 kubelet[1874]: I1213 14:43:09.797286 1874 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-run\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.799143 kubelet[1874]: I1213 14:43:09.797330 1874 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-cgroup\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.799868 kubelet[1874]: I1213 14:43:09.797373 1874 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-host-proc-sys-net\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.799868 kubelet[1874]: I1213 14:43:09.797417 1874 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-etc-cni-netd\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.799868 kubelet[1874]: I1213 
14:43:09.797475 1874 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-hostproc\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.801766 kubelet[1874]: I1213 14:43:09.801730 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:43:09.801957 kubelet[1874]: I1213 14:43:09.801919 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:43:09.802138 kubelet[1874]: I1213 14:43:09.802070 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:43:09.802138 kubelet[1874]: I1213 14:43:09.802072 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-kube-api-access-s92tp" (OuterVolumeSpecName: "kube-api-access-s92tp") pod "86cb27c5-4db1-4a9a-ac10-7a4cc652124c" (UID: "86cb27c5-4db1-4a9a-ac10-7a4cc652124c"). InnerVolumeSpecName "kube-api-access-s92tp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:43:09.802929 systemd[1]: var-lib-kubelet-pods-86cb27c5\x2d4db1\x2d4a9a\x2dac10\x2d7a4cc652124c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds92tp.mount: Deactivated successfully. Dec 13 14:43:09.802979 systemd[1]: var-lib-kubelet-pods-86cb27c5\x2d4db1\x2d4a9a\x2dac10\x2d7a4cc652124c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:43:09.803014 systemd[1]: var-lib-kubelet-pods-86cb27c5\x2d4db1\x2d4a9a\x2dac10\x2d7a4cc652124c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:43:09.897957 kubelet[1874]: I1213 14:43:09.897733 1874 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cni-path\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.897957 kubelet[1874]: I1213 14:43:09.897803 1874 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-lib-modules\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.897957 kubelet[1874]: I1213 14:43:09.897835 1874 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-cilium-config-path\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.897957 kubelet[1874]: I1213 14:43:09.897867 1874 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-s92tp\" (UniqueName: \"kubernetes.io/projected/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-kube-api-access-s92tp\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:09.897957 kubelet[1874]: I1213 14:43:09.897893 1874 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-hubble-tls\") on node \"10.67.80.13\" DevicePath 
\"\"" Dec 13 14:43:09.897957 kubelet[1874]: I1213 14:43:09.897916 1874 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86cb27c5-4db1-4a9a-ac10-7a4cc652124c-clustermesh-secrets\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:10.369227 kubelet[1874]: E1213 14:43:10.369109 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:10.495217 systemd[1]: Removed slice kubepods-burstable-pod86cb27c5_4db1_4a9a_ac10_7a4cc652124c.slice. Dec 13 14:43:10.495267 systemd[1]: kubepods-burstable-pod86cb27c5_4db1_4a9a_ac10_7a4cc652124c.slice: Consumed 4.430s CPU time. Dec 13 14:43:10.635035 kubelet[1874]: I1213 14:43:10.634822 1874 scope.go:117] "RemoveContainer" containerID="51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b" Dec 13 14:43:10.637850 env[1553]: time="2024-12-13T14:43:10.637727949Z" level=info msg="RemoveContainer for \"51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b\"" Dec 13 14:43:10.642145 env[1553]: time="2024-12-13T14:43:10.642028548Z" level=info msg="RemoveContainer for \"51a5ad868e4266175e33ecc6577280bbef3c7a62b3890d99503c84acceced01b\" returns successfully" Dec 13 14:43:10.642594 kubelet[1874]: I1213 14:43:10.642505 1874 scope.go:117] "RemoveContainer" containerID="5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447" Dec 13 14:43:10.645032 env[1553]: time="2024-12-13T14:43:10.644928270Z" level=info msg="RemoveContainer for \"5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447\"" Dec 13 14:43:10.648999 env[1553]: time="2024-12-13T14:43:10.648888047Z" level=info msg="RemoveContainer for \"5e7f894926acdc1124ac917427defa86435951cd40d36b48cba2bf95ae7f2447\" returns successfully" Dec 13 14:43:10.649431 kubelet[1874]: I1213 14:43:10.649365 1874 scope.go:117] "RemoveContainer" containerID="bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb" Dec 13 
14:43:10.652135 env[1553]: time="2024-12-13T14:43:10.652019776Z" level=info msg="RemoveContainer for \"bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb\"" Dec 13 14:43:10.656147 env[1553]: time="2024-12-13T14:43:10.656034333Z" level=info msg="RemoveContainer for \"bca5a5ae8a8e69047296fec69061c481f68939f4a5be51ea3f57ceb88f0d1ffb\" returns successfully" Dec 13 14:43:10.656428 kubelet[1874]: I1213 14:43:10.656390 1874 scope.go:117] "RemoveContainer" containerID="d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb" Dec 13 14:43:10.658819 env[1553]: time="2024-12-13T14:43:10.658698088Z" level=info msg="RemoveContainer for \"d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb\"" Dec 13 14:43:10.677513 env[1553]: time="2024-12-13T14:43:10.677391254Z" level=info msg="RemoveContainer for \"d08c38a520c5ae7ad766fb361b8ea8d23a4aaf6c25e755a442f50e34e26641fb\" returns successfully" Dec 13 14:43:10.677890 kubelet[1874]: I1213 14:43:10.677781 1874 scope.go:117] "RemoveContainer" containerID="744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d" Dec 13 14:43:10.680229 env[1553]: time="2024-12-13T14:43:10.680113284Z" level=info msg="RemoveContainer for \"744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d\"" Dec 13 14:43:10.685517 env[1553]: time="2024-12-13T14:43:10.685391026Z" level=info msg="RemoveContainer for \"744764cb2bb25dca8b3ba20c65e76ce835d64b12f2248b0303cc7f590890668d\" returns successfully" Dec 13 14:43:10.820696 kubelet[1874]: E1213 14:43:10.820588 1874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86cb27c5-4db1-4a9a-ac10-7a4cc652124c" containerName="mount-cgroup" Dec 13 14:43:10.820696 kubelet[1874]: E1213 14:43:10.820642 1874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86cb27c5-4db1-4a9a-ac10-7a4cc652124c" containerName="mount-bpf-fs" Dec 13 14:43:10.820696 kubelet[1874]: E1213 14:43:10.820663 1874 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="86cb27c5-4db1-4a9a-ac10-7a4cc652124c" containerName="clean-cilium-state" Dec 13 14:43:10.820696 kubelet[1874]: E1213 14:43:10.820680 1874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86cb27c5-4db1-4a9a-ac10-7a4cc652124c" containerName="cilium-agent" Dec 13 14:43:10.820696 kubelet[1874]: E1213 14:43:10.820697 1874 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86cb27c5-4db1-4a9a-ac10-7a4cc652124c" containerName="apply-sysctl-overwrites" Dec 13 14:43:10.821407 kubelet[1874]: I1213 14:43:10.820746 1874 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cb27c5-4db1-4a9a-ac10-7a4cc652124c" containerName="cilium-agent" Dec 13 14:43:10.835700 systemd[1]: Created slice kubepods-besteffort-pod9dd0684a_8190_4dfc_9fc4_54a5cc7eaf9d.slice. Dec 13 14:43:10.847608 systemd[1]: Created slice kubepods-burstable-podfc297468_74ea_41ba_91ea_ee74fceccb96.slice. Dec 13 14:43:10.997295 kubelet[1874]: E1213 14:43:10.997049 1874 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-zj2f8 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-dj2vs" podUID="fc297468-74ea-41ba-91ea-ee74fceccb96" Dec 13 14:43:11.007021 kubelet[1874]: I1213 14:43:11.006886 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f752c\" (UniqueName: \"kubernetes.io/projected/9dd0684a-8190-4dfc-9fc4-54a5cc7eaf9d-kube-api-access-f752c\") pod \"cilium-operator-5d85765b45-pvfsc\" (UID: \"9dd0684a-8190-4dfc-9fc4-54a5cc7eaf9d\") " pod="kube-system/cilium-operator-5d85765b45-pvfsc" Dec 13 14:43:11.007021 kubelet[1874]: I1213 14:43:11.007007 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-bpf-maps\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.007433 kubelet[1874]: I1213 14:43:11.007074 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-xtables-lock\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.007433 kubelet[1874]: I1213 14:43:11.007145 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9dd0684a-8190-4dfc-9fc4-54a5cc7eaf9d-cilium-config-path\") pod \"cilium-operator-5d85765b45-pvfsc\" (UID: \"9dd0684a-8190-4dfc-9fc4-54a5cc7eaf9d\") " pod="kube-system/cilium-operator-5d85765b45-pvfsc" Dec 13 14:43:11.007433 kubelet[1874]: I1213 14:43:11.007209 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-host-proc-sys-net\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.007433 kubelet[1874]: I1213 14:43:11.007269 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-etc-cni-netd\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.007433 kubelet[1874]: I1213 14:43:11.007327 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-lib-modules\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008005 kubelet[1874]: I1213 14:43:11.007387 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc297468-74ea-41ba-91ea-ee74fceccb96-clustermesh-secrets\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008005 kubelet[1874]: I1213 14:43:11.007454 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-host-proc-sys-kernel\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008005 kubelet[1874]: I1213 14:43:11.007535 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc297468-74ea-41ba-91ea-ee74fceccb96-hubble-tls\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008005 kubelet[1874]: I1213 14:43:11.007590 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-hostproc\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008005 kubelet[1874]: I1213 14:43:11.007698 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-cgroup\") pod \"cilium-dj2vs\" (UID: 
\"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008005 kubelet[1874]: I1213 14:43:11.007784 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cni-path\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008609 kubelet[1874]: I1213 14:43:11.007837 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-config-path\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008609 kubelet[1874]: I1213 14:43:11.007988 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-run\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008609 kubelet[1874]: I1213 14:43:11.008077 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2f8\" (UniqueName: \"kubernetes.io/projected/fc297468-74ea-41ba-91ea-ee74fceccb96-kube-api-access-zj2f8\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.008609 kubelet[1874]: I1213 14:43:11.008131 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-ipsec-secrets\") pod \"cilium-dj2vs\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " pod="kube-system/cilium-dj2vs" Dec 13 14:43:11.141134 env[1553]: 
time="2024-12-13T14:43:11.141084457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pvfsc,Uid:9dd0684a-8190-4dfc-9fc4-54a5cc7eaf9d,Namespace:kube-system,Attempt:0,}" Dec 13 14:43:11.148581 env[1553]: time="2024-12-13T14:43:11.148467103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:43:11.148581 env[1553]: time="2024-12-13T14:43:11.148496932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:43:11.148581 env[1553]: time="2024-12-13T14:43:11.148507231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:43:11.148744 env[1553]: time="2024-12-13T14:43:11.148663232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/595f1bbf9a0ff8adce31fa8dc70b8fc2d2adabb420c913a22e639ed73076767a pid=3634 runtime=io.containerd.runc.v2 Dec 13 14:43:11.159823 systemd[1]: Started cri-containerd-595f1bbf9a0ff8adce31fa8dc70b8fc2d2adabb420c913a22e639ed73076767a.scope. 
Dec 13 14:43:11.195080 env[1553]: time="2024-12-13T14:43:11.195022561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pvfsc,Uid:9dd0684a-8190-4dfc-9fc4-54a5cc7eaf9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"595f1bbf9a0ff8adce31fa8dc70b8fc2d2adabb420c913a22e639ed73076767a\"" Dec 13 14:43:11.195939 env[1553]: time="2024-12-13T14:43:11.195920847Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:43:11.369817 kubelet[1874]: E1213 14:43:11.369698 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:43:11.815824 kubelet[1874]: I1213 14:43:11.815726 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-xtables-lock\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.816142 kubelet[1874]: I1213 14:43:11.815845 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-etc-cni-netd\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.816142 kubelet[1874]: I1213 14:43:11.815871 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.816142 kubelet[1874]: I1213 14:43:11.815912 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc297468-74ea-41ba-91ea-ee74fceccb96-hubble-tls\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.816142 kubelet[1874]: I1213 14:43:11.815969 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cni-path\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.816142 kubelet[1874]: I1213 14:43:11.815985 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.816142 kubelet[1874]: I1213 14:43:11.816020 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-lib-modules\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.816900 kubelet[1874]: I1213 14:43:11.816067 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-host-proc-sys-net\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.816900 kubelet[1874]: I1213 14:43:11.816078 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cni-path" (OuterVolumeSpecName: "cni-path") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.816900 kubelet[1874]: I1213 14:43:11.816120 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-host-proc-sys-kernel\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.816900 kubelet[1874]: I1213 14:43:11.816149 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.816900 kubelet[1874]: I1213 14:43:11.816152 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.817424 kubelet[1874]: I1213 14:43:11.816178 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj2f8\" (UniqueName: \"kubernetes.io/projected/fc297468-74ea-41ba-91ea-ee74fceccb96-kube-api-access-zj2f8\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.817424 kubelet[1874]: I1213 14:43:11.816212 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.817424 kubelet[1874]: I1213 14:43:11.816298 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-bpf-maps\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.817424 kubelet[1874]: I1213 14:43:11.816374 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc297468-74ea-41ba-91ea-ee74fceccb96-clustermesh-secrets\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.817424 kubelet[1874]: I1213 14:43:11.816401 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.818003 kubelet[1874]: I1213 14:43:11.816423 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-hostproc\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.818003 kubelet[1874]: I1213 14:43:11.816501 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-cgroup\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.818003 kubelet[1874]: I1213 14:43:11.816526 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-hostproc" (OuterVolumeSpecName: "hostproc") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.818003 kubelet[1874]: I1213 14:43:11.816558 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-config-path\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.818003 kubelet[1874]: I1213 14:43:11.816610 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-run\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.818003 kubelet[1874]: I1213 14:43:11.816629 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.818623 kubelet[1874]: I1213 14:43:11.816658 1874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-ipsec-secrets\") pod \"fc297468-74ea-41ba-91ea-ee74fceccb96\" (UID: \"fc297468-74ea-41ba-91ea-ee74fceccb96\") " Dec 13 14:43:11.818623 kubelet[1874]: I1213 14:43:11.816759 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:43:11.818623 kubelet[1874]: I1213 14:43:11.816822 1874 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-cgroup\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.818623 kubelet[1874]: I1213 14:43:11.816890 1874 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-bpf-maps\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.818623 kubelet[1874]: I1213 14:43:11.816940 1874 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-hostproc\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.818623 kubelet[1874]: I1213 14:43:11.816985 1874 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-xtables-lock\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.818623 kubelet[1874]: I1213 14:43:11.817030 1874 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-etc-cni-netd\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.819300 kubelet[1874]: I1213 14:43:11.817077 1874 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cni-path\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.819300 kubelet[1874]: I1213 14:43:11.817122 1874 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-lib-modules\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.819300 kubelet[1874]: I1213 14:43:11.817166 1874 
reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-host-proc-sys-net\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.819300 kubelet[1874]: I1213 14:43:11.817222 1874 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-host-proc-sys-kernel\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.821167 kubelet[1874]: I1213 14:43:11.821135 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:43:11.821627 kubelet[1874]: I1213 14:43:11.821553 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc297468-74ea-41ba-91ea-ee74fceccb96-kube-api-access-zj2f8" (OuterVolumeSpecName: "kube-api-access-zj2f8") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "kube-api-access-zj2f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:43:11.821627 kubelet[1874]: I1213 14:43:11.821613 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc297468-74ea-41ba-91ea-ee74fceccb96-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:43:11.821714 kubelet[1874]: I1213 14:43:11.821628 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:43:11.821714 kubelet[1874]: I1213 14:43:11.821643 1874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc297468-74ea-41ba-91ea-ee74fceccb96-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fc297468-74ea-41ba-91ea-ee74fceccb96" (UID: "fc297468-74ea-41ba-91ea-ee74fceccb96"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:43:11.917633 kubelet[1874]: I1213 14:43:11.917534 1874 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc297468-74ea-41ba-91ea-ee74fceccb96-hubble-tls\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.917633 kubelet[1874]: I1213 14:43:11.917601 1874 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zj2f8\" (UniqueName: \"kubernetes.io/projected/fc297468-74ea-41ba-91ea-ee74fceccb96-kube-api-access-zj2f8\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.917633 kubelet[1874]: I1213 14:43:11.917628 1874 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc297468-74ea-41ba-91ea-ee74fceccb96-clustermesh-secrets\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.917633 kubelet[1874]: I1213 14:43:11.917651 1874 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-config-path\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.918129 kubelet[1874]: I1213 14:43:11.917674 1874 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-run\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:11.918129 kubelet[1874]: I1213 14:43:11.917694 1874 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc297468-74ea-41ba-91ea-ee74fceccb96-cilium-ipsec-secrets\") on node \"10.67.80.13\" DevicePath \"\"" Dec 13 14:43:12.120431 systemd[1]: var-lib-kubelet-pods-fc297468\x2d74ea\x2d41ba\x2d91ea\x2dee74fceccb96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzj2f8.mount: Deactivated successfully. Dec 13 14:43:12.120522 systemd[1]: var-lib-kubelet-pods-fc297468\x2d74ea\x2d41ba\x2d91ea\x2dee74fceccb96-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:43:12.120571 systemd[1]: var-lib-kubelet-pods-fc297468\x2d74ea\x2d41ba\x2d91ea\x2dee74fceccb96-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:43:12.120603 systemd[1]: var-lib-kubelet-pods-fc297468\x2d74ea\x2d41ba\x2d91ea\x2dee74fceccb96-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 14:43:12.370100 kubelet[1874]: E1213 14:43:12.370017 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:12.415741 kubelet[1874]: E1213 14:43:12.415500 1874 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:43:12.490204 kubelet[1874]: I1213 14:43:12.490095 1874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86cb27c5-4db1-4a9a-ac10-7a4cc652124c" path="/var/lib/kubelet/pods/86cb27c5-4db1-4a9a-ac10-7a4cc652124c/volumes"
Dec 13 14:43:12.499869 systemd[1]: Removed slice kubepods-burstable-podfc297468_74ea_41ba_91ea_ee74fceccb96.slice.
Dec 13 14:43:12.711341 systemd[1]: Created slice kubepods-burstable-pod512e9b77_6d32_46e8_8b59_41093be356ff.slice.
Dec 13 14:43:12.824716 kubelet[1874]: I1213 14:43:12.824581 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-hostproc\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.824716 kubelet[1874]: I1213 14:43:12.824711 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-cni-path\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825135 kubelet[1874]: I1213 14:43:12.824779 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/512e9b77-6d32-46e8-8b59-41093be356ff-cilium-ipsec-secrets\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825135 kubelet[1874]: I1213 14:43:12.824843 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-cilium-run\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825135 kubelet[1874]: I1213 14:43:12.824904 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-bpf-maps\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825135 kubelet[1874]: I1213 14:43:12.824961 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-cilium-cgroup\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825135 kubelet[1874]: I1213 14:43:12.825017 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-host-proc-sys-net\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825135 kubelet[1874]: I1213 14:43:12.825076 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-etc-cni-netd\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825812 kubelet[1874]: I1213 14:43:12.825140 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-xtables-lock\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825812 kubelet[1874]: I1213 14:43:12.825189 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/512e9b77-6d32-46e8-8b59-41093be356ff-cilium-config-path\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825812 kubelet[1874]: I1213 14:43:12.825250 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/512e9b77-6d32-46e8-8b59-41093be356ff-hubble-tls\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825812 kubelet[1874]: I1213 14:43:12.825316 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-lib-modules\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825812 kubelet[1874]: I1213 14:43:12.825370 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/512e9b77-6d32-46e8-8b59-41093be356ff-clustermesh-secrets\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.825812 kubelet[1874]: I1213 14:43:12.825431 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/512e9b77-6d32-46e8-8b59-41093be356ff-host-proc-sys-kernel\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:12.826492 kubelet[1874]: I1213 14:43:12.825521 1874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcwdv\" (UniqueName: \"kubernetes.io/projected/512e9b77-6d32-46e8-8b59-41093be356ff-kube-api-access-pcwdv\") pod \"cilium-nkt7s\" (UID: \"512e9b77-6d32-46e8-8b59-41093be356ff\") " pod="kube-system/cilium-nkt7s"
Dec 13 14:43:13.030373 env[1553]: time="2024-12-13T14:43:13.030138429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkt7s,Uid:512e9b77-6d32-46e8-8b59-41093be356ff,Namespace:kube-system,Attempt:0,}"
Dec 13 14:43:13.046158 env[1553]: time="2024-12-13T14:43:13.046045840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:43:13.046158 env[1553]: time="2024-12-13T14:43:13.046081018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:43:13.046158 env[1553]: time="2024-12-13T14:43:13.046102277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:43:13.046327 env[1553]: time="2024-12-13T14:43:13.046267162Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721 pid=3683 runtime=io.containerd.runc.v2
Dec 13 14:43:13.052350 systemd[1]: Started cri-containerd-9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721.scope.
Dec 13 14:43:13.065286 env[1553]: time="2024-12-13T14:43:13.065257129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkt7s,Uid:512e9b77-6d32-46e8-8b59-41093be356ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\""
Dec 13 14:43:13.066741 env[1553]: time="2024-12-13T14:43:13.066698630Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:43:13.071862 env[1553]: time="2024-12-13T14:43:13.071812340Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"07e4673dbc14bd6b7326d267cabf2c25bc9019f9871899bfbd12c5b06b2fc918\""
Dec 13 14:43:13.072156 env[1553]: time="2024-12-13T14:43:13.072109506Z" level=info msg="StartContainer for \"07e4673dbc14bd6b7326d267cabf2c25bc9019f9871899bfbd12c5b06b2fc918\""
Dec 13 14:43:13.083570 systemd[1]: Started cri-containerd-07e4673dbc14bd6b7326d267cabf2c25bc9019f9871899bfbd12c5b06b2fc918.scope.
Dec 13 14:43:13.102991 env[1553]: time="2024-12-13T14:43:13.102951590Z" level=info msg="StartContainer for \"07e4673dbc14bd6b7326d267cabf2c25bc9019f9871899bfbd12c5b06b2fc918\" returns successfully"
Dec 13 14:43:13.110614 systemd[1]: cri-containerd-07e4673dbc14bd6b7326d267cabf2c25bc9019f9871899bfbd12c5b06b2fc918.scope: Deactivated successfully.
Dec 13 14:43:13.126638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07e4673dbc14bd6b7326d267cabf2c25bc9019f9871899bfbd12c5b06b2fc918-rootfs.mount: Deactivated successfully.
Dec 13 14:43:13.132285 env[1553]: time="2024-12-13T14:43:13.132243680Z" level=info msg="shim disconnected" id=07e4673dbc14bd6b7326d267cabf2c25bc9019f9871899bfbd12c5b06b2fc918
Dec 13 14:43:13.132422 env[1553]: time="2024-12-13T14:43:13.132287753Z" level=warning msg="cleaning up after shim disconnected" id=07e4673dbc14bd6b7326d267cabf2c25bc9019f9871899bfbd12c5b06b2fc918 namespace=k8s.io
Dec 13 14:43:13.132422 env[1553]: time="2024-12-13T14:43:13.132299210Z" level=info msg="cleaning up dead shim"
Dec 13 14:43:13.138983 env[1553]: time="2024-12-13T14:43:13.138923268Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3765 runtime=io.containerd.runc.v2\n"
Dec 13 14:43:13.370382 kubelet[1874]: E1213 14:43:13.370266 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:13.655305 env[1553]: time="2024-12-13T14:43:13.655069527Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:43:13.666960 kubelet[1874]: I1213 14:43:13.666831 1874 setters.go:600] "Node became not ready" node="10.67.80.13" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:43:13Z","lastTransitionTime":"2024-12-13T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:43:13.669645 env[1553]: time="2024-12-13T14:43:13.669518500Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dea6f90a33f7a2a7c2a3fbf8f733a9daadba0264cde16ebfc901dbab73b8e0d0\""
Dec 13 14:43:13.669952 env[1553]: time="2024-12-13T14:43:13.669894537Z" level=info msg="StartContainer for \"dea6f90a33f7a2a7c2a3fbf8f733a9daadba0264cde16ebfc901dbab73b8e0d0\""
Dec 13 14:43:13.678232 systemd[1]: Started cri-containerd-dea6f90a33f7a2a7c2a3fbf8f733a9daadba0264cde16ebfc901dbab73b8e0d0.scope.
Dec 13 14:43:13.691392 env[1553]: time="2024-12-13T14:43:13.691345309Z" level=info msg="StartContainer for \"dea6f90a33f7a2a7c2a3fbf8f733a9daadba0264cde16ebfc901dbab73b8e0d0\" returns successfully"
Dec 13 14:43:13.694983 systemd[1]: cri-containerd-dea6f90a33f7a2a7c2a3fbf8f733a9daadba0264cde16ebfc901dbab73b8e0d0.scope: Deactivated successfully.
Dec 13 14:43:13.704987 env[1553]: time="2024-12-13T14:43:13.704923729Z" level=info msg="shim disconnected" id=dea6f90a33f7a2a7c2a3fbf8f733a9daadba0264cde16ebfc901dbab73b8e0d0
Dec 13 14:43:13.704987 env[1553]: time="2024-12-13T14:43:13.704953581Z" level=warning msg="cleaning up after shim disconnected" id=dea6f90a33f7a2a7c2a3fbf8f733a9daadba0264cde16ebfc901dbab73b8e0d0 namespace=k8s.io
Dec 13 14:43:13.704987 env[1553]: time="2024-12-13T14:43:13.704959851Z" level=info msg="cleaning up dead shim"
Dec 13 14:43:13.708731 env[1553]: time="2024-12-13T14:43:13.708709150Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3824 runtime=io.containerd.runc.v2\n"
Dec 13 14:43:14.120844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dea6f90a33f7a2a7c2a3fbf8f733a9daadba0264cde16ebfc901dbab73b8e0d0-rootfs.mount: Deactivated successfully.
Dec 13 14:43:14.371536 kubelet[1874]: E1213 14:43:14.371281 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:14.489791 kubelet[1874]: I1213 14:43:14.489679 1874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc297468-74ea-41ba-91ea-ee74fceccb96" path="/var/lib/kubelet/pods/fc297468-74ea-41ba-91ea-ee74fceccb96/volumes"
Dec 13 14:43:14.569182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477904543.mount: Deactivated successfully.
Dec 13 14:43:14.653236 env[1553]: time="2024-12-13T14:43:14.653140550Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:43:14.660576 env[1553]: time="2024-12-13T14:43:14.660510245Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"70eeb1b271457c7083e275b7d258c91689dc3659962ee62f5629d8fff842f522\""
Dec 13 14:43:14.660810 env[1553]: time="2024-12-13T14:43:14.660766631Z" level=info msg="StartContainer for \"70eeb1b271457c7083e275b7d258c91689dc3659962ee62f5629d8fff842f522\""
Dec 13 14:43:14.670907 systemd[1]: Started cri-containerd-70eeb1b271457c7083e275b7d258c91689dc3659962ee62f5629d8fff842f522.scope.
Dec 13 14:43:14.684684 env[1553]: time="2024-12-13T14:43:14.684615188Z" level=info msg="StartContainer for \"70eeb1b271457c7083e275b7d258c91689dc3659962ee62f5629d8fff842f522\" returns successfully"
Dec 13 14:43:14.686109 systemd[1]: cri-containerd-70eeb1b271457c7083e275b7d258c91689dc3659962ee62f5629d8fff842f522.scope: Deactivated successfully.
Dec 13 14:43:14.758883 env[1553]: time="2024-12-13T14:43:14.758765518Z" level=info msg="shim disconnected" id=70eeb1b271457c7083e275b7d258c91689dc3659962ee62f5629d8fff842f522
Dec 13 14:43:14.759285 env[1553]: time="2024-12-13T14:43:14.758881475Z" level=warning msg="cleaning up after shim disconnected" id=70eeb1b271457c7083e275b7d258c91689dc3659962ee62f5629d8fff842f522 namespace=k8s.io
Dec 13 14:43:14.759285 env[1553]: time="2024-12-13T14:43:14.758916208Z" level=info msg="cleaning up dead shim"
Dec 13 14:43:14.770064 env[1553]: time="2024-12-13T14:43:14.769991795Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:43:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3879 runtime=io.containerd.runc.v2\n"
Dec 13 14:43:15.016589 env[1553]: time="2024-12-13T14:43:15.016435862Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:43:15.017242 env[1553]: time="2024-12-13T14:43:15.017192322Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:43:15.017874 env[1553]: time="2024-12-13T14:43:15.017859383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:43:15.018196 env[1553]: time="2024-12-13T14:43:15.018181463Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:43:15.019604 env[1553]: time="2024-12-13T14:43:15.019569823Z" level=info msg="CreateContainer within sandbox \"595f1bbf9a0ff8adce31fa8dc70b8fc2d2adabb420c913a22e639ed73076767a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:43:15.024426 env[1553]: time="2024-12-13T14:43:15.024407744Z" level=info msg="CreateContainer within sandbox \"595f1bbf9a0ff8adce31fa8dc70b8fc2d2adabb420c913a22e639ed73076767a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1d151eba77dcfd0d900e969ab551868ce0bc84dbf47f1956eabca431f55da149\""
Dec 13 14:43:15.024810 env[1553]: time="2024-12-13T14:43:15.024759216Z" level=info msg="StartContainer for \"1d151eba77dcfd0d900e969ab551868ce0bc84dbf47f1956eabca431f55da149\""
Dec 13 14:43:15.033136 systemd[1]: Started cri-containerd-1d151eba77dcfd0d900e969ab551868ce0bc84dbf47f1956eabca431f55da149.scope.
Dec 13 14:43:15.046861 env[1553]: time="2024-12-13T14:43:15.046820768Z" level=info msg="StartContainer for \"1d151eba77dcfd0d900e969ab551868ce0bc84dbf47f1956eabca431f55da149\" returns successfully"
Dec 13 14:43:15.372304 kubelet[1874]: E1213 14:43:15.372183 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:15.672443 env[1553]: time="2024-12-13T14:43:15.672249939Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:43:15.688207 env[1553]: time="2024-12-13T14:43:15.688088347Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"91836f8f4ca99b3cf97a987a91d41a48f6ec6621ebf5e68303f8d7bf67a599c0\""
Dec 13 14:43:15.688601 env[1553]: time="2024-12-13T14:43:15.688588644Z" level=info msg="StartContainer for \"91836f8f4ca99b3cf97a987a91d41a48f6ec6621ebf5e68303f8d7bf67a599c0\""
Dec 13 14:43:15.690421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707506364.mount: Deactivated successfully.
Dec 13 14:43:15.697911 systemd[1]: Started cri-containerd-91836f8f4ca99b3cf97a987a91d41a48f6ec6621ebf5e68303f8d7bf67a599c0.scope.
Dec 13 14:43:15.709752 kubelet[1874]: I1213 14:43:15.709722 1874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pvfsc" podStartSLOduration=1.886598635 podStartE2EDuration="5.709710211s" podCreationTimestamp="2024-12-13 14:43:10 +0000 UTC" firstStartedPulling="2024-12-13 14:43:11.195714337 +0000 UTC m=+49.272381936" lastFinishedPulling="2024-12-13 14:43:15.018825922 +0000 UTC m=+53.095493512" observedRunningTime="2024-12-13 14:43:15.677148755 +0000 UTC m=+53.753816425" watchObservedRunningTime="2024-12-13 14:43:15.709710211 +0000 UTC m=+53.786377802"
Dec 13 14:43:15.710277 env[1553]: time="2024-12-13T14:43:15.710247546Z" level=info msg="StartContainer for \"91836f8f4ca99b3cf97a987a91d41a48f6ec6621ebf5e68303f8d7bf67a599c0\" returns successfully"
Dec 13 14:43:15.710333 systemd[1]: cri-containerd-91836f8f4ca99b3cf97a987a91d41a48f6ec6621ebf5e68303f8d7bf67a599c0.scope: Deactivated successfully.
Dec 13 14:43:15.817260 env[1553]: time="2024-12-13T14:43:15.817140659Z" level=info msg="shim disconnected" id=91836f8f4ca99b3cf97a987a91d41a48f6ec6621ebf5e68303f8d7bf67a599c0
Dec 13 14:43:15.817818 env[1553]: time="2024-12-13T14:43:15.817262207Z" level=warning msg="cleaning up after shim disconnected" id=91836f8f4ca99b3cf97a987a91d41a48f6ec6621ebf5e68303f8d7bf67a599c0 namespace=k8s.io
Dec 13 14:43:15.817818 env[1553]: time="2024-12-13T14:43:15.817310025Z" level=info msg="cleaning up dead shim"
Dec 13 14:43:15.830332 env[1553]: time="2024-12-13T14:43:15.830316997Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:43:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3983 runtime=io.containerd.runc.v2\n"
Dec 13 14:43:16.119788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91836f8f4ca99b3cf97a987a91d41a48f6ec6621ebf5e68303f8d7bf67a599c0-rootfs.mount: Deactivated successfully.
Dec 13 14:43:16.373730 kubelet[1874]: E1213 14:43:16.373505 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:16.678235 env[1553]: time="2024-12-13T14:43:16.677998954Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:43:16.697244 env[1553]: time="2024-12-13T14:43:16.697222497Z" level=info msg="CreateContainer within sandbox \"9b71ca0373cabcd159f7a6dad25ca1b90dfa800dfedb5ceff78a0fde61fbc721\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0886f6db12a0a8ce903f0a8618cc463933ae9f4cba25624c682f78eae02e8fed\""
Dec 13 14:43:16.697559 env[1553]: time="2024-12-13T14:43:16.697531018Z" level=info msg="StartContainer for \"0886f6db12a0a8ce903f0a8618cc463933ae9f4cba25624c682f78eae02e8fed\""
Dec 13 14:43:16.706638 systemd[1]: Started cri-containerd-0886f6db12a0a8ce903f0a8618cc463933ae9f4cba25624c682f78eae02e8fed.scope.
Dec 13 14:43:16.720156 env[1553]: time="2024-12-13T14:43:16.720105352Z" level=info msg="StartContainer for \"0886f6db12a0a8ce903f0a8618cc463933ae9f4cba25624c682f78eae02e8fed\" returns successfully"
Dec 13 14:43:16.871518 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:43:17.373977 kubelet[1874]: E1213 14:43:17.373848 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:17.719553 kubelet[1874]: I1213 14:43:17.719324 1874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nkt7s" podStartSLOduration=5.719288776 podStartE2EDuration="5.719288776s" podCreationTimestamp="2024-12-13 14:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:43:17.719000454 +0000 UTC m=+55.795668120" watchObservedRunningTime="2024-12-13 14:43:17.719288776 +0000 UTC m=+55.795956419"
Dec 13 14:43:18.374708 kubelet[1874]: E1213 14:43:18.374681 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:19.375567 kubelet[1874]: E1213 14:43:19.375501 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:19.908315 systemd-networkd[1297]: lxc_health: Link UP
Dec 13 14:43:19.938506 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:43:19.938628 systemd-networkd[1297]: lxc_health: Gained carrier
Dec 13 14:43:20.375825 kubelet[1874]: E1213 14:43:20.375774 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:21.320601 systemd-networkd[1297]: lxc_health: Gained IPv6LL
Dec 13 14:43:21.376258 kubelet[1874]: E1213 14:43:21.376200 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:22.339605 kubelet[1874]: E1213 14:43:22.339535 1874 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:22.362555 env[1553]: time="2024-12-13T14:43:22.362455421Z" level=info msg="StopPodSandbox for \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\""
Dec 13 14:43:22.363376 env[1553]: time="2024-12-13T14:43:22.362666517Z" level=info msg="TearDown network for sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" successfully"
Dec 13 14:43:22.363376 env[1553]: time="2024-12-13T14:43:22.362751799Z" level=info msg="StopPodSandbox for \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" returns successfully"
Dec 13 14:43:22.363686 env[1553]: time="2024-12-13T14:43:22.363507434Z" level=info msg="RemovePodSandbox for \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\""
Dec 13 14:43:22.363686 env[1553]: time="2024-12-13T14:43:22.363600060Z" level=info msg="Forcibly stopping sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\""
Dec 13 14:43:22.363885 env[1553]: time="2024-12-13T14:43:22.363785818Z" level=info msg="TearDown network for sandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" successfully"
Dec 13 14:43:22.367807 env[1553]: time="2024-12-13T14:43:22.367720407Z" level=info msg="RemovePodSandbox \"cb134abd5e6fa6278eef9caca36f7fb6fb7257346a937544688aadcfe8380984\" returns successfully"
Dec 13 14:43:22.376823 kubelet[1874]: E1213 14:43:22.376759 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:23.377360 kubelet[1874]: E1213 14:43:23.377245 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:24.378211 kubelet[1874]: E1213 14:43:24.378091 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:25.379407 kubelet[1874]: E1213 14:43:25.379282 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:43:26.379730 kubelet[1874]: E1213 14:43:26.379592 1874 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"