Dec 16 13:02:53.889385 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 16 13:02:53.889406 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:02:53.889415 kernel: BIOS-provided physical RAM map: Dec 16 13:02:53.889425 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 16 13:02:53.889440 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 16 13:02:53.889447 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 16 13:02:53.889455 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Dec 16 13:02:53.889462 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 16 13:02:53.889472 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Dec 16 13:02:53.889478 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Dec 16 13:02:53.889485 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Dec 16 13:02:53.889492 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Dec 16 13:02:53.889501 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Dec 16 13:02:53.889508 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Dec 16 13:02:53.889516 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Dec 16 13:02:53.889523 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 16 13:02:53.889532 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Dec 16 13:02:53.889542 
kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Dec 16 13:02:53.889549 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Dec 16 13:02:53.889556 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Dec 16 13:02:53.889563 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Dec 16 13:02:53.889570 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 16 13:02:53.889577 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 16 13:02:53.889584 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 13:02:53.889591 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Dec 16 13:02:53.889598 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 16 13:02:53.889605 kernel: NX (Execute Disable) protection: active Dec 16 13:02:53.889612 kernel: APIC: Static calls initialized Dec 16 13:02:53.889621 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Dec 16 13:02:53.889629 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Dec 16 13:02:53.889636 kernel: extended physical RAM map: Dec 16 13:02:53.889643 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 16 13:02:53.889650 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 16 13:02:53.889657 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 16 13:02:53.889680 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Dec 16 13:02:53.889688 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 16 13:02:53.889695 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Dec 16 13:02:53.889702 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Dec 16 13:02:53.889709 kernel: reserve setup_data: [mem 
0x0000000000900000-0x000000009b2e3017] usable Dec 16 13:02:53.889719 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Dec 16 13:02:53.889730 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Dec 16 13:02:53.889737 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Dec 16 13:02:53.889745 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Dec 16 13:02:53.889752 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Dec 16 13:02:53.889762 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Dec 16 13:02:53.889769 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Dec 16 13:02:53.889777 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Dec 16 13:02:53.889784 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 16 13:02:53.889792 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Dec 16 13:02:53.889799 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Dec 16 13:02:53.889806 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Dec 16 13:02:53.889814 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Dec 16 13:02:53.889823 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Dec 16 13:02:53.889832 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 16 13:02:53.889842 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 16 13:02:53.889855 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 13:02:53.889862 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Dec 16 13:02:53.889870 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] 
reserved Dec 16 13:02:53.889880 kernel: efi: EFI v2.7 by EDK II Dec 16 13:02:53.889888 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Dec 16 13:02:53.889895 kernel: random: crng init done Dec 16 13:02:53.889905 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Dec 16 13:02:53.889912 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Dec 16 13:02:53.889922 kernel: secureboot: Secure boot disabled Dec 16 13:02:53.889929 kernel: SMBIOS 2.8 present. Dec 16 13:02:53.889936 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Dec 16 13:02:53.889946 kernel: DMI: Memory slots populated: 1/1 Dec 16 13:02:53.889953 kernel: Hypervisor detected: KVM Dec 16 13:02:53.889961 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Dec 16 13:02:53.889968 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 13:02:53.889976 kernel: kvm-clock: using sched offset of 5057167741 cycles Dec 16 13:02:53.889984 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 13:02:53.889991 kernel: tsc: Detected 2794.748 MHz processor Dec 16 13:02:53.889999 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 13:02:53.890007 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 13:02:53.890014 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Dec 16 13:02:53.890022 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 16 13:02:53.890032 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 13:02:53.890039 kernel: Using GB pages for direct mapping Dec 16 13:02:53.890047 kernel: ACPI: Early table checksum verification disabled Dec 16 13:02:53.890054 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Dec 16 13:02:53.890062 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 16 13:02:53.890070 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 13:02:53.890077 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 13:02:53.890084 kernel: ACPI: FACS 0x000000009CBDD000 000040 Dec 16 13:02:53.890092 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 13:02:53.890102 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 13:02:53.890109 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 13:02:53.890117 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 13:02:53.890124 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 16 13:02:53.890132 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Dec 16 13:02:53.890139 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Dec 16 13:02:53.890147 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Dec 16 13:02:53.890154 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Dec 16 13:02:53.890164 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Dec 16 13:02:53.890171 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Dec 16 13:02:53.890179 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Dec 16 13:02:53.890186 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Dec 16 13:02:53.890194 kernel: No NUMA configuration found Dec 16 13:02:53.890202 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Dec 16 13:02:53.890209 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Dec 16 13:02:53.890217 kernel: Zone ranges: Dec 16 13:02:53.890224 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 13:02:53.890232 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cedbfff] Dec 16 13:02:53.890242 kernel: Normal empty Dec 16 13:02:53.890249 kernel: Device empty Dec 16 13:02:53.890256 kernel: Movable zone start for each node Dec 16 13:02:53.890264 kernel: Early memory node ranges Dec 16 13:02:53.890271 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 16 13:02:53.890281 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Dec 16 13:02:53.890289 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Dec 16 13:02:53.890296 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Dec 16 13:02:53.890304 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Dec 16 13:02:53.890314 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Dec 16 13:02:53.890321 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Dec 16 13:02:53.890328 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Dec 16 13:02:53.890336 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Dec 16 13:02:53.890346 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:02:53.890360 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 16 13:02:53.890371 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Dec 16 13:02:53.890378 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:02:53.890386 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Dec 16 13:02:53.890394 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Dec 16 13:02:53.890402 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 16 13:02:53.890410 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Dec 16 13:02:53.890420 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Dec 16 13:02:53.890428 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 13:02:53.890448 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 13:02:53.890456 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 
Dec 16 13:02:53.890464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 13:02:53.890474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 13:02:53.890482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 13:02:53.890490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 13:02:53.890498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 13:02:53.890506 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 13:02:53.890514 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 16 13:02:53.890522 kernel: TSC deadline timer available Dec 16 13:02:53.890530 kernel: CPU topo: Max. logical packages: 1 Dec 16 13:02:53.890537 kernel: CPU topo: Max. logical dies: 1 Dec 16 13:02:53.890548 kernel: CPU topo: Max. dies per package: 1 Dec 16 13:02:53.890555 kernel: CPU topo: Max. threads per core: 1 Dec 16 13:02:53.890563 kernel: CPU topo: Num. cores per package: 4 Dec 16 13:02:53.890571 kernel: CPU topo: Num. 
threads per package: 4 Dec 16 13:02:53.890578 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Dec 16 13:02:53.890586 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 13:02:53.890594 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 16 13:02:53.890602 kernel: kvm-guest: setup PV sched yield Dec 16 13:02:53.890609 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Dec 16 13:02:53.890619 kernel: Booting paravirtualized kernel on KVM Dec 16 13:02:53.890627 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 13:02:53.890635 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 16 13:02:53.890643 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Dec 16 13:02:53.890651 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Dec 16 13:02:53.890659 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 16 13:02:53.890682 kernel: kvm-guest: PV spinlocks enabled Dec 16 13:02:53.890690 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 13:02:53.890702 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:02:53.890713 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 13:02:53.890721 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 13:02:53.890729 kernel: Fallback order for Node 0: 0 Dec 16 13:02:53.890736 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 641450 Dec 16 13:02:53.890744 kernel: Policy zone: DMA32 Dec 16 13:02:53.890752 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 13:02:53.890760 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 16 13:02:53.890767 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 13:02:53.890775 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 13:02:53.890785 kernel: Dynamic Preempt: voluntary Dec 16 13:02:53.890793 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 13:02:53.890807 kernel: rcu: RCU event tracing is enabled. Dec 16 13:02:53.890815 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 16 13:02:53.890825 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 13:02:53.890836 kernel: Rude variant of Tasks RCU enabled. Dec 16 13:02:53.890846 kernel: Tracing variant of Tasks RCU enabled. Dec 16 13:02:53.890855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 13:02:53.890866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 16 13:02:53.890887 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 13:02:53.890895 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 13:02:53.890903 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 13:02:53.890911 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 16 13:02:53.890919 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 16 13:02:53.890927 kernel: Console: colour dummy device 80x25 Dec 16 13:02:53.890935 kernel: printk: legacy console [ttyS0] enabled Dec 16 13:02:53.890942 kernel: ACPI: Core revision 20240827 Dec 16 13:02:53.890950 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 16 13:02:53.890960 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 13:02:53.890968 kernel: x2apic enabled Dec 16 13:02:53.890976 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 13:02:53.890984 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 16 13:02:53.890992 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 16 13:02:53.891000 kernel: kvm-guest: setup PV IPIs Dec 16 13:02:53.891007 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 16 13:02:53.891015 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 16 13:02:53.891023 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 16 13:02:53.891034 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 13:02:53.891041 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 16 13:02:53.891049 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 16 13:02:53.891057 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 13:02:53.891065 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 13:02:53.891073 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 13:02:53.891081 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 16 13:02:53.891088 kernel: active return thunk: retbleed_return_thunk Dec 16 13:02:53.891096 kernel: RETBleed: Mitigation: untrained return thunk Dec 16 13:02:53.891109 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 13:02:53.891118 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 13:02:53.891129 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 16 13:02:53.891140 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 16 13:02:53.891150 kernel: active return thunk: srso_return_thunk Dec 16 13:02:53.891160 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 16 13:02:53.891170 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 13:02:53.891180 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 13:02:53.891194 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 13:02:53.891204 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 13:02:53.891214 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Dec 16 13:02:53.891225 kernel: Freeing SMP alternatives memory: 32K Dec 16 13:02:53.891235 kernel: pid_max: default: 32768 minimum: 301 Dec 16 13:02:53.891245 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 13:02:53.891255 kernel: landlock: Up and running. Dec 16 13:02:53.891265 kernel: SELinux: Initializing. Dec 16 13:02:53.891275 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 13:02:53.891290 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 13:02:53.891301 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 16 13:02:53.891311 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 16 13:02:53.891318 kernel: ... version: 0 Dec 16 13:02:53.891326 kernel: ... bit width: 48 Dec 16 13:02:53.891334 kernel: ... generic registers: 6 Dec 16 13:02:53.891341 kernel: ... value mask: 0000ffffffffffff Dec 16 13:02:53.891349 kernel: ... max period: 00007fffffffffff Dec 16 13:02:53.891357 kernel: ... fixed-purpose events: 0 Dec 16 13:02:53.891368 kernel: ... event mask: 000000000000003f Dec 16 13:02:53.891376 kernel: signal: max sigframe size: 1776 Dec 16 13:02:53.891383 kernel: rcu: Hierarchical SRCU implementation. Dec 16 13:02:53.891391 kernel: rcu: Max phase no-delay instances is 400. Dec 16 13:02:53.891403 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 13:02:53.891411 kernel: smp: Bringing up secondary CPUs ... Dec 16 13:02:53.891418 kernel: smpboot: x86: Booting SMP configuration: Dec 16 13:02:53.891426 kernel: .... 
node #0, CPUs: #1 #2 #3 Dec 16 13:02:53.891441 kernel: smp: Brought up 1 node, 4 CPUs Dec 16 13:02:53.891453 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 16 13:02:53.891461 kernel: Memory: 2414472K/2565800K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145392K reserved, 0K cma-reserved) Dec 16 13:02:53.891469 kernel: devtmpfs: initialized Dec 16 13:02:53.891477 kernel: x86/mm: Memory block size: 128MB Dec 16 13:02:53.891484 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Dec 16 13:02:53.891492 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Dec 16 13:02:53.891500 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Dec 16 13:02:53.891508 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Dec 16 13:02:53.891516 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Dec 16 13:02:53.891526 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Dec 16 13:02:53.891534 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 13:02:53.891542 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 16 13:02:53.891550 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 13:02:53.891558 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 13:02:53.891566 kernel: audit: initializing netlink subsys (disabled) Dec 16 13:02:53.891574 kernel: audit: type=2000 audit(1765890171.408:1): state=initialized audit_enabled=0 res=1 Dec 16 13:02:53.891581 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 13:02:53.891589 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 13:02:53.891599 kernel: cpuidle: using governor menu Dec 16 13:02:53.891607 kernel: acpiphp: ACPI Hot Plug PCI 
Controller Driver version: 0.5 Dec 16 13:02:53.891615 kernel: dca service started, version 1.12.1 Dec 16 13:02:53.891622 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Dec 16 13:02:53.891630 kernel: PCI: Using configuration type 1 for base access Dec 16 13:02:53.891638 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 16 13:02:53.891646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 13:02:53.891654 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 13:02:53.891676 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 13:02:53.891687 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 13:02:53.891695 kernel: ACPI: Added _OSI(Module Device) Dec 16 13:02:53.891702 kernel: ACPI: Added _OSI(Processor Device) Dec 16 13:02:53.891710 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 13:02:53.891718 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 13:02:53.891726 kernel: ACPI: Interpreter enabled Dec 16 13:02:53.891733 kernel: ACPI: PM: (supports S0 S3 S5) Dec 16 13:02:53.891741 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 13:02:53.891749 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 13:02:53.891759 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 13:02:53.891766 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 13:02:53.891774 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 13:02:53.892005 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 13:02:53.892133 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 16 13:02:53.892257 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 16 13:02:53.892267 kernel: PCI host bridge to bus 0000:00 Dec 16 
13:02:53.892491 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 13:02:53.892610 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 13:02:53.892762 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 13:02:53.892907 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Dec 16 13:02:53.893043 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Dec 16 13:02:53.893187 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Dec 16 13:02:53.893335 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 13:02:53.893544 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 16 13:02:53.893734 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Dec 16 13:02:53.893889 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Dec 16 13:02:53.894035 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Dec 16 13:02:53.894183 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Dec 16 13:02:53.894348 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 13:02:53.894541 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 13:02:53.894760 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Dec 16 13:02:53.894926 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Dec 16 13:02:53.895088 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Dec 16 13:02:53.895272 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 13:02:53.895445 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Dec 16 13:02:53.895607 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Dec 16 13:02:53.895800 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Dec 16 
13:02:53.896001 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 13:02:53.896182 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Dec 16 13:02:53.896347 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Dec 16 13:02:53.896519 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Dec 16 13:02:53.896700 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Dec 16 13:02:53.896868 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 16 13:02:53.897021 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 13:02:53.897165 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 16 13:02:53.897289 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Dec 16 13:02:53.897412 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Dec 16 13:02:53.897560 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 16 13:02:53.897717 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Dec 16 13:02:53.897729 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 13:02:53.897742 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 13:02:53.897750 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 13:02:53.897759 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 13:02:53.897767 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 13:02:53.897775 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 13:02:53.897783 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 13:02:53.897791 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 13:02:53.897799 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 13:02:53.897807 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 13:02:53.897818 kernel: ACPI: PCI: 
Interrupt link GSIC configured for IRQ 18 Dec 16 13:02:53.897829 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 13:02:53.897840 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 16 13:02:53.897850 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 13:02:53.897861 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 13:02:53.897871 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 13:02:53.897879 kernel: iommu: Default domain type: Translated Dec 16 13:02:53.897888 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 13:02:53.897897 kernel: efivars: Registered efivars operations Dec 16 13:02:53.897909 kernel: PCI: Using ACPI for IRQ routing Dec 16 13:02:53.897919 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 13:02:53.897927 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Dec 16 13:02:53.897935 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Dec 16 13:02:53.897943 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Dec 16 13:02:53.897951 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Dec 16 13:02:53.897959 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Dec 16 13:02:53.897966 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Dec 16 13:02:53.897974 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Dec 16 13:02:53.897984 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Dec 16 13:02:53.898115 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 13:02:53.898260 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 13:02:53.898417 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 13:02:53.898445 kernel: vgaarb: loaded Dec 16 13:02:53.898456 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 16 13:02:53.898466 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 16 13:02:53.898476 
kernel: clocksource: Switched to clocksource kvm-clock Dec 16 13:02:53.898491 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 13:02:53.898501 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 13:02:53.898512 kernel: pnp: PnP ACPI init Dec 16 13:02:53.898770 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Dec 16 13:02:53.898795 kernel: pnp: PnP ACPI: found 6 devices Dec 16 13:02:53.898806 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 13:02:53.898818 kernel: NET: Registered PF_INET protocol family Dec 16 13:02:53.898829 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 13:02:53.898844 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 13:02:53.898856 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 13:02:53.898868 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 13:02:53.898879 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 13:02:53.898891 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 13:02:53.898902 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 13:02:53.898913 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 13:02:53.898924 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 13:02:53.898934 kernel: NET: Registered PF_XDP protocol family Dec 16 13:02:53.899097 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Dec 16 13:02:53.899238 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Dec 16 13:02:53.899367 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 13:02:53.899505 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 13:02:53.899631 kernel: pci_bus 
0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 13:02:53.899779 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Dec 16 13:02:53.899916 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Dec 16 13:02:53.900049 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Dec 16 13:02:53.900062 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:02:53.900074 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 16 13:02:53.900087 kernel: Initialise system trusted keyrings Dec 16 13:02:53.900098 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 13:02:53.900111 kernel: Key type asymmetric registered Dec 16 13:02:53.900121 kernel: Asymmetric key parser 'x509' registered Dec 16 13:02:53.900132 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:02:53.900142 kernel: io scheduler mq-deadline registered Dec 16 13:02:53.900152 kernel: io scheduler kyber registered Dec 16 13:02:53.900165 kernel: io scheduler bfq registered Dec 16 13:02:53.900176 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:02:53.900187 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 13:02:53.900197 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 13:02:53.900208 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 13:02:53.900221 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:02:53.900231 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:02:53.900242 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 13:02:53.900252 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 13:02:53.900263 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 13:02:53.900420 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 16 13:02:53.900444 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Dec 16 13:02:53.900575 kernel: rtc_cmos 00:04: registered as rtc0 Dec 16 13:02:53.900839 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T13:02:53 UTC (1765890173) Dec 16 13:02:53.900976 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 16 13:02:53.900988 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 16 13:02:53.900996 kernel: efifb: probing for efifb Dec 16 13:02:53.901005 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 16 13:02:53.901013 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 16 13:02:53.901022 kernel: efifb: scrolling: redraw Dec 16 13:02:53.901031 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 13:02:53.901039 kernel: Console: switching to colour frame buffer device 160x50 Dec 16 13:02:53.901052 kernel: fb0: EFI VGA frame buffer device Dec 16 13:02:53.901061 kernel: pstore: Using crash dump compression: deflate Dec 16 13:02:53.901069 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:02:53.901078 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:02:53.901086 kernel: Segment Routing with IPv6 Dec 16 13:02:53.901095 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:02:53.901103 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:02:53.901112 kernel: Key type dns_resolver registered Dec 16 13:02:53.901120 kernel: IPI shorthand broadcast: enabled Dec 16 13:02:53.901132 kernel: sched_clock: Marking stable (3370002683, 289243844)->(3746070357, -86823830) Dec 16 13:02:53.901144 kernel: registered taskstats version 1 Dec 16 13:02:53.901154 kernel: Loading compiled-in X.509 certificates Dec 16 13:02:53.901165 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 13:02:53.901175 kernel: Demotion targets for Node 0: null Dec 16 13:02:53.901186 kernel: Key type .fscrypt registered Dec 16 
13:02:53.901196 kernel: Key type fscrypt-provisioning registered Dec 16 13:02:53.901207 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 13:02:53.901217 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:02:53.901232 kernel: ima: No architecture policies found Dec 16 13:02:53.901242 kernel: clk: Disabling unused clocks Dec 16 13:02:53.901253 kernel: Warning: unable to open an initial console. Dec 16 13:02:53.901265 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 13:02:53.901275 kernel: Write protecting the kernel read-only data: 40960k Dec 16 13:02:53.901287 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 13:02:53.901297 kernel: Run /init as init process Dec 16 13:02:53.901308 kernel: with arguments: Dec 16 13:02:53.901320 kernel: /init Dec 16 13:02:53.901336 kernel: with environment: Dec 16 13:02:53.901347 kernel: HOME=/ Dec 16 13:02:53.901358 kernel: TERM=linux Dec 16 13:02:53.901371 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:02:53.901387 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:02:53.901398 systemd[1]: Detected virtualization kvm. Dec 16 13:02:53.901407 systemd[1]: Detected architecture x86-64. Dec 16 13:02:53.901419 systemd[1]: Running in initrd. Dec 16 13:02:53.901427 systemd[1]: No hostname configured, using default hostname. Dec 16 13:02:53.901447 systemd[1]: Hostname set to . Dec 16 13:02:53.901456 systemd[1]: Initializing machine ID from VM UUID. Dec 16 13:02:53.901464 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:02:53.901473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 13:02:53.901482 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:02:53.901491 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:02:53.901500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:02:53.901512 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:02:53.901521 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:02:53.901534 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:02:53.901543 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:02:53.901552 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:02:53.901561 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:02:53.901572 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:02:53.901581 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:02:53.901590 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:02:53.901599 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:02:53.901607 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:02:53.901616 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:02:53.901625 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:02:53.901634 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:02:53.901643 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 16 13:02:53.901654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:02:53.901663 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:02:53.901687 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:02:53.901696 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:02:53.901705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:02:53.901713 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:02:53.901723 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:02:53.901732 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:02:53.901744 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:02:53.901753 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:02:53.901762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:02:53.901801 systemd-journald[202]: Collecting audit messages is disabled. Dec 16 13:02:53.901828 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:02:53.901841 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:02:53.901853 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:02:53.901863 systemd-journald[202]: Journal started Dec 16 13:02:53.901885 systemd-journald[202]: Runtime Journal (/run/log/journal/4e43485b28ea4727ac9ce0aa6011bc1f) is 6M, max 48.1M, 42.1M free. Dec 16 13:02:53.906418 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:02:53.909915 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Dec 16 13:02:53.911860 systemd-modules-load[205]: Inserted module 'overlay' Dec 16 13:02:53.926830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:02:53.930344 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:02:53.944686 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:02:53.947260 systemd-modules-load[205]: Inserted module 'br_netfilter' Dec 16 13:02:53.948803 kernel: Bridge firewalling registered Dec 16 13:02:53.950499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:02:53.951427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:02:53.955631 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:02:53.957928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:02:53.965713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:02:53.977186 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:02:53.978089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:02:53.982642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:02:53.983893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:02:53.988523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:02:54.003700 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:02:54.008786 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 16 13:02:54.046878 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:02:54.050736 systemd-resolved[241]: Positive Trust Anchors: Dec 16 13:02:54.050754 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:02:54.050782 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:02:54.053321 systemd-resolved[241]: Defaulting to hostname 'linux'. Dec 16 13:02:54.054566 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:02:54.056315 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:02:54.174700 kernel: SCSI subsystem initialized Dec 16 13:02:54.184691 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:02:54.195691 kernel: iscsi: registered transport (tcp) Dec 16 13:02:54.221751 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:02:54.221804 kernel: QLogic iSCSI HBA Driver Dec 16 13:02:54.244325 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Dec 16 13:02:54.275876 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:02:54.278290 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:02:54.340469 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:02:54.345494 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:02:54.409713 kernel: raid6: avx2x4 gen() 28073 MB/s Dec 16 13:02:54.426703 kernel: raid6: avx2x2 gen() 28016 MB/s Dec 16 13:02:54.444541 kernel: raid6: avx2x1 gen() 23575 MB/s Dec 16 13:02:54.444586 kernel: raid6: using algorithm avx2x4 gen() 28073 MB/s Dec 16 13:02:54.462558 kernel: raid6: .... xor() 7264 MB/s, rmw enabled Dec 16 13:02:54.462594 kernel: raid6: using avx2x2 recovery algorithm Dec 16 13:02:54.483701 kernel: xor: automatically using best checksumming function avx Dec 16 13:02:54.655705 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:02:54.664472 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:02:54.667127 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:02:54.705057 systemd-udevd[454]: Using default interface naming scheme 'v255'. Dec 16 13:02:54.710897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:02:54.712562 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:02:54.744145 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation Dec 16 13:02:54.776956 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:02:54.782127 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:02:54.874493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:02:54.881865 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 16 13:02:54.915713 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 16 13:02:54.921276 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 16 13:02:54.929709 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:02:54.929744 kernel: GPT:9289727 != 19775487 Dec 16 13:02:54.929762 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 13:02:54.929776 kernel: GPT:9289727 != 19775487 Dec 16 13:02:54.929799 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:02:54.929817 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:02:54.938695 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:02:54.952718 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:02:54.964698 kernel: libata version 3.00 loaded. Dec 16 13:02:54.974109 kernel: AES CTR mode by8 optimization enabled Dec 16 13:02:54.991703 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 13:02:54.992022 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 13:02:54.999012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:02:55.001849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:02:55.006845 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 13:02:55.007120 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 13:02:55.007357 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 13:02:55.013748 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:02:55.018821 kernel: scsi host0: ahci Dec 16 13:02:55.019065 kernel: scsi host1: ahci Dec 16 13:02:55.019223 kernel: scsi host2: ahci Dec 16 13:02:55.019703 kernel: scsi host3: ahci Dec 16 13:02:55.020777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 13:02:55.034599 kernel: scsi host4: ahci Dec 16 13:02:55.034837 kernel: scsi host5: ahci Dec 16 13:02:55.035039 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Dec 16 13:02:55.035056 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Dec 16 13:02:55.035071 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Dec 16 13:02:55.035086 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Dec 16 13:02:55.035106 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Dec 16 13:02:55.035121 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Dec 16 13:02:55.035051 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:02:55.047038 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 13:02:55.065512 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 16 13:02:55.067141 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 13:02:55.079995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 13:02:55.098301 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 13:02:55.102268 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:02:55.104119 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:02:55.104187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:02:55.107901 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 13:02:55.118494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:02:55.119370 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:02:55.156060 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:02:55.164391 disk-uuid[620]: Primary Header is updated. Dec 16 13:02:55.164391 disk-uuid[620]: Secondary Entries is updated. Dec 16 13:02:55.164391 disk-uuid[620]: Secondary Header is updated. Dec 16 13:02:55.170685 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:02:55.175699 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:02:55.341100 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:55.341190 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:55.342699 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 13:02:55.344719 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:55.344806 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:55.345715 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 13:02:55.347648 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 13:02:55.347688 kernel: ata3.00: applying bridge limits Dec 16 13:02:55.349685 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 13:02:55.349712 kernel: ata3.00: configured for UDMA/100 Dec 16 13:02:55.352696 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 13:02:55.352731 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 13:02:55.421280 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 13:02:55.421685 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:02:55.433778 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 16 13:02:55.845120 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Dec 16 13:02:55.847837 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:02:55.851352 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:02:55.853561 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:02:55.856376 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:02:55.878966 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:02:56.178725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:02:56.179696 disk-uuid[626]: The operation has completed successfully. Dec 16 13:02:56.211869 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:02:56.212038 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:02:56.250470 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:02:56.282850 sh[654]: Success Dec 16 13:02:56.306244 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:02:56.306278 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:02:56.308231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:02:56.320705 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 13:02:56.352073 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:02:56.365136 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:02:56.379382 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 16 13:02:56.386206 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (666) Dec 16 13:02:56.389965 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:02:56.390008 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:02:56.395876 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:02:56.395923 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:02:56.397340 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:02:56.400789 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:02:56.404962 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:02:56.406189 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:02:56.412278 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:02:56.447700 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (701) Dec 16 13:02:56.450719 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:02:56.450749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:02:56.456471 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:02:56.456496 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:02:56.462719 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:02:56.464095 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:02:56.468995 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 13:02:56.564057 ignition[751]: Ignition 2.22.0 Dec 16 13:02:56.565047 ignition[751]: Stage: fetch-offline Dec 16 13:02:56.565572 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:02:56.565139 ignition[751]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:02:56.568764 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:02:56.565162 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:02:56.567031 ignition[751]: parsed url from cmdline: "" Dec 16 13:02:56.567038 ignition[751]: no config URL provided Dec 16 13:02:56.567052 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:02:56.567072 ignition[751]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:02:56.567113 ignition[751]: op(1): [started] loading QEMU firmware config module Dec 16 13:02:56.567127 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 13:02:56.590580 ignition[751]: op(1): [finished] loading QEMU firmware config module Dec 16 13:02:56.590611 ignition[751]: QEMU firmware config was not found. Ignoring... Dec 16 13:02:56.593681 ignition[751]: parsing config with SHA512: 09ddb7352d96d6aa4bdae21bd7de1b9bbbf56378ef0b20371d216c3102af6d2bdf7f8d14046331329bded04ac8b70a76e4521bfc9f37c097f3fb01da8f944d36 Dec 16 13:02:56.598896 unknown[751]: fetched base config from "system" Dec 16 13:02:56.598915 unknown[751]: fetched user config from "qemu" Dec 16 13:02:56.600050 ignition[751]: fetch-offline: fetch-offline passed Dec 16 13:02:56.600131 ignition[751]: Ignition finished successfully Dec 16 13:02:56.603904 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 16 13:02:56.621787 systemd-networkd[842]: lo: Link UP Dec 16 13:02:56.621796 systemd-networkd[842]: lo: Gained carrier Dec 16 13:02:56.623380 systemd-networkd[842]: Enumeration completed Dec 16 13:02:56.623762 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:02:56.623767 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:02:56.624984 systemd-networkd[842]: eth0: Link UP Dec 16 13:02:56.625130 systemd-networkd[842]: eth0: Gained carrier Dec 16 13:02:56.625139 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:02:56.625852 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:02:56.627179 systemd[1]: Reached target network.target - Network. Dec 16 13:02:56.627707 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 13:02:56.636999 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:02:56.663770 systemd-networkd[842]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 13:02:56.674942 ignition[848]: Ignition 2.22.0 Dec 16 13:02:56.674958 ignition[848]: Stage: kargs Dec 16 13:02:56.675087 ignition[848]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:02:56.675097 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:02:56.675701 ignition[848]: kargs: kargs passed Dec 16 13:02:56.675749 ignition[848]: Ignition finished successfully Dec 16 13:02:56.682516 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:02:56.684586 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 16 13:02:56.717409 ignition[857]: Ignition 2.22.0 Dec 16 13:02:56.717422 ignition[857]: Stage: disks Dec 16 13:02:56.717543 ignition[857]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:02:56.717554 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:02:56.718293 ignition[857]: disks: disks passed Dec 16 13:02:56.722106 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:02:56.718337 ignition[857]: Ignition finished successfully Dec 16 13:02:56.723437 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:02:56.725717 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:02:56.726237 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:02:56.732131 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:02:56.735069 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:02:56.739479 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:02:56.764749 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 13:02:56.776473 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:02:56.778369 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:02:56.879692 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:02:56.880289 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:02:56.881779 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:02:56.886616 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:02:56.888034 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:02:56.890476 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Dec 16 13:02:56.890516 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:02:56.890538 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:02:56.904608 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:02:56.906651 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:02:56.931693 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (874) Dec 16 13:02:56.931754 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:02:56.935738 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:02:56.939905 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:02:56.939943 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:02:56.942628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:02:56.961941 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:02:56.968075 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:02:56.973520 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:02:56.978076 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:02:57.069337 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:02:57.072190 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:02:57.075035 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:02:57.108691 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:02:57.124889 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 16 13:02:57.148038 ignition[987]: INFO : Ignition 2.22.0
Dec 16 13:02:57.148038 ignition[987]: INFO : Stage: mount
Dec 16 13:02:57.150596 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:02:57.150596 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:02:57.150596 ignition[987]: INFO : mount: mount passed
Dec 16 13:02:57.150596 ignition[987]: INFO : Ignition finished successfully
Dec 16 13:02:57.158977 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:02:57.162213 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:02:57.388150 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:02:57.390283 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:02:57.422634 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1000)
Dec 16 13:02:57.422690 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:02:57.422708 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:02:57.427871 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 13:02:57.427893 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 13:02:57.429621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:02:57.464164 ignition[1017]: INFO : Ignition 2.22.0
Dec 16 13:02:57.464164 ignition[1017]: INFO : Stage: files
Dec 16 13:02:57.467073 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:02:57.467073 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:02:57.467073 ignition[1017]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:02:57.467073 ignition[1017]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:02:57.467073 ignition[1017]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:02:57.477225 ignition[1017]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:02:57.477225 ignition[1017]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:02:57.477225 ignition[1017]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:02:57.477225 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:02:57.477225 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:02:57.469750 unknown[1017]: wrote ssh authorized keys file for user: core
Dec 16 13:02:57.639590 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:02:57.642919 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:02:57.642919 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:02:57.650254 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:02:57.650254 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:02:57.650254 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 16 13:02:57.868852 systemd-networkd[842]: eth0: Gained IPv6LL
Dec 16 13:02:57.942179 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 16 13:02:58.448289 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:02:58.448289 ignition[1017]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Dec 16 13:02:58.454422 ignition[1017]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:02:58.480878 ignition[1017]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:02:58.480878 ignition[1017]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Dec 16 13:02:58.480878 ignition[1017]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:02:58.507497 ignition[1017]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:02:58.518416 ignition[1017]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:02:58.521076 ignition[1017]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:02:58.521076 ignition[1017]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:02:58.521076 ignition[1017]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:02:58.521076 ignition[1017]: INFO : files: files passed
Dec 16 13:02:58.521076 ignition[1017]: INFO : Ignition finished successfully
Dec 16 13:02:58.527164 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:02:58.532081 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:02:58.534479 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:02:58.550782 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:02:58.550920 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:02:58.555811 initrd-setup-root-after-ignition[1045]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 13:02:58.558022 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:02:58.558022 initrd-setup-root-after-ignition[1048]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:02:58.564121 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:02:58.568596 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:02:58.569447 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:02:58.574066 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:02:58.634736 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:02:58.634903 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:02:58.636276 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:02:58.641185 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:02:58.644219 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:02:58.645156 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:02:58.683260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:02:58.688542 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:02:58.711762 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:02:58.712546 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:02:58.716123 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:02:58.716649 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:02:58.716799 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:02:58.724949 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:02:58.726170 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:02:58.730457 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:02:58.733243 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:02:58.734053 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:02:58.742787 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:02:58.743542 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:02:58.747278 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:02:58.750023 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:02:58.750584 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:02:58.757309 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:02:58.760370 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:02:58.760479 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:02:58.765453 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:02:58.768772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:02:58.769599 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:02:58.775410 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:02:58.776144 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:02:58.776282 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:02:58.782724 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:02:58.782922 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:02:58.783720 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:02:58.788235 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:02:58.794764 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:02:58.795544 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:02:58.800103 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:02:58.802630 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:02:58.802756 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:02:58.805483 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:02:58.805580 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:02:58.808284 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:02:58.808437 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:02:58.811256 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:02:58.811374 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:02:58.815547 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:02:58.823695 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:02:58.826344 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:02:58.826461 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:02:58.827348 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:02:58.827448 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:02:58.838094 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:02:58.838209 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:02:58.859278 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:02:58.877196 ignition[1072]: INFO : Ignition 2.22.0
Dec 16 13:02:58.877196 ignition[1072]: INFO : Stage: umount
Dec 16 13:02:58.879833 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:02:58.879833 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:02:58.879833 ignition[1072]: INFO : umount: umount passed
Dec 16 13:02:58.879833 ignition[1072]: INFO : Ignition finished successfully
Dec 16 13:02:58.883060 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:02:58.883217 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:02:58.884532 systemd[1]: Stopped target network.target - Network.
Dec 16 13:02:58.888174 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:02:58.888242 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:02:58.889017 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:02:58.889066 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:02:58.894170 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:02:58.894232 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:02:58.896753 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:02:58.896806 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:02:58.897458 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:02:58.903231 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:02:58.920080 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:02:58.920235 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:02:58.926623 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:02:58.926996 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:02:58.927126 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:02:58.932844 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:02:58.933642 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:02:58.934415 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:02:58.934475 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:02:58.942750 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:02:58.944197 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:02:58.944253 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:02:58.947152 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:02:58.947202 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:02:58.954026 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:02:58.954076 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:02:58.955206 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:02:58.955254 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:02:58.962461 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:02:58.966895 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:02:58.966969 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:02:58.986820 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:02:58.987007 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:02:58.998597 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:02:58.998822 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:02:58.999929 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:02:58.999995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:02:59.006689 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:02:59.006910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:02:59.007852 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:02:59.007954 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:02:59.010634 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:02:59.010708 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:02:59.014013 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:02:59.014080 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:02:59.019980 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:02:59.020042 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:02:59.026034 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:02:59.026095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:02:59.031933 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:02:59.034065 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:02:59.034126 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:02:59.037712 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:02:59.037766 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:02:59.043076 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:02:59.043130 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:02:59.048324 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:02:59.048377 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:02:59.052463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:02:59.052518 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:02:59.059509 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:02:59.059570 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 13:02:59.059615 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:02:59.059685 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:02:59.074418 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:02:59.074559 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:02:59.075312 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:02:59.080064 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:02:59.109258 systemd[1]: Switching root.
Dec 16 13:02:59.147510 systemd-journald[202]: Journal stopped
Dec 16 13:03:00.683997 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:03:00.684071 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:03:00.684089 kernel: SELinux: policy capability open_perms=1
Dec 16 13:03:00.684100 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:03:00.684112 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:03:00.684126 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:03:00.684137 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:03:00.684149 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:03:00.684160 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:03:00.684171 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:03:00.684191 kernel: audit: type=1403 audit(1765890179.801:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:03:00.684208 systemd[1]: Successfully loaded SELinux policy in 58.688ms.
Dec 16 13:03:00.684222 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.434ms.
Dec 16 13:03:00.684236 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:03:00.684258 systemd[1]: Detected virtualization kvm.
Dec 16 13:03:00.684269 systemd[1]: Detected architecture x86-64.
Dec 16 13:03:00.684287 systemd[1]: Detected first boot.
Dec 16 13:03:00.684299 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:03:00.684312 zram_generator::config[1119]: No configuration found.
Dec 16 13:03:00.684328 kernel: Guest personality initialized and is inactive
Dec 16 13:03:00.684340 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:03:00.684351 kernel: Initialized host personality
Dec 16 13:03:00.684362 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:03:00.684374 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:03:00.684386 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:03:00.684399 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:03:00.684417 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:03:00.684432 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:03:00.684444 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:03:00.684457 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:03:00.684469 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:03:00.684482 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:03:00.684494 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:03:00.684507 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:03:00.684520 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:03:00.684533 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:03:00.684553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:03:00.684566 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:03:00.684578 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:03:00.684591 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:03:00.684603 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:03:00.684616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:03:00.684629 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:03:00.684646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:03:00.684659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:03:00.684690 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:03:00.684702 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:03:00.684717 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:03:00.684729 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:03:00.684741 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:03:00.684754 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:03:00.684766 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:03:00.684777 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:03:00.684796 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:03:00.684809 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:03:00.684821 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:03:00.684833 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:03:00.684845 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:03:00.684858 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:03:00.684870 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:03:00.684882 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:03:00.684894 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:03:00.684916 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:03:00.684928 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:00.684941 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:03:00.684952 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:03:00.684965 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:03:00.684977 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:03:00.684991 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:03:00.685003 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:03:00.685021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:00.685033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:03:00.685045 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:03:00.685057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:00.685069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:03:00.685081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:00.685093 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:03:00.685104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:00.685117 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:03:00.685134 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:03:00.685146 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:03:00.685158 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:03:00.685170 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:03:00.685183 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:00.685195 kernel: fuse: init (API version 7.41)
Dec 16 13:03:00.685206 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:03:00.685218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:03:00.685235 kernel: ACPI: bus type drm_connector registered
Dec 16 13:03:00.685254 kernel: loop: module loaded
Dec 16 13:03:00.685268 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:03:00.685286 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:03:00.685299 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:03:00.685311 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:03:00.685328 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:03:00.685361 systemd-journald[1205]: Collecting audit messages is disabled.
Dec 16 13:03:00.685384 systemd[1]: Stopped verity-setup.service.
Dec 16 13:03:00.685403 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:00.685415 systemd-journald[1205]: Journal started
Dec 16 13:03:00.685437 systemd-journald[1205]: Runtime Journal (/run/log/journal/4e43485b28ea4727ac9ce0aa6011bc1f) is 6M, max 48.1M, 42.1M free.
Dec 16 13:03:00.346519 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:03:00.372717 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 13:03:00.373191 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:03:00.693388 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:03:00.694177 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:03:00.696031 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:03:00.698002 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:03:00.699956 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:03:00.702123 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:03:00.704016 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:03:00.705892 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:03:00.708051 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:03:00.710361 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:03:00.710626 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:03:00.712863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:00.713126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:00.715379 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:03:00.715607 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:03:00.717585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:00.717827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:00.720018 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:03:00.720237 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:03:00.722257 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:00.722491 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:00.724543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:03:00.726801 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:03:00.729084 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:03:00.731388 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:03:00.745968 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:03:00.749387 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:03:00.752206 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:03:00.753974 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:03:00.754073 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:03:00.756781 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:03:00.767779 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:03:00.769497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:00.770743 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:03:00.775371 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:03:00.777269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:03:00.785769 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:03:00.790570 systemd-journald[1205]: Time spent on flushing to /var/log/journal/4e43485b28ea4727ac9ce0aa6011bc1f is 30.459ms for 1056 entries.
Dec 16 13:03:00.790570 systemd-journald[1205]: System Journal (/var/log/journal/4e43485b28ea4727ac9ce0aa6011bc1f) is 8M, max 195.6M, 187.6M free.
Dec 16 13:03:00.856686 systemd-journald[1205]: Received client request to flush runtime journal.
Dec 16 13:03:00.856749 kernel: loop0: detected capacity change from 0 to 110984
Dec 16 13:03:00.788786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:03:00.789895 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:03:00.794436 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:03:00.813014 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:03:00.823552 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:03:00.827756 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:03:00.829967 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:03:00.844993 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:03:00.849922 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:03:00.858720 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:03:00.861183 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:03:00.867685 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:03:00.872079 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Dec 16 13:03:00.872095 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Dec 16 13:03:00.873821 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:03:00.884708 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:03:00.888333 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:03:00.898696 kernel: loop1: detected capacity change from 0 to 128560
Dec 16 13:03:00.914539 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:03:00.928480 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:03:00.932194 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:03:00.933695 kernel: loop2: detected capacity change from 0 to 219144
Dec 16 13:03:00.956159 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Dec 16 13:03:00.956180 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Dec 16 13:03:00.961342 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:03:00.962686 kernel: loop3: detected capacity change from 0 to 110984
Dec 16 13:03:00.978693 kernel: loop4: detected capacity change from 0 to 128560
Dec 16 13:03:00.991735 kernel: loop5: detected capacity change from 0 to 219144
Dec 16 13:03:01.006588 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 16 13:03:01.007339 (sd-merge)[1263]: Merged extensions into '/usr'.
Dec 16 13:03:01.019942 systemd[1]: Reload requested from client PID 1239 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:03:01.019956 systemd[1]: Reloading...
Dec 16 13:03:01.094721 zram_generator::config[1290]: No configuration found.
Dec 16 13:03:01.239777 ldconfig[1234]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:03:01.341238 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:03:01.341854 systemd[1]: Reloading finished in 321 ms.
Dec 16 13:03:01.375613 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:03:01.378056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:03:01.411586 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:03:01.414332 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:03:01.424613 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:03:01.435737 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:03:01.437909 systemd[1]: Reload requested from client PID 1328 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:03:01.438083 systemd[1]: Reloading...
Dec 16 13:03:01.443000 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:03:01.443147 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:03:01.443496 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:03:01.443788 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:03:01.444732 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:03:01.445003 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Dec 16 13:03:01.445077 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Dec 16 13:03:01.449765 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:03:01.449778 systemd-tmpfiles[1329]: Skipping /boot
Dec 16 13:03:01.462335 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:03:01.462353 systemd-tmpfiles[1329]: Skipping /boot
Dec 16 13:03:01.495750 zram_generator::config[1357]: No configuration found.
Dec 16 13:03:01.502350 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Dec 16 13:03:01.699737 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 13:03:01.704701 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:03:01.705683 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:03:01.737757 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 16 13:03:01.738072 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 13:03:01.741684 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 13:03:01.776307 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:03:01.776505 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 13:03:01.779130 systemd[1]: Reloading finished in 340 ms.
Dec 16 13:03:01.789455 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:03:01.794010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:03:01.862652 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:01.867373 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:03:01.874610 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:03:01.876992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:01.892199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:01.898152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:01.902656 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:01.903519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:01.910276 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:03:01.912373 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:01.915346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:03:01.919316 kernel: kvm_amd: TSC scaling supported
Dec 16 13:03:01.919381 kernel: kvm_amd: Nested Virtualization enabled
Dec 16 13:03:01.919399 kernel: kvm_amd: Nested Paging enabled
Dec 16 13:03:01.919421 kernel: kvm_amd: LBR virtualization supported
Dec 16 13:03:01.925702 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 16 13:03:01.925738 kernel: kvm_amd: Virtual GIF supported
Dec 16 13:03:01.941411 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:03:01.946396 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:03:01.950140 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:03:01.955788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:01.958744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:01.968865 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:03:01.970739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:01.971038 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:01.972552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:01.972881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:01.975477 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:01.975807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:01.976347 augenrules[1478]: No rules
Dec 16 13:03:01.978426 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:03:01.979959 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:03:01.983010 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:03:01.991898 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:03:01.996733 kernel: EDAC MC: Ver: 3.0.0
Dec 16 13:03:02.012937 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:03:02.016989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:02.018230 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:03:02.019450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:02.020820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:02.023899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:03:02.031950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:02.035941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:02.036698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:02.036748 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:02.037927 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:03:02.041821 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:03:02.043408 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:03:02.043443 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:02.044473 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:03:02.046135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:02.048426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:02.048651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:02.050898 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:03:02.051114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:03:02.053815 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:02.054070 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:02.056187 augenrules[1495]: /sbin/augenrules: No change
Dec 16 13:03:02.056716 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:02.056938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:02.060316 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:03:02.068868 augenrules[1523]: No rules
Dec 16 13:03:02.068992 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:03:02.069061 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:03:02.071231 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 13:03:02.073355 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:03:02.073758 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:03:02.095942 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:03:02.168957 systemd-networkd[1463]: lo: Link UP
Dec 16 13:03:02.168969 systemd-networkd[1463]: lo: Gained carrier
Dec 16 13:03:02.170681 systemd-networkd[1463]: Enumeration completed
Dec 16 13:03:02.170782 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:03:02.171912 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:03:02.171926 systemd-networkd[1463]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:03:02.172834 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 13:03:02.173612 systemd-networkd[1463]: eth0: Link UP
Dec 16 13:03:02.173811 systemd-networkd[1463]: eth0: Gained carrier
Dec 16 13:03:02.173833 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:03:02.175022 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:03:02.177900 systemd-resolved[1470]: Positive Trust Anchors:
Dec 16 13:03:02.177914 systemd-resolved[1470]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:03:02.177944 systemd-resolved[1470]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:03:02.178450 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:03:02.181392 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:03:02.181634 systemd-resolved[1470]: Defaulting to hostname 'linux'.
Dec 16 13:03:02.183529 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:03:02.185541 systemd[1]: Reached target network.target - Network.
Dec 16 13:03:02.186976 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:03:02.189093 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:03:02.190879 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:03:02.192968 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:03:02.193738 systemd-networkd[1463]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 13:03:02.194433 systemd-timesyncd[1531]: Network configuration changed, trying to establish connection.
Dec 16 13:03:02.194952 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:03:03.591240 systemd-resolved[1470]: Clock change detected. Flushing caches.
Dec 16 13:03:03.591358 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 16 13:03:03.591472 systemd-timesyncd[1531]: Initial clock synchronization to Tue 2025-12-16 13:03:03.591190 UTC.
Dec 16 13:03:03.592179 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:03:03.593982 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:03:03.595983 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:03:03.597968 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:03:03.598002 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:03:03.599443 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:03:03.602018 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:03:03.605600 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:03:03.609363 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:03:03.611807 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:03:03.613869 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:03:03.621631 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:03:03.623962 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:03:03.626927 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:03:03.629945 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:03:03.631517 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:03:03.633265 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:03:03.633882 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:03:03.635388 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:03:03.638316 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:03:03.640881 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:03:03.645779 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:03:03.649106 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:03:03.650743 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:03:03.651860 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:03:03.655325 jq[1550]: false
Dec 16 13:03:03.655807 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:03:03.658494 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:03:03.663170 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:03:03.669580 extend-filesystems[1551]: Found /dev/vda6
Dec 16 13:03:03.669799 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:03:03.672851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:03:03.673363 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:03:03.674030 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:03:03.674898 oslogin_cache_refresh[1552]: Refreshing passwd entry cache
Dec 16 13:03:03.675988 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing passwd entry cache
Dec 16 13:03:03.678029 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:03:03.680351 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:03:03.682895 extend-filesystems[1551]: Found /dev/vda9
Dec 16 13:03:03.685283 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting users, quitting
Dec 16 13:03:03.685283 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:03:03.685283 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing group entry cache
Dec 16 13:03:03.684647 oslogin_cache_refresh[1552]: Failure getting users, quitting
Dec 16 13:03:03.684674 oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:03:03.684768 oslogin_cache_refresh[1552]: Refreshing group entry cache
Dec 16 13:03:03.687041 extend-filesystems[1551]: Checking size of /dev/vda9
Dec 16 13:03:03.689720 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:03:03.691807 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting groups, quitting
Dec 16 13:03:03.691807 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:03:03.691799 oslogin_cache_refresh[1552]: Failure getting groups, quitting
Dec 16 13:03:03.691814 oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:03:03.692268 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:03:03.692537 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:03:03.693427 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:03:03.693851 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:03:03.696825 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:03:03.702743 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:03:03.704815 extend-filesystems[1551]: Resized partition /dev/vda9
Dec 16 13:03:03.705129 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:03:03.706158 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:03:03.706938 jq[1566]: true
Dec 16 13:03:03.710105 update_engine[1565]: I20251216 13:03:03.709640 1565 main.cc:92] Flatcar Update Engine starting
Dec 16 13:03:03.717725 extend-filesystems[1579]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 13:03:03.721678 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:03:03.728862 jq[1580]: true
Dec 16 13:03:03.735067 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 16 13:03:03.811721 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 16 13:03:03.818140 dbus-daemon[1548]: [system] SELinux support is enabled
Dec 16 13:03:03.837938 update_engine[1565]: I20251216 13:03:03.825124 1565 update_check_scheduler.cc:74] Next update check in 11m47s
Dec 16 13:03:03.818615 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:03:03.823241 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:03:03.823265 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:03:03.826977 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:03:03.826991 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:03:03.830289 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:03:03.834865 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:03:03.838583 systemd-logind[1560]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 13:03:03.838613 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 13:03:03.839346 systemd-logind[1560]: New seat seat0.
Dec 16 13:03:03.840130 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 16 13:03:03.840130 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 16 13:03:03.840130 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 16 13:03:03.846980 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:03:03.850344 extend-filesystems[1551]: Resized filesystem in /dev/vda9
Dec 16 13:03:03.852538 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:03:03.852866 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:03:03.860513 bash[1608]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:03:03.863016 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:03:03.866613 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 13:03:03.893746 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:03:03.994302 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 13:03:04.025481 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:03:04.029071 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:03:04.049242 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:03:04.049557 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:03:04.053193 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 13:03:04.067269 containerd[1583]: time="2025-12-16T13:03:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:03:04.067982 containerd[1583]: time="2025-12-16T13:03:04.067944328Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:03:04.076356 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 13:03:04.080053 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:03:04.080320 containerd[1583]: time="2025-12-16T13:03:04.080261832Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.419µs"
Dec 16 13:03:04.080320 containerd[1583]: time="2025-12-16T13:03:04.080309060Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:03:04.080388 containerd[1583]: time="2025-12-16T13:03:04.080330190Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:03:04.080611 containerd[1583]: time="2025-12-16T13:03:04.080582974Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:03:04.080611 containerd[1583]: time="2025-12-16T13:03:04.080603973Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:03:04.080651 containerd[1583]: time="2025-12-16T13:03:04.080631335Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:03:04.080764 containerd[1583]: time="2025-12-16T13:03:04.080734689Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:03:04.080764 containerd[1583]: time="2025-12-16T13:03:04.080753033Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:03:04.081128 containerd[1583]: time="2025-12-16T13:03:04.081096107Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:03:04.081128 containerd[1583]: time="2025-12-16T13:03:04.081117517Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:03:04.081174 containerd[1583]: time="2025-12-16T13:03:04.081129379Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:03:04.081174 containerd[1583]: time="2025-12-16T13:03:04.081138606Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:03:04.081303 containerd[1583]: time="2025-12-16T13:03:04.081274952Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:03:04.081560 containerd[1583]: time="2025-12-16T13:03:04.081538947Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:03:04.081583 containerd[1583]: time="2025-12-16T13:03:04.081572520Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:03:04.081605 containerd[1583]: time="2025-12-16T13:03:04.081583441Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:03:04.081647 containerd[1583]: time="2025-12-16T13:03:04.081629717Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:03:04.081964 containerd[1583]: time="2025-12-16T13:03:04.081938386Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:03:04.082061 containerd[1583]: time="2025-12-16T13:03:04.082023576Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:03:04.083239 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:03:04.085275 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:03:04.090710 containerd[1583]: time="2025-12-16T13:03:04.090643435Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:03:04.090780 containerd[1583]: time="2025-12-16T13:03:04.090751748Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:03:04.090807 containerd[1583]: time="2025-12-16T13:03:04.090778508Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:03:04.090807 containerd[1583]: time="2025-12-16T13:03:04.090795380Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:03:04.090868 containerd[1583]: time="2025-12-16T13:03:04.090810748Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:03:04.091391 containerd[1583]: time="2025-12-16T13:03:04.091276402Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:03:04.091391 containerd[1583]: time="2025-12-16T13:03:04.091345572Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:03:04.091483 containerd[1583]: time="2025-12-16T13:03:04.091371190Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:03:04.091625 containerd[1583]: time="2025-12-16T13:03:04.091530458Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:03:04.091625 containerd[1583]: time="2025-12-16T13:03:04.091576334Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:03:04.091625 containerd[1583]: time="2025-12-16T13:03:04.091592885Z" level=info 
msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:03:04.091729 containerd[1583]: time="2025-12-16T13:03:04.091610448Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:03:04.092229 containerd[1583]: time="2025-12-16T13:03:04.092178093Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:03:04.092263 containerd[1583]: time="2025-12-16T13:03:04.092235811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:03:04.092290 containerd[1583]: time="2025-12-16T13:03:04.092263884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:03:04.092290 containerd[1583]: time="2025-12-16T13:03:04.092281116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:03:04.092333 containerd[1583]: time="2025-12-16T13:03:04.092297287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:03:04.092333 containerd[1583]: time="2025-12-16T13:03:04.092312976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:03:04.092376 containerd[1583]: time="2025-12-16T13:03:04.092330860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:03:04.092376 containerd[1583]: time="2025-12-16T13:03:04.092346679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:03:04.092376 containerd[1583]: time="2025-12-16T13:03:04.092360515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:03:04.092440 containerd[1583]: time="2025-12-16T13:03:04.092375654Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 
Dec 16 13:03:04.092440 containerd[1583]: time="2025-12-16T13:03:04.092392796Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:03:04.092478 containerd[1583]: time="2025-12-16T13:03:04.092456565Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:03:04.092499 containerd[1583]: time="2025-12-16T13:03:04.092477765Z" level=info msg="Start snapshots syncer" Dec 16 13:03:04.092561 containerd[1583]: time="2025-12-16T13:03:04.092513342Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:03:04.092993 containerd[1583]: time="2025-12-16T13:03:04.092922309Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefine
dVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:03:04.093168 containerd[1583]: time="2025-12-16T13:03:04.093001037Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:03:04.093168 containerd[1583]: time="2025-12-16T13:03:04.093069455Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:03:04.093280 containerd[1583]: time="2025-12-16T13:03:04.093253310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:03:04.093328 containerd[1583]: time="2025-12-16T13:03:04.093289718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:03:04.093328 containerd[1583]: time="2025-12-16T13:03:04.093306730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:03:04.093328 containerd[1583]: time="2025-12-16T13:03:04.093321638Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:03:04.093416 containerd[1583]: time="2025-12-16T13:03:04.093340413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:03:04.093416 containerd[1583]: time="2025-12-16T13:03:04.093357916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:03:04.093416 containerd[1583]: 
time="2025-12-16T13:03:04.093373465Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:03:04.093416 containerd[1583]: time="2025-12-16T13:03:04.093405606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:03:04.093510 containerd[1583]: time="2025-12-16T13:03:04.093422547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:03:04.093510 containerd[1583]: time="2025-12-16T13:03:04.093438267Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:03:04.093510 containerd[1583]: time="2025-12-16T13:03:04.093490385Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:03:04.093598 containerd[1583]: time="2025-12-16T13:03:04.093511043Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:03:04.093598 containerd[1583]: time="2025-12-16T13:03:04.093524759Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:03:04.093598 containerd[1583]: time="2025-12-16T13:03:04.093538916Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:03:04.093598 containerd[1583]: time="2025-12-16T13:03:04.093552541Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:03:04.093598 containerd[1583]: time="2025-12-16T13:03:04.093565696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:03:04.093598 containerd[1583]: time="2025-12-16T13:03:04.093592997Z" level=info msg="loading plugin" 
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:03:04.093776 containerd[1583]: time="2025-12-16T13:03:04.093616481Z" level=info msg="runtime interface created" Dec 16 13:03:04.093776 containerd[1583]: time="2025-12-16T13:03:04.093627171Z" level=info msg="created NRI interface" Dec 16 13:03:04.093776 containerd[1583]: time="2025-12-16T13:03:04.093639174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:03:04.093776 containerd[1583]: time="2025-12-16T13:03:04.093652839Z" level=info msg="Connect containerd service" Dec 16 13:03:04.093776 containerd[1583]: time="2025-12-16T13:03:04.093677305Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:03:04.094936 containerd[1583]: time="2025-12-16T13:03:04.094875934Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:03:04.243146 containerd[1583]: time="2025-12-16T13:03:04.243063742Z" level=info msg="Start subscribing containerd event" Dec 16 13:03:04.243146 containerd[1583]: time="2025-12-16T13:03:04.243139944Z" level=info msg="Start recovering state" Dec 16 13:03:04.243335 containerd[1583]: time="2025-12-16T13:03:04.243291819Z" level=info msg="Start event monitor" Dec 16 13:03:04.243335 containerd[1583]: time="2025-12-16T13:03:04.243306377Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:03:04.243335 containerd[1583]: time="2025-12-16T13:03:04.243314522Z" level=info msg="Start streaming server" Dec 16 13:03:04.243335 containerd[1583]: time="2025-12-16T13:03:04.243333928Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:03:04.243451 containerd[1583]: time="2025-12-16T13:03:04.243342334Z" level=info msg="runtime interface starting up..." 
Dec 16 13:03:04.243451 containerd[1583]: time="2025-12-16T13:03:04.243352303Z" level=info msg="starting plugins..." Dec 16 13:03:04.243451 containerd[1583]: time="2025-12-16T13:03:04.243369104Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:03:04.243451 containerd[1583]: time="2025-12-16T13:03:04.243366740Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:03:04.243451 containerd[1583]: time="2025-12-16T13:03:04.243438524Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:03:04.243615 containerd[1583]: time="2025-12-16T13:03:04.243585871Z" level=info msg="containerd successfully booted in 0.177424s" Dec 16 13:03:04.243758 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:03:05.535912 systemd-networkd[1463]: eth0: Gained IPv6LL Dec 16 13:03:05.539054 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:03:05.542295 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:03:05.546151 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 13:03:05.549868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:03:05.595367 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:03:05.657441 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 13:03:05.657777 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 13:03:05.660432 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:03:05.661003 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:03:06.729620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:03:06.732449 systemd[1]: Reached target multi-user.target - Multi-User System. 
Dec 16 13:03:06.734855 systemd[1]: Startup finished in 3.440s (kernel) + 6.131s (initrd) + 5.594s (userspace) = 15.166s. Dec 16 13:03:06.751187 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:03:07.355845 kubelet[1680]: E1216 13:03:07.355776 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:03:07.360026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:03:07.360230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:03:07.360599 systemd[1]: kubelet.service: Consumed 1.578s CPU time, 256.2M memory peak. Dec 16 13:03:08.371287 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:03:08.372554 systemd[1]: Started sshd@0-10.0.0.61:22-10.0.0.1:56480.service - OpenSSH per-connection server daemon (10.0.0.1:56480). Dec 16 13:03:08.451764 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 56480 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:03:08.453738 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:03:08.460508 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:03:08.461796 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:03:08.468082 systemd-logind[1560]: New session 1 of user core. Dec 16 13:03:08.483038 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:03:08.486281 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 16 13:03:08.510191 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:03:08.512813 systemd-logind[1560]: New session c1 of user core. Dec 16 13:03:08.660489 systemd[1698]: Queued start job for default target default.target. Dec 16 13:03:08.682979 systemd[1698]: Created slice app.slice - User Application Slice. Dec 16 13:03:08.683004 systemd[1698]: Reached target paths.target - Paths. Dec 16 13:03:08.683045 systemd[1698]: Reached target timers.target - Timers. Dec 16 13:03:08.684636 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:03:08.697452 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:03:08.697581 systemd[1698]: Reached target sockets.target - Sockets. Dec 16 13:03:08.697625 systemd[1698]: Reached target basic.target - Basic System. Dec 16 13:03:08.697664 systemd[1698]: Reached target default.target - Main User Target. Dec 16 13:03:08.697715 systemd[1698]: Startup finished in 177ms. Dec 16 13:03:08.698060 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:03:08.699659 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:03:08.760994 systemd[1]: Started sshd@1-10.0.0.61:22-10.0.0.1:56494.service - OpenSSH per-connection server daemon (10.0.0.1:56494). Dec 16 13:03:08.814415 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 56494 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:03:08.815946 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:03:08.820932 systemd-logind[1560]: New session 2 of user core. Dec 16 13:03:08.834998 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 16 13:03:08.890406 sshd[1712]: Connection closed by 10.0.0.1 port 56494 Dec 16 13:03:08.890859 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Dec 16 13:03:08.903502 systemd[1]: sshd@1-10.0.0.61:22-10.0.0.1:56494.service: Deactivated successfully. Dec 16 13:03:08.905526 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:03:08.906290 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:03:08.909157 systemd[1]: Started sshd@2-10.0.0.61:22-10.0.0.1:56496.service - OpenSSH per-connection server daemon (10.0.0.1:56496). Dec 16 13:03:08.909882 systemd-logind[1560]: Removed session 2. Dec 16 13:03:08.960189 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 56496 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:03:08.961834 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:03:08.966372 systemd-logind[1560]: New session 3 of user core. Dec 16 13:03:08.973843 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:03:09.023047 sshd[1721]: Connection closed by 10.0.0.1 port 56496 Dec 16 13:03:09.023472 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Dec 16 13:03:09.031482 systemd[1]: sshd@2-10.0.0.61:22-10.0.0.1:56496.service: Deactivated successfully. Dec 16 13:03:09.033365 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:03:09.034096 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:03:09.037040 systemd[1]: Started sshd@3-10.0.0.61:22-10.0.0.1:56510.service - OpenSSH per-connection server daemon (10.0.0.1:56510). Dec 16 13:03:09.037579 systemd-logind[1560]: Removed session 3. 
Dec 16 13:03:09.094256 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 56510 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:03:09.095884 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:03:09.100482 systemd-logind[1560]: New session 4 of user core. Dec 16 13:03:09.109839 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:03:09.162867 sshd[1730]: Connection closed by 10.0.0.1 port 56510 Dec 16 13:03:09.163463 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Dec 16 13:03:09.176644 systemd[1]: sshd@3-10.0.0.61:22-10.0.0.1:56510.service: Deactivated successfully. Dec 16 13:03:09.178709 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:03:09.179433 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:03:09.182267 systemd[1]: Started sshd@4-10.0.0.61:22-10.0.0.1:56514.service - OpenSSH per-connection server daemon (10.0.0.1:56514). Dec 16 13:03:09.182945 systemd-logind[1560]: Removed session 4. Dec 16 13:03:09.235899 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 56514 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:03:09.237408 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:03:09.241887 systemd-logind[1560]: New session 5 of user core. Dec 16 13:03:09.258877 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 13:03:09.319044 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:03:09.319412 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:03:09.336676 sudo[1740]: pam_unix(sudo:session): session closed for user root Dec 16 13:03:09.338785 sshd[1739]: Connection closed by 10.0.0.1 port 56514 Dec 16 13:03:09.339312 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Dec 16 13:03:09.361524 systemd[1]: sshd@4-10.0.0.61:22-10.0.0.1:56514.service: Deactivated successfully. Dec 16 13:03:09.363835 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:03:09.364716 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:03:09.367983 systemd[1]: Started sshd@5-10.0.0.61:22-10.0.0.1:56520.service - OpenSSH per-connection server daemon (10.0.0.1:56520). Dec 16 13:03:09.368531 systemd-logind[1560]: Removed session 5. Dec 16 13:03:09.429327 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 56520 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:03:09.431244 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:03:09.437389 systemd-logind[1560]: New session 6 of user core. Dec 16 13:03:09.447997 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 13:03:09.503100 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:03:09.503505 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:03:09.510637 sudo[1751]: pam_unix(sudo:session): session closed for user root Dec 16 13:03:09.519497 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:03:09.519909 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:03:09.531722 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:03:09.579020 augenrules[1773]: No rules Dec 16 13:03:09.580775 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:03:09.581153 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:03:09.582441 sudo[1750]: pam_unix(sudo:session): session closed for user root Dec 16 13:03:09.584169 sshd[1749]: Connection closed by 10.0.0.1 port 56520 Dec 16 13:03:09.584600 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Dec 16 13:03:09.594005 systemd[1]: sshd@5-10.0.0.61:22-10.0.0.1:56520.service: Deactivated successfully. Dec 16 13:03:09.596055 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:03:09.596965 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:03:09.600142 systemd[1]: Started sshd@6-10.0.0.61:22-10.0.0.1:56532.service - OpenSSH per-connection server daemon (10.0.0.1:56532). Dec 16 13:03:09.600805 systemd-logind[1560]: Removed session 6. Dec 16 13:03:09.657720 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 56532 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:03:09.659540 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:03:09.664272 systemd-logind[1560]: New session 7 of user core. 
Dec 16 13:03:09.678847 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:03:09.733011 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:03:09.733355 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:03:09.747849 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 13:03:09.793104 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 13:03:09.793398 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 13:03:10.545530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:03:10.545703 systemd[1]: kubelet.service: Consumed 1.578s CPU time, 256.2M memory peak. Dec 16 13:03:10.547982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:03:10.576352 systemd[1]: Reload requested from client PID 1829 ('systemctl') (unit session-7.scope)... Dec 16 13:03:10.576366 systemd[1]: Reloading... Dec 16 13:03:10.693751 zram_generator::config[1874]: No configuration found. Dec 16 13:03:11.040621 systemd[1]: Reloading finished in 463 ms. Dec 16 13:03:11.115654 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:03:11.115792 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:03:11.116123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:03:11.116172 systemd[1]: kubelet.service: Consumed 205ms CPU time, 98.2M memory peak. Dec 16 13:03:11.117867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:03:11.305578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:03:11.310520 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:03:11.354978 kubelet[1919]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:03:11.354978 kubelet[1919]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:03:11.355377 kubelet[1919]: I1216 13:03:11.355011 1919 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:03:12.127497 kubelet[1919]: I1216 13:03:12.127436 1919 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:03:12.127497 kubelet[1919]: I1216 13:03:12.127468 1919 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:03:12.127497 kubelet[1919]: I1216 13:03:12.127497 1919 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:03:12.127497 kubelet[1919]: I1216 13:03:12.127504 1919 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:03:12.127762 kubelet[1919]: I1216 13:03:12.127754 1919 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:03:12.130544 kubelet[1919]: I1216 13:03:12.130509 1919 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:03:12.136329 kubelet[1919]: I1216 13:03:12.136157 1919 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:03:12.141894 kubelet[1919]: I1216 13:03:12.141860 1919 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 13:03:12.142811 kubelet[1919]: I1216 13:03:12.142767 1919 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:03:12.143043 kubelet[1919]: I1216 13:03:12.142803 1919 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan
","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:03:12.143043 kubelet[1919]: I1216 13:03:12.143038 1919 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:03:12.143161 kubelet[1919]: I1216 13:03:12.143049 1919 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:03:12.143161 kubelet[1919]: I1216 13:03:12.143155 1919 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:03:13.004161 kubelet[1919]: I1216 13:03:13.004115 1919 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:03:13.004786 kubelet[1919]: I1216 13:03:13.004395 1919 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:03:13.004786 kubelet[1919]: I1216 13:03:13.004412 1919 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:03:13.004786 kubelet[1919]: I1216 13:03:13.004442 1919 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:03:13.004786 kubelet[1919]: I1216 13:03:13.004478 1919 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:03:13.004786 kubelet[1919]: E1216 13:03:13.004560 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:13.005069 kubelet[1919]: E1216 13:03:13.005051 1919 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:13.008229 kubelet[1919]: I1216 13:03:13.008178 1919 kuberuntime_manager.go:291] "Container runtime initialized" 
containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:03:13.008837 kubelet[1919]: I1216 13:03:13.008803 1919 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:03:13.008837 kubelet[1919]: I1216 13:03:13.008834 1919 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:03:13.008921 kubelet[1919]: W1216 13:03:13.008901 1919 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:03:13.009096 kubelet[1919]: E1216 13:03:13.009059 1919 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.61\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:03:13.009195 kubelet[1919]: E1216 13:03:13.009137 1919 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:03:13.013926 kubelet[1919]: I1216 13:03:13.013039 1919 server.go:1262] "Started kubelet" Dec 16 13:03:13.013926 kubelet[1919]: I1216 13:03:13.013129 1919 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:03:13.013926 kubelet[1919]: I1216 13:03:13.013384 1919 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:03:13.013926 kubelet[1919]: I1216 13:03:13.013440 1919 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:03:13.013926 kubelet[1919]: I1216 13:03:13.013772 1919 
server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:03:13.014349 kubelet[1919]: I1216 13:03:13.014126 1919 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:03:13.014349 kubelet[1919]: I1216 13:03:13.014232 1919 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:03:13.015142 kubelet[1919]: I1216 13:03:13.015114 1919 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:03:13.017708 kubelet[1919]: E1216 13:03:13.017657 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:13.017767 kubelet[1919]: I1216 13:03:13.017746 1919 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:03:13.017882 kubelet[1919]: I1216 13:03:13.017860 1919 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:03:13.017923 kubelet[1919]: I1216 13:03:13.017914 1919 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:03:13.018984 kubelet[1919]: E1216 13:03:13.018948 1919 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:03:13.019225 kubelet[1919]: I1216 13:03:13.019201 1919 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:03:13.019317 kubelet[1919]: I1216 13:03:13.019296 1919 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:03:13.022178 kubelet[1919]: I1216 13:03:13.022152 1919 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:03:13.047518 kubelet[1919]: E1216 13:03:13.046606 1919 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:03:13.049712 kubelet[1919]: E1216 13:03:13.047841 1919 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.61\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 16 13:03:13.056569 kubelet[1919]: E1216 13:03:13.046663 1919 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.61.1881b3c642e2e716 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.61,UID:10.0.0.61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.61,},FirstTimestamp:2025-12-16 13:03:13.01300815 +0000 
UTC m=+1.698435321,LastTimestamp:2025-12-16 13:03:13.01300815 +0000 UTC m=+1.698435321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.61,}" Dec 16 13:03:13.059031 kubelet[1919]: E1216 13:03:13.058870 1919 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.61.1881b3c6433d4511 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.61,UID:10.0.0.61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.61,},FirstTimestamp:2025-12-16 13:03:13.018930449 +0000 UTC m=+1.704357609,LastTimestamp:2025-12-16 13:03:13.018930449 +0000 UTC m=+1.704357609,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.61,}" Dec 16 13:03:13.069954 kubelet[1919]: I1216 13:03:13.069914 1919 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:03:13.069954 kubelet[1919]: I1216 13:03:13.069942 1919 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:03:13.070103 kubelet[1919]: I1216 13:03:13.069980 1919 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:03:13.075742 kubelet[1919]: I1216 13:03:13.075706 1919 policy_none.go:49] "None policy: Start" Dec 16 13:03:13.075742 kubelet[1919]: I1216 13:03:13.075739 1919 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:03:13.075846 kubelet[1919]: I1216 13:03:13.075754 1919 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:03:13.078633 kubelet[1919]: I1216 13:03:13.077810 1919 policy_none.go:47] 
"Start" Dec 16 13:03:13.084254 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:03:13.100321 kubelet[1919]: I1216 13:03:13.100274 1919 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:03:13.101270 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:03:13.102777 kubelet[1919]: I1216 13:03:13.102757 1919 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:03:13.102860 kubelet[1919]: I1216 13:03:13.102848 1919 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:03:13.102949 kubelet[1919]: I1216 13:03:13.102935 1919 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:03:13.103120 kubelet[1919]: E1216 13:03:13.103093 1919 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:03:13.111704 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:03:13.113647 kubelet[1919]: E1216 13:03:13.113605 1919 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:03:13.113920 kubelet[1919]: I1216 13:03:13.113878 1919 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:03:13.113920 kubelet[1919]: I1216 13:03:13.113898 1919 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:03:13.114334 kubelet[1919]: I1216 13:03:13.114307 1919 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:03:13.115059 kubelet[1919]: E1216 13:03:13.115022 1919 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:03:13.115059 kubelet[1919]: E1216 13:03:13.115060 1919 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.61\" not found" Dec 16 13:03:13.129271 kubelet[1919]: I1216 13:03:13.129238 1919 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 16 13:03:13.129415 kubelet[1919]: I1216 13:03:13.129394 1919 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Dec 16 13:03:13.215635 kubelet[1919]: I1216 13:03:13.215580 1919 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.61" Dec 16 13:03:13.223091 kubelet[1919]: I1216 13:03:13.223066 1919 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.61" Dec 16 13:03:13.223091 kubelet[1919]: E1216 13:03:13.223090 1919 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"10.0.0.61\": node \"10.0.0.61\" not found" Dec 16 13:03:13.240483 kubelet[1919]: E1216 13:03:13.240433 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:13.320307 sudo[1786]: pam_unix(sudo:session): session closed for user root Dec 16 13:03:13.322230 sshd[1785]: Connection closed by 10.0.0.1 port 56532 Dec 16 13:03:13.322573 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Dec 16 13:03:13.325933 systemd[1]: sshd@6-10.0.0.61:22-10.0.0.1:56532.service: Deactivated successfully. Dec 16 13:03:13.328326 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:03:13.328565 systemd[1]: session-7.scope: Consumed 593ms CPU time, 74.5M memory peak. 
Dec 16 13:03:13.331326 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:03:13.332935 systemd-logind[1560]: Removed session 7. Dec 16 13:03:13.340998 kubelet[1919]: E1216 13:03:13.340938 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:13.441731 kubelet[1919]: E1216 13:03:13.441615 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:13.541849 kubelet[1919]: E1216 13:03:13.541787 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:13.643019 kubelet[1919]: E1216 13:03:13.642761 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:13.743451 kubelet[1919]: E1216 13:03:13.743398 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:13.844215 kubelet[1919]: E1216 13:03:13.844147 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:13.944840 kubelet[1919]: E1216 13:03:13.944775 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:14.005567 kubelet[1919]: E1216 13:03:14.005504 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:14.045081 kubelet[1919]: E1216 13:03:14.045013 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:14.145810 kubelet[1919]: E1216 13:03:14.145748 1919 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" Dec 16 13:03:14.247203 kubelet[1919]: I1216 13:03:14.247061 1919 kuberuntime_manager.go:1828] 
"Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 16 13:03:14.247422 containerd[1583]: time="2025-12-16T13:03:14.247362016Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:03:14.248087 kubelet[1919]: I1216 13:03:14.247677 1919 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 16 13:03:15.006491 kubelet[1919]: I1216 13:03:15.006439 1919 apiserver.go:52] "Watching apiserver" Dec 16 13:03:15.006949 kubelet[1919]: E1216 13:03:15.006453 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:15.023016 systemd[1]: Created slice kubepods-besteffort-pode3cae56e_5b00_4720_98a1_42518ee14fd7.slice - libcontainer container kubepods-besteffort-pode3cae56e_5b00_4720_98a1_42518ee14fd7.slice. Dec 16 13:03:15.023887 kubelet[1919]: E1216 13:03:15.023754 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:15.028761 kubelet[1919]: I1216 13:03:15.027904 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlwf\" (UniqueName: \"kubernetes.io/projected/fde59053-f0c0-4b62-b3f3-900cee51cff8-kube-api-access-wqlwf\") pod \"csi-node-driver-5xz69\" (UID: \"fde59053-f0c0-4b62-b3f3-900cee51cff8\") " pod="calico-system/csi-node-driver-5xz69" Dec 16 13:03:15.028761 kubelet[1919]: I1216 13:03:15.027949 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e3cae56e-5b00-4720-98a1-42518ee14fd7-cni-bin-dir\") pod \"calico-node-mkcdg\" (UID: 
\"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.028761 kubelet[1919]: I1216 13:03:15.027964 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e3cae56e-5b00-4720-98a1-42518ee14fd7-cni-log-dir\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.028761 kubelet[1919]: I1216 13:03:15.027983 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e3cae56e-5b00-4720-98a1-42518ee14fd7-flexvol-driver-host\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.028761 kubelet[1919]: I1216 13:03:15.028022 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3cae56e-5b00-4720-98a1-42518ee14fd7-tigera-ca-bundle\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.029053 kubelet[1919]: I1216 13:03:15.028132 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fde59053-f0c0-4b62-b3f3-900cee51cff8-registration-dir\") pod \"csi-node-driver-5xz69\" (UID: \"fde59053-f0c0-4b62-b3f3-900cee51cff8\") " pod="calico-system/csi-node-driver-5xz69" Dec 16 13:03:15.029053 kubelet[1919]: I1216 13:03:15.028176 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fde59053-f0c0-4b62-b3f3-900cee51cff8-varrun\") pod \"csi-node-driver-5xz69\" (UID: \"fde59053-f0c0-4b62-b3f3-900cee51cff8\") " 
pod="calico-system/csi-node-driver-5xz69" Dec 16 13:03:15.029053 kubelet[1919]: I1216 13:03:15.028194 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e3cae56e-5b00-4720-98a1-42518ee14fd7-var-lib-calico\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.029053 kubelet[1919]: I1216 13:03:15.028209 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e3cae56e-5b00-4720-98a1-42518ee14fd7-var-run-calico\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.029053 kubelet[1919]: I1216 13:03:15.028227 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt4xf\" (UniqueName: \"kubernetes.io/projected/e3cae56e-5b00-4720-98a1-42518ee14fd7-kube-api-access-tt4xf\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.029224 kubelet[1919]: I1216 13:03:15.028243 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e3cae56e-5b00-4720-98a1-42518ee14fd7-cni-net-dir\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.029224 kubelet[1919]: I1216 13:03:15.028260 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e3cae56e-5b00-4720-98a1-42518ee14fd7-node-certs\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.029224 
kubelet[1919]: I1216 13:03:15.028275 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fde59053-f0c0-4b62-b3f3-900cee51cff8-kubelet-dir\") pod \"csi-node-driver-5xz69\" (UID: \"fde59053-f0c0-4b62-b3f3-900cee51cff8\") " pod="calico-system/csi-node-driver-5xz69" Dec 16 13:03:15.029224 kubelet[1919]: I1216 13:03:15.028316 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3cae56e-5b00-4720-98a1-42518ee14fd7-lib-modules\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.029224 kubelet[1919]: I1216 13:03:15.028370 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e3cae56e-5b00-4720-98a1-42518ee14fd7-policysync\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.029393 kubelet[1919]: I1216 13:03:15.028396 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3cae56e-5b00-4720-98a1-42518ee14fd7-xtables-lock\") pod \"calico-node-mkcdg\" (UID: \"e3cae56e-5b00-4720-98a1-42518ee14fd7\") " pod="calico-system/calico-node-mkcdg" Dec 16 13:03:15.029393 kubelet[1919]: I1216 13:03:15.028426 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fde59053-f0c0-4b62-b3f3-900cee51cff8-socket-dir\") pod \"csi-node-driver-5xz69\" (UID: \"fde59053-f0c0-4b62-b3f3-900cee51cff8\") " pod="calico-system/csi-node-driver-5xz69" Dec 16 13:03:15.036373 systemd[1]: Created slice 
kubepods-besteffort-podb12bd70a_1061_4966_b9c0_115289e0a474.slice - libcontainer container kubepods-besteffort-podb12bd70a_1061_4966_b9c0_115289e0a474.slice. Dec 16 13:03:15.119178 kubelet[1919]: I1216 13:03:15.119138 1919 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:03:15.129360 kubelet[1919]: I1216 13:03:15.129284 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b12bd70a-1061-4966-b9c0-115289e0a474-kube-proxy\") pod \"kube-proxy-wdm79\" (UID: \"b12bd70a-1061-4966-b9c0-115289e0a474\") " pod="kube-system/kube-proxy-wdm79" Dec 16 13:03:15.129360 kubelet[1919]: I1216 13:03:15.129355 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg8z4\" (UniqueName: \"kubernetes.io/projected/b12bd70a-1061-4966-b9c0-115289e0a474-kube-api-access-rg8z4\") pod \"kube-proxy-wdm79\" (UID: \"b12bd70a-1061-4966-b9c0-115289e0a474\") " pod="kube-system/kube-proxy-wdm79" Dec 16 13:03:15.129461 kubelet[1919]: I1216 13:03:15.129421 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b12bd70a-1061-4966-b9c0-115289e0a474-lib-modules\") pod \"kube-proxy-wdm79\" (UID: \"b12bd70a-1061-4966-b9c0-115289e0a474\") " pod="kube-system/kube-proxy-wdm79" Dec 16 13:03:15.129894 kubelet[1919]: I1216 13:03:15.129522 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b12bd70a-1061-4966-b9c0-115289e0a474-xtables-lock\") pod \"kube-proxy-wdm79\" (UID: \"b12bd70a-1061-4966-b9c0-115289e0a474\") " pod="kube-system/kube-proxy-wdm79" Dec 16 13:03:15.130602 kubelet[1919]: E1216 13:03:15.130567 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Dec 16 13:03:15.130602 kubelet[1919]: W1216 13:03:15.130582 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.130602 kubelet[1919]: E1216 13:03:15.130597 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:03:15.134420 kubelet[1919]: E1216 13:03:15.134376 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.134420 kubelet[1919]: W1216 13:03:15.134411 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.134538 kubelet[1919]: E1216 13:03:15.134439 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:03:15.142674 kubelet[1919]: E1216 13:03:15.142479 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.142674 kubelet[1919]: W1216 13:03:15.142503 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.142674 kubelet[1919]: E1216 13:03:15.142526 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:03:15.143090 kubelet[1919]: E1216 13:03:15.143061 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.143090 kubelet[1919]: W1216 13:03:15.143076 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.143090 kubelet[1919]: E1216 13:03:15.143088 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:03:15.231111 kubelet[1919]: E1216 13:03:15.231065 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.231111 kubelet[1919]: W1216 13:03:15.231093 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.231111 kubelet[1919]: E1216 13:03:15.231116 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:03:15.231407 kubelet[1919]: E1216 13:03:15.231388 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.231407 kubelet[1919]: W1216 13:03:15.231403 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.231465 kubelet[1919]: E1216 13:03:15.231414 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:03:15.231714 kubelet[1919]: E1216 13:03:15.231675 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.231747 kubelet[1919]: W1216 13:03:15.231713 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.231747 kubelet[1919]: E1216 13:03:15.231727 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:03:15.231972 kubelet[1919]: E1216 13:03:15.231953 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.231972 kubelet[1919]: W1216 13:03:15.231967 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.232030 kubelet[1919]: E1216 13:03:15.231978 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:03:15.232250 kubelet[1919]: E1216 13:03:15.232230 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.232250 kubelet[1919]: W1216 13:03:15.232247 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.232318 kubelet[1919]: E1216 13:03:15.232261 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:03:15.232496 kubelet[1919]: E1216 13:03:15.232477 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.232496 kubelet[1919]: W1216 13:03:15.232491 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.232550 kubelet[1919]: E1216 13:03:15.232502 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:03:15.232745 kubelet[1919]: E1216 13:03:15.232727 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.232745 kubelet[1919]: W1216 13:03:15.232741 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.232796 kubelet[1919]: E1216 13:03:15.232752 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:03:15.232985 kubelet[1919]: E1216 13:03:15.232966 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.232985 kubelet[1919]: W1216 13:03:15.232981 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.233044 kubelet[1919]: E1216 13:03:15.232993 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:03:15.233208 kubelet[1919]: E1216 13:03:15.233190 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.233208 kubelet[1919]: W1216 13:03:15.233204 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.233257 kubelet[1919]: E1216 13:03:15.233214 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:03:15.233442 kubelet[1919]: E1216 13:03:15.233424 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.233442 kubelet[1919]: W1216 13:03:15.233437 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.233497 kubelet[1919]: E1216 13:03:15.233449 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:03:15.233725 kubelet[1919]: E1216 13:03:15.233708 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.233725 kubelet[1919]: W1216 13:03:15.233722 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.233774 kubelet[1919]: E1216 13:03:15.233733 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:03:15.245060 kubelet[1919]: E1216 13:03:15.245035 1919 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:03:15.245060 kubelet[1919]: W1216 13:03:15.245049 1919 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:03:15.245060 kubelet[1919]: E1216 13:03:15.245059 1919 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:03:15.340119 containerd[1583]: time="2025-12-16T13:03:15.339989142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mkcdg,Uid:e3cae56e-5b00-4720-98a1-42518ee14fd7,Namespace:calico-system,Attempt:0,}" Dec 16 13:03:15.343299 containerd[1583]: time="2025-12-16T13:03:15.343254817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdm79,Uid:b12bd70a-1061-4966-b9c0-115289e0a474,Namespace:kube-system,Attempt:0,}" Dec 16 13:03:15.967025 containerd[1583]: time="2025-12-16T13:03:15.966956964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:03:15.968941 containerd[1583]: time="2025-12-16T13:03:15.968844755Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:03:15.971366 containerd[1583]: time="2025-12-16T13:03:15.971333964Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:03:15.974485 containerd[1583]: time="2025-12-16T13:03:15.974394845Z" level=info 
msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:03:15.976233 containerd[1583]: time="2025-12-16T13:03:15.976175756Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:03:15.979375 containerd[1583]: time="2025-12-16T13:03:15.979212252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:03:15.980342 containerd[1583]: time="2025-12-16T13:03:15.980294311Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 634.027835ms" Dec 16 13:03:15.981333 containerd[1583]: time="2025-12-16T13:03:15.981278387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 633.584543ms" Dec 16 13:03:16.006916 kubelet[1919]: E1216 13:03:16.006851 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:16.014778 containerd[1583]: time="2025-12-16T13:03:16.014721857Z" level=info msg="connecting to shim a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef" 
address="unix:///run/containerd/s/fa72413e6c7e90c6fe966dbec058dfcd54d9c2ec74c71660f6b491307ed3f593" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:03:16.018058 containerd[1583]: time="2025-12-16T13:03:16.018021497Z" level=info msg="connecting to shim 9fbd1a61e93adb75b83342be869ec76966baa1d4002416ba46fb8d3accbcaef5" address="unix:///run/containerd/s/a39e61366da046c2eae127e1b68af0dad4025652e847076e6fbe90e6b9d609e5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:03:16.047846 systemd[1]: Started cri-containerd-a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef.scope - libcontainer container a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef. Dec 16 13:03:16.051032 systemd[1]: Started cri-containerd-9fbd1a61e93adb75b83342be869ec76966baa1d4002416ba46fb8d3accbcaef5.scope - libcontainer container 9fbd1a61e93adb75b83342be869ec76966baa1d4002416ba46fb8d3accbcaef5. Dec 16 13:03:16.081021 containerd[1583]: time="2025-12-16T13:03:16.080973156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mkcdg,Uid:e3cae56e-5b00-4720-98a1-42518ee14fd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef\"" Dec 16 13:03:16.084902 containerd[1583]: time="2025-12-16T13:03:16.084849497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:03:16.085820 containerd[1583]: time="2025-12-16T13:03:16.085794509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdm79,Uid:b12bd70a-1061-4966-b9c0-115289e0a474,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fbd1a61e93adb75b83342be869ec76966baa1d4002416ba46fb8d3accbcaef5\"" Dec 16 13:03:16.140795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392906976.mount: Deactivated successfully. 
Dec 16 13:03:17.007561 kubelet[1919]: E1216 13:03:17.007524 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:17.103600 kubelet[1919]: E1216 13:03:17.103550 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:18.008317 kubelet[1919]: E1216 13:03:18.008234 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:18.208565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2478619562.mount: Deactivated successfully. Dec 16 13:03:18.278932 containerd[1583]: time="2025-12-16T13:03:18.278792657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:18.279538 containerd[1583]: time="2025-12-16T13:03:18.279511966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Dec 16 13:03:18.280700 containerd[1583]: time="2025-12-16T13:03:18.280630264Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:18.285992 containerd[1583]: time="2025-12-16T13:03:18.285937419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:18.286417 containerd[1583]: time="2025-12-16T13:03:18.286376001Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.201489856s" Dec 16 13:03:18.286417 containerd[1583]: time="2025-12-16T13:03:18.286409434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:03:18.287705 containerd[1583]: time="2025-12-16T13:03:18.287661533Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 13:03:18.291756 containerd[1583]: time="2025-12-16T13:03:18.291712160Z" level=info msg="CreateContainer within sandbox \"a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:03:18.302001 containerd[1583]: time="2025-12-16T13:03:18.301939714Z" level=info msg="Container 71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:18.310745 containerd[1583]: time="2025-12-16T13:03:18.310680970Z" level=info msg="CreateContainer within sandbox \"a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef\"" Dec 16 13:03:18.311551 containerd[1583]: time="2025-12-16T13:03:18.311505457Z" level=info msg="StartContainer for \"71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef\"" Dec 16 13:03:18.313227 containerd[1583]: time="2025-12-16T13:03:18.313186700Z" level=info msg="connecting to shim 71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef" 
address="unix:///run/containerd/s/fa72413e6c7e90c6fe966dbec058dfcd54d9c2ec74c71660f6b491307ed3f593" protocol=ttrpc version=3 Dec 16 13:03:18.349932 systemd[1]: Started cri-containerd-71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef.scope - libcontainer container 71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef. Dec 16 13:03:18.453055 containerd[1583]: time="2025-12-16T13:03:18.453001713Z" level=info msg="StartContainer for \"71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef\" returns successfully" Dec 16 13:03:18.472361 systemd[1]: cri-containerd-71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef.scope: Deactivated successfully. Dec 16 13:03:18.476289 containerd[1583]: time="2025-12-16T13:03:18.476230504Z" level=info msg="received container exit event container_id:\"71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef\" id:\"71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef\" pid:2112 exited_at:{seconds:1765890198 nanos:475760492}" Dec 16 13:03:19.008636 kubelet[1919]: E1216 13:03:19.008570 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:19.104198 kubelet[1919]: E1216 13:03:19.104113 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:19.188429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71ace333e1c8fc86e71c90bfd65f9f1b2ed256297e08aed759fcf9557be983ef-rootfs.mount: Deactivated successfully. Dec 16 13:03:19.826762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1418388459.mount: Deactivated successfully. 
Dec 16 13:03:20.008957 kubelet[1919]: E1216 13:03:20.008856 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:20.134571 containerd[1583]: time="2025-12-16T13:03:20.134426856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:20.135405 containerd[1583]: time="2025-12-16T13:03:20.135370637Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Dec 16 13:03:20.136619 containerd[1583]: time="2025-12-16T13:03:20.136553986Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:20.138448 containerd[1583]: time="2025-12-16T13:03:20.138409837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:20.139090 containerd[1583]: time="2025-12-16T13:03:20.139045570Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.851350364s" Dec 16 13:03:20.139129 containerd[1583]: time="2025-12-16T13:03:20.139093009Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 13:03:20.140936 containerd[1583]: time="2025-12-16T13:03:20.140417473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:03:20.144725 containerd[1583]: 
time="2025-12-16T13:03:20.144656524Z" level=info msg="CreateContainer within sandbox \"9fbd1a61e93adb75b83342be869ec76966baa1d4002416ba46fb8d3accbcaef5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:03:20.154480 containerd[1583]: time="2025-12-16T13:03:20.154431159Z" level=info msg="Container d09c1daa890ca8b6eee66ae83d2a1a9a10b319e027a917164e604e32a6845a35: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:20.168892 containerd[1583]: time="2025-12-16T13:03:20.168835547Z" level=info msg="CreateContainer within sandbox \"9fbd1a61e93adb75b83342be869ec76966baa1d4002416ba46fb8d3accbcaef5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d09c1daa890ca8b6eee66ae83d2a1a9a10b319e027a917164e604e32a6845a35\"" Dec 16 13:03:20.169560 containerd[1583]: time="2025-12-16T13:03:20.169486799Z" level=info msg="StartContainer for \"d09c1daa890ca8b6eee66ae83d2a1a9a10b319e027a917164e604e32a6845a35\"" Dec 16 13:03:20.171322 containerd[1583]: time="2025-12-16T13:03:20.171283970Z" level=info msg="connecting to shim d09c1daa890ca8b6eee66ae83d2a1a9a10b319e027a917164e604e32a6845a35" address="unix:///run/containerd/s/a39e61366da046c2eae127e1b68af0dad4025652e847076e6fbe90e6b9d609e5" protocol=ttrpc version=3 Dec 16 13:03:20.208924 systemd[1]: Started cri-containerd-d09c1daa890ca8b6eee66ae83d2a1a9a10b319e027a917164e604e32a6845a35.scope - libcontainer container d09c1daa890ca8b6eee66ae83d2a1a9a10b319e027a917164e604e32a6845a35. 
Dec 16 13:03:20.554178 containerd[1583]: time="2025-12-16T13:03:20.554127204Z" level=info msg="StartContainer for \"d09c1daa890ca8b6eee66ae83d2a1a9a10b319e027a917164e604e32a6845a35\" returns successfully" Dec 16 13:03:21.009210 kubelet[1919]: E1216 13:03:21.009175 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:21.103786 kubelet[1919]: E1216 13:03:21.103728 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:22.009470 kubelet[1919]: E1216 13:03:22.009423 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:23.010165 kubelet[1919]: E1216 13:03:23.010102 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:23.104484 kubelet[1919]: E1216 13:03:23.104409 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:24.011077 kubelet[1919]: E1216 13:03:24.011016 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:24.412473 containerd[1583]: time="2025-12-16T13:03:24.412316394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:24.413068 containerd[1583]: time="2025-12-16T13:03:24.413024392Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 16 13:03:24.414167 containerd[1583]: time="2025-12-16T13:03:24.414122622Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:24.416602 containerd[1583]: time="2025-12-16T13:03:24.416563972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:24.417350 containerd[1583]: time="2025-12-16T13:03:24.417313167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.276864235s" Dec 16 13:03:24.417350 containerd[1583]: time="2025-12-16T13:03:24.417346790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:03:24.421876 containerd[1583]: time="2025-12-16T13:03:24.421842853Z" level=info msg="CreateContainer within sandbox \"a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:03:24.430499 containerd[1583]: time="2025-12-16T13:03:24.430445950Z" level=info msg="Container af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:24.440727 containerd[1583]: time="2025-12-16T13:03:24.440668575Z" level=info msg="CreateContainer within sandbox \"a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e\"" Dec 16 13:03:24.441244 containerd[1583]: time="2025-12-16T13:03:24.441205101Z" level=info msg="StartContainer for \"af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e\"" Dec 16 13:03:24.444304 containerd[1583]: time="2025-12-16T13:03:24.444267315Z" level=info msg="connecting to shim af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e" address="unix:///run/containerd/s/fa72413e6c7e90c6fe966dbec058dfcd54d9c2ec74c71660f6b491307ed3f593" protocol=ttrpc version=3 Dec 16 13:03:24.497987 systemd[1]: Started cri-containerd-af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e.scope - libcontainer container af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e. Dec 16 13:03:24.600328 containerd[1583]: time="2025-12-16T13:03:24.600279750Z" level=info msg="StartContainer for \"af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e\" returns successfully" Dec 16 13:03:25.012542 kubelet[1919]: E1216 13:03:25.012114 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:25.104287 kubelet[1919]: E1216 13:03:25.103911 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:25.154809 kubelet[1919]: I1216 13:03:25.154668 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wdm79" podStartSLOduration=8.101433304 podStartE2EDuration="12.15463931s" podCreationTimestamp="2025-12-16 13:03:13 +0000 UTC" firstStartedPulling="2025-12-16 13:03:16.086994751 +0000 UTC m=+4.772421911" 
lastFinishedPulling="2025-12-16 13:03:20.140200757 +0000 UTC m=+8.825627917" observedRunningTime="2025-12-16 13:03:21.506530521 +0000 UTC m=+10.191957681" watchObservedRunningTime="2025-12-16 13:03:25.15463931 +0000 UTC m=+13.840066460" Dec 16 13:03:26.012229 kubelet[1919]: E1216 13:03:26.012192 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:26.288275 containerd[1583]: time="2025-12-16T13:03:26.288123683Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:03:26.290769 systemd[1]: cri-containerd-af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e.scope: Deactivated successfully. Dec 16 13:03:26.291165 systemd[1]: cri-containerd-af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e.scope: Consumed 616ms CPU time, 191.3M memory peak, 171.3M written to disk. Dec 16 13:03:26.293205 containerd[1583]: time="2025-12-16T13:03:26.293162414Z" level=info msg="received container exit event container_id:\"af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e\" id:\"af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e\" pid:2345 exited_at:{seconds:1765890206 nanos:292977848}" Dec 16 13:03:26.310408 kubelet[1919]: I1216 13:03:26.310377 1919 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:03:26.315569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af033e769377e4cb0fb55ba4adadc9b4fd6b225c99efa998b6da830815c39f5e-rootfs.mount: Deactivated successfully. 
Dec 16 13:03:27.012412 kubelet[1919]: E1216 13:03:27.012351 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:27.109866 systemd[1]: Created slice kubepods-besteffort-podfde59053_f0c0_4b62_b3f3_900cee51cff8.slice - libcontainer container kubepods-besteffort-podfde59053_f0c0_4b62_b3f3_900cee51cff8.slice. Dec 16 13:03:27.144407 containerd[1583]: time="2025-12-16T13:03:27.144356982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5xz69,Uid:fde59053-f0c0-4b62-b3f3-900cee51cff8,Namespace:calico-system,Attempt:0,}" Dec 16 13:03:27.158756 containerd[1583]: time="2025-12-16T13:03:27.158722648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:03:27.214612 containerd[1583]: time="2025-12-16T13:03:27.214549222Z" level=error msg="Failed to destroy network for sandbox \"af7c0c145adb9c146e617f1d917c941d5b2abe4de5e2c688312e6dc167caa97c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:03:27.216526 containerd[1583]: time="2025-12-16T13:03:27.216450769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5xz69,Uid:fde59053-f0c0-4b62-b3f3-900cee51cff8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7c0c145adb9c146e617f1d917c941d5b2abe4de5e2c688312e6dc167caa97c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:03:27.216860 kubelet[1919]: E1216 13:03:27.216812 1919 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"af7c0c145adb9c146e617f1d917c941d5b2abe4de5e2c688312e6dc167caa97c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:03:27.216938 kubelet[1919]: E1216 13:03:27.216890 1919 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7c0c145adb9c146e617f1d917c941d5b2abe4de5e2c688312e6dc167caa97c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5xz69" Dec 16 13:03:27.216938 kubelet[1919]: E1216 13:03:27.216917 1919 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7c0c145adb9c146e617f1d917c941d5b2abe4de5e2c688312e6dc167caa97c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5xz69" Dec 16 13:03:27.217030 kubelet[1919]: E1216 13:03:27.216995 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5xz69_calico-system(fde59053-f0c0-4b62-b3f3-900cee51cff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5xz69_calico-system(fde59053-f0c0-4b62-b3f3-900cee51cff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af7c0c145adb9c146e617f1d917c941d5b2abe4de5e2c688312e6dc167caa97c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5xz69" 
podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:27.217645 systemd[1]: run-netns-cni\x2d3d26c420\x2d54dc\x2dce54\x2d5904\x2d90345ec44d44.mount: Deactivated successfully. Dec 16 13:03:28.013347 kubelet[1919]: E1216 13:03:28.013257 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:29.013747 kubelet[1919]: E1216 13:03:29.013656 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:29.642765 systemd[1]: Created slice kubepods-besteffort-pod973acc74_a10f_4ba8_bd06_553cae62eb87.slice - libcontainer container kubepods-besteffort-pod973acc74_a10f_4ba8_bd06_553cae62eb87.slice. Dec 16 13:03:29.728117 kubelet[1919]: I1216 13:03:29.727904 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vbf8\" (UniqueName: \"kubernetes.io/projected/973acc74-a10f-4ba8-bd06-553cae62eb87-kube-api-access-7vbf8\") pod \"nginx-deployment-bb8f74bfb-g4xn5\" (UID: \"973acc74-a10f-4ba8-bd06-553cae62eb87\") " pod="default/nginx-deployment-bb8f74bfb-g4xn5" Dec 16 13:03:30.014395 kubelet[1919]: E1216 13:03:30.014323 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:30.078169 containerd[1583]: time="2025-12-16T13:03:30.078108711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-g4xn5,Uid:973acc74-a10f-4ba8-bd06-553cae62eb87,Namespace:default,Attempt:0,}" Dec 16 13:03:31.015352 kubelet[1919]: E1216 13:03:31.015268 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:31.332457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251529760.mount: Deactivated successfully. 
Dec 16 13:03:31.613203 containerd[1583]: time="2025-12-16T13:03:31.612534856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:31.615547 containerd[1583]: time="2025-12-16T13:03:31.615494287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 16 13:03:31.615744 containerd[1583]: time="2025-12-16T13:03:31.615618701Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:31.618098 containerd[1583]: time="2025-12-16T13:03:31.618044922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:31.618858 containerd[1583]: time="2025-12-16T13:03:31.618813433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.460054277s" Dec 16 13:03:31.619817 containerd[1583]: time="2025-12-16T13:03:31.619781529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:03:31.642425 containerd[1583]: time="2025-12-16T13:03:31.642308713Z" level=info msg="CreateContainer within sandbox \"a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:03:31.644640 containerd[1583]: time="2025-12-16T13:03:31.644571067Z" level=error msg="Failed to 
destroy network for sandbox \"3ed5ff4dad4ced26d0cd58851c529f2ce51b7fafe03263de06bde50bce8eb203\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:03:31.646449 systemd[1]: run-netns-cni\x2dca2bf74e\x2d7d2f\x2d26e8\x2d3a5b\x2d101723663d1b.mount: Deactivated successfully. Dec 16 13:03:31.647529 kubelet[1919]: E1216 13:03:31.647190 1919 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed5ff4dad4ced26d0cd58851c529f2ce51b7fafe03263de06bde50bce8eb203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:03:31.647529 kubelet[1919]: E1216 13:03:31.647249 1919 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed5ff4dad4ced26d0cd58851c529f2ce51b7fafe03263de06bde50bce8eb203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-g4xn5" Dec 16 13:03:31.647529 kubelet[1919]: E1216 13:03:31.647272 1919 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed5ff4dad4ced26d0cd58851c529f2ce51b7fafe03263de06bde50bce8eb203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-g4xn5" Dec 16 13:03:31.647633 containerd[1583]: time="2025-12-16T13:03:31.646912529Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-g4xn5,Uid:973acc74-a10f-4ba8-bd06-553cae62eb87,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed5ff4dad4ced26d0cd58851c529f2ce51b7fafe03263de06bde50bce8eb203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:03:31.647716 kubelet[1919]: E1216 13:03:31.647351 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-bb8f74bfb-g4xn5_default(973acc74-a10f-4ba8-bd06-553cae62eb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-bb8f74bfb-g4xn5_default(973acc74-a10f-4ba8-bd06-553cae62eb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ed5ff4dad4ced26d0cd58851c529f2ce51b7fafe03263de06bde50bce8eb203\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-bb8f74bfb-g4xn5" podUID="973acc74-a10f-4ba8-bd06-553cae62eb87" Dec 16 13:03:31.653885 containerd[1583]: time="2025-12-16T13:03:31.653835224Z" level=info msg="Container bea03f23fc7ba1e162d40ad50ff5364657781f0707e364f1c8bc8f3b3f9365f5: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:31.666358 containerd[1583]: time="2025-12-16T13:03:31.666298261Z" level=info msg="CreateContainer within sandbox \"a6dc36f49af3c1b92cd1702cd39a4fca4a6de960fe79719f10321946d5cdabef\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bea03f23fc7ba1e162d40ad50ff5364657781f0707e364f1c8bc8f3b3f9365f5\"" Dec 16 13:03:31.667734 containerd[1583]: time="2025-12-16T13:03:31.666974660Z" level=info msg="StartContainer for 
\"bea03f23fc7ba1e162d40ad50ff5364657781f0707e364f1c8bc8f3b3f9365f5\"" Dec 16 13:03:31.668810 containerd[1583]: time="2025-12-16T13:03:31.668783973Z" level=info msg="connecting to shim bea03f23fc7ba1e162d40ad50ff5364657781f0707e364f1c8bc8f3b3f9365f5" address="unix:///run/containerd/s/fa72413e6c7e90c6fe966dbec058dfcd54d9c2ec74c71660f6b491307ed3f593" protocol=ttrpc version=3 Dec 16 13:03:31.693829 systemd[1]: Started cri-containerd-bea03f23fc7ba1e162d40ad50ff5364657781f0707e364f1c8bc8f3b3f9365f5.scope - libcontainer container bea03f23fc7ba1e162d40ad50ff5364657781f0707e364f1c8bc8f3b3f9365f5. Dec 16 13:03:31.798281 containerd[1583]: time="2025-12-16T13:03:31.798221902Z" level=info msg="StartContainer for \"bea03f23fc7ba1e162d40ad50ff5364657781f0707e364f1c8bc8f3b3f9365f5\" returns successfully" Dec 16 13:03:31.930180 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:03:31.930342 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 13:03:32.016369 kubelet[1919]: E1216 13:03:32.016271 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:32.187933 kubelet[1919]: I1216 13:03:32.185641 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mkcdg" podStartSLOduration=3.649415462 podStartE2EDuration="19.185595765s" podCreationTimestamp="2025-12-16 13:03:13 +0000 UTC" firstStartedPulling="2025-12-16 13:03:16.084352134 +0000 UTC m=+4.769779294" lastFinishedPulling="2025-12-16 13:03:31.620532437 +0000 UTC m=+20.305959597" observedRunningTime="2025-12-16 13:03:32.185393215 +0000 UTC m=+20.870820375" watchObservedRunningTime="2025-12-16 13:03:32.185595765 +0000 UTC m=+20.871022925" Dec 16 13:03:33.005300 kubelet[1919]: E1216 13:03:33.005230 1919 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:33.016835 kubelet[1919]: E1216 13:03:33.016744 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:33.710952 systemd-networkd[1463]: vxlan.calico: Link UP Dec 16 13:03:33.710964 systemd-networkd[1463]: vxlan.calico: Gained carrier Dec 16 13:03:34.017306 kubelet[1919]: E1216 13:03:34.017166 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:35.017679 kubelet[1919]: E1216 13:03:35.017601 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:35.359976 systemd-networkd[1463]: vxlan.calico: Gained IPv6LL Dec 16 13:03:36.018627 kubelet[1919]: E1216 13:03:36.018546 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:37.019036 kubelet[1919]: E1216 13:03:37.018952 1919 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:38.019723 kubelet[1919]: E1216 13:03:38.019631 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:39.020482 kubelet[1919]: E1216 13:03:39.020396 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:39.161959 containerd[1583]: time="2025-12-16T13:03:39.161907354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5xz69,Uid:fde59053-f0c0-4b62-b3f3-900cee51cff8,Namespace:calico-system,Attempt:0,}" Dec 16 13:03:39.272569 systemd-networkd[1463]: calib5fb1ae37fa: Link UP Dec 16 13:03:39.273264 systemd-networkd[1463]: calib5fb1ae37fa: Gained carrier Dec 16 13:03:39.287535 containerd[1583]: 2025-12-16 13:03:39.205 [INFO][2718] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.61-k8s-csi--node--driver--5xz69-eth0 csi-node-driver- calico-system fde59053-f0c0-4b62-b3f3-900cee51cff8 998 0 2025-12-16 13:03:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.61 csi-node-driver-5xz69 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib5fb1ae37fa [] [] }} ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Namespace="calico-system" Pod="csi-node-driver-5xz69" WorkloadEndpoint="10.0.0.61-k8s-csi--node--driver--5xz69-" Dec 16 13:03:39.287535 containerd[1583]: 2025-12-16 13:03:39.205 [INFO][2718] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Namespace="calico-system" Pod="csi-node-driver-5xz69" WorkloadEndpoint="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" Dec 16 13:03:39.287535 containerd[1583]: 2025-12-16 13:03:39.232 [INFO][2733] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" HandleID="k8s-pod-network.cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Workload="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.233 [INFO][2733] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" HandleID="k8s-pod-network.cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Workload="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df590), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.61", "pod":"csi-node-driver-5xz69", "timestamp":"2025-12-16 13:03:39.232870268 +0000 UTC"}, Hostname:"10.0.0.61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.233 [INFO][2733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.233 [INFO][2733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.233 [INFO][2733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.61' Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.241 [INFO][2733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" host="10.0.0.61" Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.246 [INFO][2733] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.61" Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.250 [INFO][2733] ipam/ipam.go 511: Trying affinity for 192.168.68.64/26 host="10.0.0.61" Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.252 [INFO][2733] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.64/26 host="10.0.0.61" Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.255 [INFO][2733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.64/26 host="10.0.0.61" Dec 16 13:03:39.287808 containerd[1583]: 2025-12-16 13:03:39.255 [INFO][2733] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.64/26 handle="k8s-pod-network.cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" host="10.0.0.61" Dec 16 13:03:39.288271 containerd[1583]: 2025-12-16 13:03:39.257 [INFO][2733] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8 Dec 16 13:03:39.288271 containerd[1583]: 2025-12-16 13:03:39.261 [INFO][2733] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.64/26 handle="k8s-pod-network.cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" host="10.0.0.61" Dec 16 13:03:39.288271 containerd[1583]: 2025-12-16 13:03:39.264 [INFO][2733] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.65/26] block=192.168.68.64/26 
handle="k8s-pod-network.cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" host="10.0.0.61" Dec 16 13:03:39.288271 containerd[1583]: 2025-12-16 13:03:39.264 [INFO][2733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.65/26] handle="k8s-pod-network.cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" host="10.0.0.61" Dec 16 13:03:39.288271 containerd[1583]: 2025-12-16 13:03:39.264 [INFO][2733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:03:39.288271 containerd[1583]: 2025-12-16 13:03:39.264 [INFO][2733] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.65/26] IPv6=[] ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" HandleID="k8s-pod-network.cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Workload="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" Dec 16 13:03:39.288462 containerd[1583]: 2025-12-16 13:03:39.270 [INFO][2718] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Namespace="calico-system" Pod="csi-node-driver-5xz69" WorkloadEndpoint="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.61-k8s-csi--node--driver--5xz69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fde59053-f0c0-4b62-b3f3-900cee51cff8", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.61", ContainerID:"", Pod:"csi-node-driver-5xz69", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5fb1ae37fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:03:39.288545 containerd[1583]: 2025-12-16 13:03:39.270 [INFO][2718] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.65/32] ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Namespace="calico-system" Pod="csi-node-driver-5xz69" WorkloadEndpoint="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" Dec 16 13:03:39.288545 containerd[1583]: 2025-12-16 13:03:39.270 [INFO][2718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5fb1ae37fa ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Namespace="calico-system" Pod="csi-node-driver-5xz69" WorkloadEndpoint="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" Dec 16 13:03:39.288545 containerd[1583]: 2025-12-16 13:03:39.272 [INFO][2718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Namespace="calico-system" Pod="csi-node-driver-5xz69" WorkloadEndpoint="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" Dec 16 13:03:39.288625 containerd[1583]: 2025-12-16 13:03:39.272 [INFO][2718] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" 
Namespace="calico-system" Pod="csi-node-driver-5xz69" WorkloadEndpoint="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.61-k8s-csi--node--driver--5xz69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fde59053-f0c0-4b62-b3f3-900cee51cff8", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.61", ContainerID:"cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8", Pod:"csi-node-driver-5xz69", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5fb1ae37fa", MAC:"12:e1:28:ba:05:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:03:39.288734 containerd[1583]: 2025-12-16 13:03:39.283 [INFO][2718] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" Namespace="calico-system" Pod="csi-node-driver-5xz69" 
WorkloadEndpoint="10.0.0.61-k8s-csi--node--driver--5xz69-eth0" Dec 16 13:03:39.326431 containerd[1583]: time="2025-12-16T13:03:39.326360981Z" level=info msg="connecting to shim cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8" address="unix:///run/containerd/s/a19d8e3836818bbe39e7c8df77a687916f6a2752187f7877a29fdcf4128ab3f8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:03:39.361872 systemd[1]: Started cri-containerd-cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8.scope - libcontainer container cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8. Dec 16 13:03:39.378247 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:03:39.398218 containerd[1583]: time="2025-12-16T13:03:39.398162524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5xz69,Uid:fde59053-f0c0-4b62-b3f3-900cee51cff8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cadcf4feb90cb40866bfa0de76e31f815406c8e16140d00657e6d4b0b02eebe8\"" Dec 16 13:03:39.399821 containerd[1583]: time="2025-12-16T13:03:39.399800206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:03:39.744727 containerd[1583]: time="2025-12-16T13:03:39.744624378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:03:39.745981 containerd[1583]: time="2025-12-16T13:03:39.745922147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:03:39.746073 containerd[1583]: time="2025-12-16T13:03:39.746009244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:03:39.746272 kubelet[1919]: E1216 13:03:39.746220 1919 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:03:39.746330 kubelet[1919]: E1216 13:03:39.746288 1919 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:03:39.746449 kubelet[1919]: E1216 13:03:39.746404 1919 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5xz69_calico-system(fde59053-f0c0-4b62-b3f3-900cee51cff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:03:39.747245 containerd[1583]: time="2025-12-16T13:03:39.747219116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:03:40.021298 kubelet[1919]: E1216 13:03:40.021148 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:40.084886 containerd[1583]: time="2025-12-16T13:03:40.084841446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:03:40.086066 containerd[1583]: time="2025-12-16T13:03:40.086030544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:03:40.086134 containerd[1583]: time="2025-12-16T13:03:40.086069149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:03:40.086365 kubelet[1919]: E1216 13:03:40.086313 1919 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:03:40.086477 kubelet[1919]: E1216 13:03:40.086373 1919 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:03:40.086524 kubelet[1919]: E1216 13:03:40.086483 1919 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5xz69_calico-system(fde59053-f0c0-4b62-b3f3-900cee51cff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:03:40.086587 kubelet[1919]: E1216 13:03:40.086533 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:40.188942 kubelet[1919]: E1216 13:03:40.188877 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:40.607904 systemd-networkd[1463]: calib5fb1ae37fa: Gained IPv6LL Dec 16 13:03:41.022152 kubelet[1919]: E1216 13:03:41.022077 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 16 13:03:41.190087 kubelet[1919]: E1216 13:03:41.190019 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8" Dec 16 13:03:42.022758 kubelet[1919]: E1216 13:03:42.022671 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:43.023889 kubelet[1919]: E1216 13:03:43.023841 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:43.111117 containerd[1583]: time="2025-12-16T13:03:43.111076860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-g4xn5,Uid:973acc74-a10f-4ba8-bd06-553cae62eb87,Namespace:default,Attempt:0,}" Dec 16 13:03:43.216734 systemd-networkd[1463]: cali0f459354df3: Link UP Dec 16 13:03:43.217928 systemd-networkd[1463]: cali0f459354df3: Gained carrier Dec 16 13:03:43.231183 containerd[1583]: 2025-12-16 13:03:43.149 [INFO][2803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0 nginx-deployment-bb8f74bfb- default 973acc74-a10f-4ba8-bd06-553cae62eb87 1147 0 2025-12-16 13:03:29 +0000 UTC map[app:nginx pod-template-hash:bb8f74bfb projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.61 nginx-deployment-bb8f74bfb-g4xn5 eth0 default [] [] [kns.default ksa.default.default] cali0f459354df3 [] [] }} ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Namespace="default" Pod="nginx-deployment-bb8f74bfb-g4xn5" WorkloadEndpoint="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-" Dec 16 13:03:43.231183 containerd[1583]: 2025-12-16 13:03:43.150 [INFO][2803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Namespace="default" Pod="nginx-deployment-bb8f74bfb-g4xn5" WorkloadEndpoint="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" Dec 16 13:03:43.231183 containerd[1583]: 2025-12-16 13:03:43.176 [INFO][2817] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" HandleID="k8s-pod-network.061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Workload="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.176 [INFO][2817] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" HandleID="k8s-pod-network.061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Workload="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.61", "pod":"nginx-deployment-bb8f74bfb-g4xn5", 
"timestamp":"2025-12-16 13:03:43.176214804 +0000 UTC"}, Hostname:"10.0.0.61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.176 [INFO][2817] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.176 [INFO][2817] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.177 [INFO][2817] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.61' Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.183 [INFO][2817] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" host="10.0.0.61" Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.187 [INFO][2817] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.61" Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.191 [INFO][2817] ipam/ipam.go 511: Trying affinity for 192.168.68.64/26 host="10.0.0.61" Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.193 [INFO][2817] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.64/26 host="10.0.0.61" Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.196 [INFO][2817] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.64/26 host="10.0.0.61" Dec 16 13:03:43.231437 containerd[1583]: 2025-12-16 13:03:43.196 [INFO][2817] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.64/26 handle="k8s-pod-network.061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" host="10.0.0.61" Dec 16 13:03:43.231797 containerd[1583]: 2025-12-16 13:03:43.197 [INFO][2817] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60 Dec 16 13:03:43.231797 containerd[1583]: 2025-12-16 13:03:43.202 [INFO][2817] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.64/26 handle="k8s-pod-network.061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" host="10.0.0.61" Dec 16 13:03:43.231797 containerd[1583]: 2025-12-16 13:03:43.210 [INFO][2817] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.66/26] block=192.168.68.64/26 handle="k8s-pod-network.061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" host="10.0.0.61" Dec 16 13:03:43.231797 containerd[1583]: 2025-12-16 13:03:43.210 [INFO][2817] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.66/26] handle="k8s-pod-network.061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" host="10.0.0.61" Dec 16 13:03:43.231797 containerd[1583]: 2025-12-16 13:03:43.210 [INFO][2817] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:03:43.231797 containerd[1583]: 2025-12-16 13:03:43.210 [INFO][2817] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.66/26] IPv6=[] ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" HandleID="k8s-pod-network.061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Workload="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" Dec 16 13:03:43.231976 containerd[1583]: 2025-12-16 13:03:43.213 [INFO][2803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Namespace="default" Pod="nginx-deployment-bb8f74bfb-g4xn5" WorkloadEndpoint="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"973acc74-a10f-4ba8-bd06-553cae62eb87", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.61", ContainerID:"", Pod:"nginx-deployment-bb8f74bfb-g4xn5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0f459354df3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:03:43.231976 containerd[1583]: 2025-12-16 13:03:43.213 [INFO][2803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.66/32] ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Namespace="default" Pod="nginx-deployment-bb8f74bfb-g4xn5" WorkloadEndpoint="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" Dec 16 13:03:43.232088 containerd[1583]: 2025-12-16 13:03:43.213 [INFO][2803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f459354df3 ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Namespace="default" Pod="nginx-deployment-bb8f74bfb-g4xn5" WorkloadEndpoint="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" Dec 16 13:03:43.232088 containerd[1583]: 2025-12-16 13:03:43.217 [INFO][2803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Namespace="default" Pod="nginx-deployment-bb8f74bfb-g4xn5" WorkloadEndpoint="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" Dec 16 13:03:43.232149 containerd[1583]: 2025-12-16 13:03:43.218 [INFO][2803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Namespace="default" Pod="nginx-deployment-bb8f74bfb-g4xn5" WorkloadEndpoint="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"973acc74-a10f-4ba8-bd06-553cae62eb87", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 3, 29, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.61", ContainerID:"061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60", Pod:"nginx-deployment-bb8f74bfb-g4xn5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0f459354df3", MAC:"f2:9f:81:97:8c:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:03:43.232224 containerd[1583]: 2025-12-16 13:03:43.227 [INFO][2803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" Namespace="default" Pod="nginx-deployment-bb8f74bfb-g4xn5" WorkloadEndpoint="10.0.0.61-k8s-nginx--deployment--bb8f74bfb--g4xn5-eth0" Dec 16 13:03:43.261419 containerd[1583]: time="2025-12-16T13:03:43.260065126Z" level=info msg="connecting to shim 061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60" address="unix:///run/containerd/s/287c10666581c1e562d1f81e556d7375bac35ebfe475789742922979edbfdbb3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:03:43.288925 systemd[1]: Started cri-containerd-061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60.scope - libcontainer container 061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60. 
Dec 16 13:03:43.303403 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:03:43.335842 containerd[1583]: time="2025-12-16T13:03:43.335785659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-g4xn5,Uid:973acc74-a10f-4ba8-bd06-553cae62eb87,Namespace:default,Attempt:0,} returns sandbox id \"061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60\"" Dec 16 13:03:43.336965 containerd[1583]: time="2025-12-16T13:03:43.336851995Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 16 13:03:43.576033 kubelet[1919]: I1216 13:03:43.575864 1919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:03:44.024709 kubelet[1919]: E1216 13:03:44.024635 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:44.447872 systemd-networkd[1463]: cali0f459354df3: Gained IPv6LL Dec 16 13:03:45.025363 kubelet[1919]: E1216 13:03:45.025307 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:45.731904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846974347.mount: Deactivated successfully. 
Dec 16 13:03:46.027004 kubelet[1919]: E1216 13:03:46.026847 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:47.027194 kubelet[1919]: E1216 13:03:47.027116 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:47.305935 containerd[1583]: time="2025-12-16T13:03:47.305801735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:47.306671 containerd[1583]: time="2025-12-16T13:03:47.306643014Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73312370" Dec 16 13:03:47.307791 containerd[1583]: time="2025-12-16T13:03:47.307754709Z" level=info msg="ImageCreate event name:\"sha256:34e04bb6b4bb37d45845842374be0cd181723daffb230849b1984aaeaa96faba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:47.312756 containerd[1583]: time="2025-12-16T13:03:47.312707495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:73a15a021433343835d9908f25bf01b8d42a2113a41e9c9e28b6a89b82b54f96\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:03:47.313788 containerd[1583]: time="2025-12-16T13:03:47.313731322Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:34e04bb6b4bb37d45845842374be0cd181723daffb230849b1984aaeaa96faba\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:73a15a021433343835d9908f25bf01b8d42a2113a41e9c9e28b6a89b82b54f96\", size \"73312248\" in 3.976835763s" Dec 16 13:03:47.313892 containerd[1583]: time="2025-12-16T13:03:47.313797737Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:34e04bb6b4bb37d45845842374be0cd181723daffb230849b1984aaeaa96faba\"" Dec 16 13:03:47.318699 containerd[1583]: 
time="2025-12-16T13:03:47.318643570Z" level=info msg="CreateContainer within sandbox \"061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 16 13:03:47.327102 containerd[1583]: time="2025-12-16T13:03:47.327061225Z" level=info msg="Container a2804f2af3f1a3654664acb4fdf198bcd42f49ae189144c366da54040f7d7dd5: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:03:47.333287 containerd[1583]: time="2025-12-16T13:03:47.333242527Z" level=info msg="CreateContainer within sandbox \"061caba0e8ef9f2890aa61c0bdd82831db3979322152ccaa1e0e534dcc259e60\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a2804f2af3f1a3654664acb4fdf198bcd42f49ae189144c366da54040f7d7dd5\"" Dec 16 13:03:47.333789 containerd[1583]: time="2025-12-16T13:03:47.333740214Z" level=info msg="StartContainer for \"a2804f2af3f1a3654664acb4fdf198bcd42f49ae189144c366da54040f7d7dd5\"" Dec 16 13:03:47.334662 containerd[1583]: time="2025-12-16T13:03:47.334636538Z" level=info msg="connecting to shim a2804f2af3f1a3654664acb4fdf198bcd42f49ae189144c366da54040f7d7dd5" address="unix:///run/containerd/s/287c10666581c1e562d1f81e556d7375bac35ebfe475789742922979edbfdbb3" protocol=ttrpc version=3 Dec 16 13:03:47.415851 systemd[1]: Started cri-containerd-a2804f2af3f1a3654664acb4fdf198bcd42f49ae189144c366da54040f7d7dd5.scope - libcontainer container a2804f2af3f1a3654664acb4fdf198bcd42f49ae189144c366da54040f7d7dd5. 
Dec 16 13:03:47.451150 containerd[1583]: time="2025-12-16T13:03:47.451103100Z" level=info msg="StartContainer for \"a2804f2af3f1a3654664acb4fdf198bcd42f49ae189144c366da54040f7d7dd5\" returns successfully" Dec 16 13:03:48.027675 kubelet[1919]: E1216 13:03:48.027617 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:48.273606 kubelet[1919]: I1216 13:03:48.273515 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-g4xn5" podStartSLOduration=15.295720651 podStartE2EDuration="19.273491093s" podCreationTimestamp="2025-12-16 13:03:29 +0000 UTC" firstStartedPulling="2025-12-16 13:03:43.336606136 +0000 UTC m=+32.022033296" lastFinishedPulling="2025-12-16 13:03:47.314376578 +0000 UTC m=+35.999803738" observedRunningTime="2025-12-16 13:03:48.273143423 +0000 UTC m=+36.958570583" watchObservedRunningTime="2025-12-16 13:03:48.273491093 +0000 UTC m=+36.958918264" Dec 16 13:03:49.028900 kubelet[1919]: E1216 13:03:49.028791 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:49.570974 update_engine[1565]: I20251216 13:03:49.570878 1565 update_attempter.cc:509] Updating boot flags... 
Dec 16 13:03:50.029226 kubelet[1919]: E1216 13:03:50.029156 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:51.029613 kubelet[1919]: E1216 13:03:51.029551 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:52.030225 kubelet[1919]: E1216 13:03:52.030188 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:52.031266 systemd[1]: Created slice kubepods-besteffort-pod9032de2f_6d6c_43da_967c_97a61f7e0ce4.slice - libcontainer container kubepods-besteffort-pod9032de2f_6d6c_43da_967c_97a61f7e0ce4.slice. Dec 16 13:03:52.164793 kubelet[1919]: I1216 13:03:52.164732 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9032de2f-6d6c-43da-967c-97a61f7e0ce4-data\") pod \"nfs-server-provisioner-0\" (UID: \"9032de2f-6d6c-43da-967c-97a61f7e0ce4\") " pod="default/nfs-server-provisioner-0" Dec 16 13:03:52.164793 kubelet[1919]: I1216 13:03:52.164789 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74pcz\" (UniqueName: \"kubernetes.io/projected/9032de2f-6d6c-43da-967c-97a61f7e0ce4-kube-api-access-74pcz\") pod \"nfs-server-provisioner-0\" (UID: \"9032de2f-6d6c-43da-967c-97a61f7e0ce4\") " pod="default/nfs-server-provisioner-0" Dec 16 13:03:52.815625 containerd[1583]: time="2025-12-16T13:03:52.815563771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9032de2f-6d6c-43da-967c-97a61f7e0ce4,Namespace:default,Attempt:0,}" Dec 16 13:03:53.005412 kubelet[1919]: E1216 13:03:53.005357 1919 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:53.010953 systemd-networkd[1463]: 
cali60e51b789ff: Link UP Dec 16 13:03:53.013332 systemd-networkd[1463]: cali60e51b789ff: Gained carrier Dec 16 13:03:53.030843 kubelet[1919]: E1216 13:03:53.030791 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 13:03:53.139093 containerd[1583]: 2025-12-16 13:03:52.916 [INFO][3048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.61-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 9032de2f-6d6c-43da-967c-97a61f7e0ce4 1301 0 2025-12-16 13:03:52 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-7c9b4c458c heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.61 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.61-k8s-nfs--server--provisioner--0-" Dec 16 13:03:53.139093 containerd[1583]: 2025-12-16 13:03:52.917 [INFO][3048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.61-k8s-nfs--server--provisioner--0-eth0" Dec 16 13:03:53.139093 
containerd[1583]: 2025-12-16 13:03:52.939 [INFO][3063] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" HandleID="k8s-pod-network.d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Workload="10.0.0.61-k8s-nfs--server--provisioner--0-eth0" Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.939 [INFO][3063] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" HandleID="k8s-pod-network.d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Workload="10.0.0.61-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000134800), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.61", "pod":"nfs-server-provisioner-0", "timestamp":"2025-12-16 13:03:52.939614118 +0000 UTC"}, Hostname:"10.0.0.61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.939 [INFO][3063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.939 [INFO][3063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.939 [INFO][3063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.61' Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.946 [INFO][3063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" host="10.0.0.61" Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.950 [INFO][3063] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.61" Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.954 [INFO][3063] ipam/ipam.go 511: Trying affinity for 192.168.68.64/26 host="10.0.0.61" Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.955 [INFO][3063] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.64/26 host="10.0.0.61" Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.957 [INFO][3063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.64/26 host="10.0.0.61" Dec 16 13:03:53.139645 containerd[1583]: 2025-12-16 13:03:52.957 [INFO][3063] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.64/26 handle="k8s-pod-network.d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" host="10.0.0.61" Dec 16 13:03:53.139992 containerd[1583]: 2025-12-16 13:03:52.958 [INFO][3063] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24 Dec 16 13:03:53.139992 containerd[1583]: 2025-12-16 13:03:52.982 [INFO][3063] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.64/26 handle="k8s-pod-network.d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" host="10.0.0.61" Dec 16 13:03:53.139992 containerd[1583]: 2025-12-16 13:03:53.003 [INFO][3063] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.67/26] block=192.168.68.64/26 
handle="k8s-pod-network.d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" host="10.0.0.61" Dec 16 13:03:53.139992 containerd[1583]: 2025-12-16 13:03:53.003 [INFO][3063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.67/26] handle="k8s-pod-network.d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" host="10.0.0.61" Dec 16 13:03:53.139992 containerd[1583]: 2025-12-16 13:03:53.003 [INFO][3063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:03:53.139992 containerd[1583]: 2025-12-16 13:03:53.003 [INFO][3063] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.67/26] IPv6=[] ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" HandleID="k8s-pod-network.d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Workload="10.0.0.61-k8s-nfs--server--provisioner--0-eth0" Dec 16 13:03:53.140128 containerd[1583]: 2025-12-16 13:03:53.006 [INFO][3048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.61-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.61-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9032de2f-6d6c-43da-967c-97a61f7e0ce4", ResourceVersion:"1301", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 3, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.61", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.68.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:03:53.140128 containerd[1583]: 2025-12-16 13:03:53.006 [INFO][3048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.67/32] ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.61-k8s-nfs--server--provisioner--0-eth0" Dec 16 13:03:53.140128 containerd[1583]: 2025-12-16 13:03:53.006 [INFO][3048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.61-k8s-nfs--server--provisioner--0-eth0" Dec 16 13:03:53.140128 containerd[1583]: 2025-12-16 13:03:53.014 [INFO][3048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.61-k8s-nfs--server--provisioner--0-eth0" Dec 16 13:03:53.140280 containerd[1583]: 2025-12-16 13:03:53.015 [INFO][3048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.61-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.61-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9032de2f-6d6c-43da-967c-97a61f7e0ce4", ResourceVersion:"1301", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 3, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.61", ContainerID:"d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.68.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"1a:7e:a2:e4:ec:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:03:53.140280 containerd[1583]: 2025-12-16 13:03:53.135 [INFO][3048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.61-k8s-nfs--server--provisioner--0-eth0"
Dec 16 13:03:53.438389 containerd[1583]: time="2025-12-16T13:03:53.438334326Z" level=info msg="connecting to shim d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24" address="unix:///run/containerd/s/e3891a1d774138397ef5730923165038969a06a5a43cc21930cea68e835abcf3" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:03:53.469862 systemd[1]: Started cri-containerd-d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24.scope - libcontainer container d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24.
Dec 16 13:03:53.484046 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 13:03:53.585781 containerd[1583]: time="2025-12-16T13:03:53.585730034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9032de2f-6d6c-43da-967c-97a61f7e0ce4,Namespace:default,Attempt:0,} returns sandbox id \"d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24\""
Dec 16 13:03:53.587524 containerd[1583]: time="2025-12-16T13:03:53.587475419Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 16 13:03:54.031995 kubelet[1919]: E1216 13:03:54.031940 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:03:54.047978 systemd-networkd[1463]: cali60e51b789ff: Gained IPv6LL
Dec 16 13:03:55.032898 kubelet[1919]: E1216 13:03:55.032847 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:03:56.033681 kubelet[1919]: E1216 13:03:56.033632 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:03:56.077798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150533957.mount: Deactivated successfully.
Dec 16 13:03:57.034059 kubelet[1919]: E1216 13:03:57.033999 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:03:57.741647 containerd[1583]: time="2025-12-16T13:03:57.741569857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:03:57.742375 containerd[1583]: time="2025-12-16T13:03:57.742301620Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Dec 16 13:03:57.743510 containerd[1583]: time="2025-12-16T13:03:57.743466921Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:03:57.746351 containerd[1583]: time="2025-12-16T13:03:57.746304601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:03:57.747708 containerd[1583]: time="2025-12-16T13:03:57.747431199Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.159879316s"
Dec 16 13:03:57.747708 containerd[1583]: time="2025-12-16T13:03:57.747475383Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 16 13:03:57.750268 containerd[1583]: time="2025-12-16T13:03:57.750235317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 13:03:57.755890 containerd[1583]: time="2025-12-16T13:03:57.755856425Z" level=info msg="CreateContainer within sandbox \"d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 16 13:03:57.765032 containerd[1583]: time="2025-12-16T13:03:57.764978479Z" level=info msg="Container 0948ee1dae2a7a9d6267b568d709f37b1cfff24a2b7d61593b14f9f441722d64: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:03:57.768558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772103623.mount: Deactivated successfully.
Dec 16 13:03:57.774530 containerd[1583]: time="2025-12-16T13:03:57.774478184Z" level=info msg="CreateContainer within sandbox \"d54e4fca28dca241d13d15b6abcaa16c3f0e028711e370aac84be09cc1596c24\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0948ee1dae2a7a9d6267b568d709f37b1cfff24a2b7d61593b14f9f441722d64\""
Dec 16 13:03:57.777263 containerd[1583]: time="2025-12-16T13:03:57.777226656Z" level=info msg="StartContainer for \"0948ee1dae2a7a9d6267b568d709f37b1cfff24a2b7d61593b14f9f441722d64\""
Dec 16 13:03:57.778246 containerd[1583]: time="2025-12-16T13:03:57.778214273Z" level=info msg="connecting to shim 0948ee1dae2a7a9d6267b568d709f37b1cfff24a2b7d61593b14f9f441722d64" address="unix:///run/containerd/s/e3891a1d774138397ef5730923165038969a06a5a43cc21930cea68e835abcf3" protocol=ttrpc version=3
Dec 16 13:03:57.804992 systemd[1]: Started cri-containerd-0948ee1dae2a7a9d6267b568d709f37b1cfff24a2b7d61593b14f9f441722d64.scope - libcontainer container 0948ee1dae2a7a9d6267b568d709f37b1cfff24a2b7d61593b14f9f441722d64.
Dec 16 13:03:57.841752 containerd[1583]: time="2025-12-16T13:03:57.841679775Z" level=info msg="StartContainer for \"0948ee1dae2a7a9d6267b568d709f37b1cfff24a2b7d61593b14f9f441722d64\" returns successfully"
Dec 16 13:03:58.034456 kubelet[1919]: E1216 13:03:58.034307 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:03:58.125344 containerd[1583]: time="2025-12-16T13:03:58.125284453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:03:58.748269 containerd[1583]: time="2025-12-16T13:03:58.748156987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 13:03:58.748269 containerd[1583]: time="2025-12-16T13:03:58.748256024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 13:03:58.748898 kubelet[1919]: E1216 13:03:58.748438 1919 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:03:58.748898 kubelet[1919]: E1216 13:03:58.748485 1919 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:03:58.748898 kubelet[1919]: E1216 13:03:58.748587 1919 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5xz69_calico-system(fde59053-f0c0-4b62-b3f3-900cee51cff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:03:58.749659 containerd[1583]: time="2025-12-16T13:03:58.749630589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 13:03:59.035174 kubelet[1919]: E1216 13:03:59.035024 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:03:59.248854 containerd[1583]: time="2025-12-16T13:03:59.248792942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:03:59.338881 containerd[1583]: time="2025-12-16T13:03:59.338756167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 13:03:59.338881 containerd[1583]: time="2025-12-16T13:03:59.338847569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 13:03:59.339129 kubelet[1919]: E1216 13:03:59.339082 1919 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:03:59.339189 kubelet[1919]: E1216 13:03:59.339138 1919 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:03:59.339255 kubelet[1919]: E1216 13:03:59.339227 1919 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5xz69_calico-system(fde59053-f0c0-4b62-b3f3-900cee51cff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:03:59.339319 kubelet[1919]: E1216 13:03:59.339286 1919 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5xz69" podUID="fde59053-f0c0-4b62-b3f3-900cee51cff8"
Dec 16 13:04:00.035846 kubelet[1919]: E1216 13:04:00.035774 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:01.036403 kubelet[1919]: E1216 13:04:01.036348 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:02.037484 kubelet[1919]: E1216 13:04:02.037414 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:03.037879 kubelet[1919]: E1216 13:04:03.037809 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:03.513088 kubelet[1919]: I1216 13:04:03.513006 1919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=7.350097978 podStartE2EDuration="11.512986611s" podCreationTimestamp="2025-12-16 13:03:52 +0000 UTC" firstStartedPulling="2025-12-16 13:03:53.587115397 +0000 UTC m=+42.272542557" lastFinishedPulling="2025-12-16 13:03:57.75000403 +0000 UTC m=+46.435431190" observedRunningTime="2025-12-16 13:03:58.293954027 +0000 UTC m=+46.979381207" watchObservedRunningTime="2025-12-16 13:04:03.512986611 +0000 UTC m=+52.198413771"
Dec 16 13:04:03.523424 systemd[1]: Created slice kubepods-besteffort-pod3499b68a_8ab3_467c_bc3d_61dc1678353e.slice - libcontainer container kubepods-besteffort-pod3499b68a_8ab3_467c_bc3d_61dc1678353e.slice.
Dec 16 13:04:03.622884 kubelet[1919]: I1216 13:04:03.622807 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cs2n\" (UniqueName: \"kubernetes.io/projected/3499b68a-8ab3-467c-bc3d-61dc1678353e-kube-api-access-7cs2n\") pod \"test-pod-1\" (UID: \"3499b68a-8ab3-467c-bc3d-61dc1678353e\") " pod="default/test-pod-1"
Dec 16 13:04:03.622884 kubelet[1919]: I1216 13:04:03.622869 1919 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7bd9ca7-939e-4aad-82dc-1d06d0d57dbd\" (UniqueName: \"kubernetes.io/nfs/3499b68a-8ab3-467c-bc3d-61dc1678353e-pvc-d7bd9ca7-939e-4aad-82dc-1d06d0d57dbd\") pod \"test-pod-1\" (UID: \"3499b68a-8ab3-467c-bc3d-61dc1678353e\") " pod="default/test-pod-1"
Dec 16 13:04:03.797733 kernel: netfs: FS-Cache loaded
Dec 16 13:04:03.867769 kernel: RPC: Registered named UNIX socket transport module.
Dec 16 13:04:03.867916 kernel: RPC: Registered udp transport module.
Dec 16 13:04:03.867933 kernel: RPC: Registered tcp transport module.
Dec 16 13:04:03.868910 kernel: RPC: Registered tcp-with-tls transport module.
Dec 16 13:04:03.906329 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 16 13:04:04.038916 kubelet[1919]: E1216 13:04:04.038835 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:04.168105 kernel: NFS: Registering the id_resolver key type
Dec 16 13:04:04.168269 kernel: Key type id_resolver registered
Dec 16 13:04:04.168309 kernel: Key type id_legacy registered
Dec 16 13:04:04.196950 nfsidmap[3260]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Dec 16 13:04:04.197522 nfsidmap[3260]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 16 13:04:04.199824 nfsidmap[3261]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Dec 16 13:04:04.200008 nfsidmap[3261]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 16 13:04:04.207783 nfsrahead[3263]: setting /var/lib/kubelet/pods/3499b68a-8ab3-467c-bc3d-61dc1678353e/volumes/kubernetes.io~nfs/pvc-d7bd9ca7-939e-4aad-82dc-1d06d0d57dbd readahead to 128
Dec 16 13:04:04.512091 containerd[1583]: time="2025-12-16T13:04:04.511968181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3499b68a-8ab3-467c-bc3d-61dc1678353e,Namespace:default,Attempt:0,}"
Dec 16 13:04:04.613926 systemd-networkd[1463]: cali5ec59c6bf6e: Link UP
Dec 16 13:04:04.614778 systemd-networkd[1463]: cali5ec59c6bf6e: Gained carrier
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.547 [INFO][3265] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.61-k8s-test--pod--1-eth0 default 3499b68a-8ab3-467c-bc3d-61dc1678353e 1387 0 2025-12-16 13:03:52 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.61 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.61-k8s-test--pod--1-"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.547 [INFO][3265] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.61-k8s-test--pod--1-eth0"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.573 [INFO][3278] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" HandleID="k8s-pod-network.974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Workload="10.0.0.61-k8s-test--pod--1-eth0"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.573 [INFO][3278] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" HandleID="k8s-pod-network.974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Workload="10.0.0.61-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.61", "pod":"test-pod-1", "timestamp":"2025-12-16 13:04:04.573635678 +0000 UTC"}, Hostname:"10.0.0.61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.573 [INFO][3278] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.573 [INFO][3278] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.573 [INFO][3278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.61'
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.580 [INFO][3278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" host="10.0.0.61"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.584 [INFO][3278] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.61"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.591 [INFO][3278] ipam/ipam.go 511: Trying affinity for 192.168.68.64/26 host="10.0.0.61"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.592 [INFO][3278] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.64/26 host="10.0.0.61"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.595 [INFO][3278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.64/26 host="10.0.0.61"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.595 [INFO][3278] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.64/26 handle="k8s-pod-network.974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" host="10.0.0.61"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.596 [INFO][3278] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.600 [INFO][3278] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.64/26 handle="k8s-pod-network.974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" host="10.0.0.61"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.608 [INFO][3278] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.68/26] block=192.168.68.64/26 handle="k8s-pod-network.974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" host="10.0.0.61"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.608 [INFO][3278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.68/26] handle="k8s-pod-network.974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" host="10.0.0.61"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.608 [INFO][3278] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.608 [INFO][3278] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.68/26] IPv6=[] ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" HandleID="k8s-pod-network.974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Workload="10.0.0.61-k8s-test--pod--1-eth0"
Dec 16 13:04:04.622571 containerd[1583]: 2025-12-16 13:04:04.611 [INFO][3265] cni-plugin/k8s.go 418: Populated endpoint ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.61-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.61-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3499b68a-8ab3-467c-bc3d-61dc1678353e", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 3, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.61", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:04:04.623578 containerd[1583]: 2025-12-16 13:04:04.611 [INFO][3265] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.68/32] ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.61-k8s-test--pod--1-eth0"
Dec 16 13:04:04.623578 containerd[1583]: 2025-12-16 13:04:04.611 [INFO][3265] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.61-k8s-test--pod--1-eth0"
Dec 16 13:04:04.623578 containerd[1583]: 2025-12-16 13:04:04.613 [INFO][3265] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.61-k8s-test--pod--1-eth0"
Dec 16 13:04:04.623578 containerd[1583]: 2025-12-16 13:04:04.613 [INFO][3265] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.61-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.61-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3499b68a-8ab3-467c-bc3d-61dc1678353e", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 3, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.61", ContainerID:"974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"32:ce:70:41:0e:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:04:04.623578 containerd[1583]: 2025-12-16 13:04:04.619 [INFO][3265] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.61-k8s-test--pod--1-eth0"
Dec 16 13:04:04.686549 containerd[1583]: time="2025-12-16T13:04:04.686478981Z" level=info msg="connecting to shim 974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab" address="unix:///run/containerd/s/24c944eb3db5932e2abc5d44a6649ab3744616379b12a62e12f13a98528a3d9c" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:04:04.719029 systemd[1]: Started cri-containerd-974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab.scope - libcontainer container 974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab.
Dec 16 13:04:04.741612 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 13:04:04.773492 containerd[1583]: time="2025-12-16T13:04:04.773354356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3499b68a-8ab3-467c-bc3d-61dc1678353e,Namespace:default,Attempt:0,} returns sandbox id \"974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab\""
Dec 16 13:04:04.774828 containerd[1583]: time="2025-12-16T13:04:04.774799990Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 16 13:04:05.039373 kubelet[1919]: E1216 13:04:05.039218 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:05.116836 containerd[1583]: time="2025-12-16T13:04:05.116777327Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:05.117788 containerd[1583]: time="2025-12-16T13:04:05.117741193Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Dec 16 13:04:05.119942 containerd[1583]: time="2025-12-16T13:04:05.119899939Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:34e04bb6b4bb37d45845842374be0cd181723daffb230849b1984aaeaa96faba\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:73a15a021433343835d9908f25bf01b8d42a2113a41e9c9e28b6a89b82b54f96\", size \"73312248\" in 345.067919ms"
Dec 16 13:04:05.119942 containerd[1583]: time="2025-12-16T13:04:05.119926719Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:34e04bb6b4bb37d45845842374be0cd181723daffb230849b1984aaeaa96faba\""
Dec 16 13:04:05.125178 containerd[1583]: time="2025-12-16T13:04:05.125129009Z" level=info msg="CreateContainer within sandbox \"974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 16 13:04:05.134789 containerd[1583]: time="2025-12-16T13:04:05.134743470Z" level=info msg="Container 489c4084f50bf1ae846fc564bbf125f22ba79e952ca2bb574e397de27d261b5f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:04:05.138551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898419047.mount: Deactivated successfully.
Dec 16 13:04:05.144766 containerd[1583]: time="2025-12-16T13:04:05.144732056Z" level=info msg="CreateContainer within sandbox \"974b2a02a02a58de1084ce881789710e263313adb5ced4668a95998a377fb2ab\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"489c4084f50bf1ae846fc564bbf125f22ba79e952ca2bb574e397de27d261b5f\""
Dec 16 13:04:05.145343 containerd[1583]: time="2025-12-16T13:04:05.145295658Z" level=info msg="StartContainer for \"489c4084f50bf1ae846fc564bbf125f22ba79e952ca2bb574e397de27d261b5f\""
Dec 16 13:04:05.146367 containerd[1583]: time="2025-12-16T13:04:05.146341207Z" level=info msg="connecting to shim 489c4084f50bf1ae846fc564bbf125f22ba79e952ca2bb574e397de27d261b5f" address="unix:///run/containerd/s/24c944eb3db5932e2abc5d44a6649ab3744616379b12a62e12f13a98528a3d9c" protocol=ttrpc version=3
Dec 16 13:04:05.171901 systemd[1]: Started cri-containerd-489c4084f50bf1ae846fc564bbf125f22ba79e952ca2bb574e397de27d261b5f.scope - libcontainer container 489c4084f50bf1ae846fc564bbf125f22ba79e952ca2bb574e397de27d261b5f.
Dec 16 13:04:05.204518 containerd[1583]: time="2025-12-16T13:04:05.204474413Z" level=info msg="StartContainer for \"489c4084f50bf1ae846fc564bbf125f22ba79e952ca2bb574e397de27d261b5f\" returns successfully"
Dec 16 13:04:06.039398 kubelet[1919]: E1216 13:04:06.039317 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:06.271897 systemd-networkd[1463]: cali5ec59c6bf6e: Gained IPv6LL
Dec 16 13:04:07.039879 kubelet[1919]: E1216 13:04:07.039796 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:08.040346 kubelet[1919]: E1216 13:04:08.040275 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:09.041328 kubelet[1919]: E1216 13:04:09.041251 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 16 13:04:10.041917 kubelet[1919]: E1216 13:04:10.041850 1919 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"