May 8 23:55:42.894164 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 8 23:55:42.894186 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu May 8 22:43:24 -00 2025 May 8 23:55:42.894195 kernel: KASLR enabled May 8 23:55:42.894201 kernel: efi: EFI v2.7 by EDK II May 8 23:55:42.894207 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 May 8 23:55:42.894212 kernel: random: crng init done May 8 23:55:42.894220 kernel: ACPI: Early table checksum verification disabled May 8 23:55:42.894226 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) May 8 23:55:42.894232 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) May 8 23:55:42.894240 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 8 23:55:42.894246 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 23:55:42.894252 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 8 23:55:42.894258 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 23:55:42.894264 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 23:55:42.894272 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 23:55:42.894280 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 23:55:42.894286 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 8 23:55:42.894293 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 23:55:42.894299 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 8 23:55:42.894305 kernel: NUMA: Failed to initialise from firmware May 8 23:55:42.894312 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 8 23:55:42.894318 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 8 23:55:42.894324 kernel: Zone ranges: May 8 23:55:42.894331 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 8 23:55:42.894338 kernel: DMA32 empty May 8 23:55:42.894346 kernel: Normal empty May 8 23:55:42.894353 kernel: Movable zone start for each node May 8 23:55:42.894359 kernel: Early memory node ranges May 8 23:55:42.894366 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 8 23:55:42.894372 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 8 23:55:42.894378 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 8 23:55:42.894385 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 8 23:55:42.894391 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 8 23:55:42.894397 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 8 23:55:42.894404 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 8 23:55:42.894410 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 8 23:55:42.894416 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 8 23:55:42.894424 kernel: psci: probing for conduit method from ACPI. May 8 23:55:42.894431 kernel: psci: PSCIv1.1 detected in firmware. 
May 8 23:55:42.894437 kernel: psci: Using standard PSCI v0.2 function IDs May 8 23:55:42.894446 kernel: psci: Trusted OS migration not required May 8 23:55:42.894453 kernel: psci: SMC Calling Convention v1.1 May 8 23:55:42.894460 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 8 23:55:42.894469 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 8 23:55:42.894476 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 8 23:55:42.894483 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 8 23:55:42.894490 kernel: Detected PIPT I-cache on CPU0 May 8 23:55:42.894496 kernel: CPU features: detected: GIC system register CPU interface May 8 23:55:42.894518 kernel: CPU features: detected: Hardware dirty bit management May 8 23:55:42.894525 kernel: CPU features: detected: Spectre-v4 May 8 23:55:42.894532 kernel: CPU features: detected: Spectre-BHB May 8 23:55:42.894539 kernel: CPU features: kernel page table isolation forced ON by KASLR May 8 23:55:42.894551 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 8 23:55:42.894559 kernel: CPU features: detected: ARM erratum 1418040 May 8 23:55:42.894566 kernel: CPU features: detected: SSBS not fully self-synchronizing May 8 23:55:42.894572 kernel: alternatives: applying boot alternatives May 8 23:55:42.894580 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3 May 8 23:55:42.894588 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 23:55:42.894595 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 23:55:42.894602 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 23:55:42.894608 kernel: Fallback order for Node 0: 0 May 8 23:55:42.894615 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 8 23:55:42.894622 kernel: Policy zone: DMA May 8 23:55:42.894629 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 23:55:42.894638 kernel: software IO TLB: area num 4. May 8 23:55:42.894646 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 8 23:55:42.894653 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) May 8 23:55:42.894660 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 23:55:42.894667 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 23:55:42.894674 kernel: rcu: RCU event tracing is enabled. May 8 23:55:42.894682 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 23:55:42.894689 kernel: Trampoline variant of Tasks RCU enabled. May 8 23:55:42.894707 kernel: Tracing variant of Tasks RCU enabled. May 8 23:55:42.894714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 8 23:55:42.894721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 23:55:42.894728 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 8 23:55:42.894737 kernel: GICv3: 256 SPIs implemented May 8 23:55:42.894744 kernel: GICv3: 0 Extended SPIs implemented May 8 23:55:42.894751 kernel: Root IRQ handler: gic_handle_irq May 8 23:55:42.894758 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 8 23:55:42.894765 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 8 23:55:42.894771 kernel: ITS [mem 0x08080000-0x0809ffff] May 8 23:55:42.894778 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 8 23:55:42.894785 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 8 23:55:42.894792 kernel: GICv3: using LPI property table @0x00000000400f0000 May 8 23:55:42.894799 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 8 23:55:42.894805 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 23:55:42.894813 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 23:55:42.894820 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 8 23:55:42.894827 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 8 23:55:42.894834 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 8 23:55:42.894841 kernel: arm-pv: using stolen time PV May 8 23:55:42.894848 kernel: Console: colour dummy device 80x25 May 8 23:55:42.894855 kernel: ACPI: Core revision 20230628 May 8 23:55:42.894862 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 8 23:55:42.894869 kernel: pid_max: default: 32768 minimum: 301 May 8 23:55:42.894876 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 23:55:42.894885 kernel: landlock: Up and running. May 8 23:55:42.894892 kernel: SELinux: Initializing. May 8 23:55:42.894899 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 23:55:42.894906 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 23:55:42.894923 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 8 23:55:42.894932 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 23:55:42.894939 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 23:55:42.894946 kernel: rcu: Hierarchical SRCU implementation. May 8 23:55:42.894953 kernel: rcu: Max phase no-delay instances is 400. May 8 23:55:42.894961 kernel: Platform MSI: ITS@0x8080000 domain created May 8 23:55:42.894968 kernel: PCI/MSI: ITS@0x8080000 domain created May 8 23:55:42.894975 kernel: Remapping and enabling EFI services. May 8 23:55:42.894982 kernel: smp: Bringing up secondary CPUs ... 
May 8 23:55:42.894989 kernel: Detected PIPT I-cache on CPU1 May 8 23:55:42.894996 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 8 23:55:42.895003 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 8 23:55:42.895011 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 23:55:42.895017 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 8 23:55:42.895026 kernel: Detected PIPT I-cache on CPU2 May 8 23:55:42.895033 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 8 23:55:42.895040 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 8 23:55:42.895052 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 23:55:42.895061 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 8 23:55:42.895068 kernel: Detected PIPT I-cache on CPU3 May 8 23:55:42.895075 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 8 23:55:42.895083 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 8 23:55:42.895090 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 23:55:42.895097 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 8 23:55:42.895105 kernel: smp: Brought up 1 node, 4 CPUs May 8 23:55:42.895113 kernel: SMP: Total of 4 processors activated. May 8 23:55:42.895121 kernel: CPU features: detected: 32-bit EL0 Support May 8 23:55:42.895128 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 8 23:55:42.895148 kernel: CPU features: detected: Common not Private translations May 8 23:55:42.895156 kernel: CPU features: detected: CRC32 instructions May 8 23:55:42.895163 kernel: CPU features: detected: Enhanced Virtualization Traps May 8 23:55:42.895170 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 8 23:55:42.895179 kernel: CPU features: detected: LSE atomic instructions May 8 23:55:42.895186 kernel: CPU features: detected: Privileged Access Never May 8 23:55:42.895193 kernel: CPU features: detected: RAS Extension Support May 8 23:55:42.895200 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 8 23:55:42.895208 kernel: CPU: All CPU(s) started at EL1 May 8 23:55:42.895215 kernel: alternatives: applying system-wide alternatives May 8 23:55:42.895222 kernel: devtmpfs: initialized May 8 23:55:42.895230 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 23:55:42.895237 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 23:55:42.895246 kernel: pinctrl core: initialized pinctrl subsystem May 8 23:55:42.895253 kernel: SMBIOS 3.0.0 present. 
May 8 23:55:42.895261 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 8 23:55:42.895268 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 23:55:42.895275 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 23:55:42.895283 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 23:55:42.895290 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 23:55:42.895297 kernel: audit: initializing netlink subsys (disabled)
May 8 23:55:42.895305 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
May 8 23:55:42.895313 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 23:55:42.895321 kernel: cpuidle: using governor menu
May 8 23:55:42.895328 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 23:55:42.895335 kernel: ASID allocator initialised with 32768 entries
May 8 23:55:42.895343 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 23:55:42.895350 kernel: Serial: AMBA PL011 UART driver
May 8 23:55:42.895358 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 23:55:42.895365 kernel: Modules: 0 pages in range for non-PLT usage
May 8 23:55:42.895372 kernel: Modules: 509008 pages in range for PLT usage
May 8 23:55:42.895381 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 23:55:42.895389 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 23:55:42.895396 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 23:55:42.895404 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 23:55:42.895411 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 23:55:42.895418 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 23:55:42.895426 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 23:55:42.895433 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 23:55:42.895440 kernel: ACPI: Added _OSI(Module Device)
May 8 23:55:42.895449 kernel: ACPI: Added _OSI(Processor Device)
May 8 23:55:42.895456 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 23:55:42.895463 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 23:55:42.895470 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 23:55:42.895482 kernel: ACPI: Interpreter enabled
May 8 23:55:42.895489 kernel: ACPI: Using GIC for interrupt routing
May 8 23:55:42.895496 kernel: ACPI: MCFG table detected, 1 entries
May 8 23:55:42.895504 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 23:55:42.895511 kernel: printk: console [ttyAMA0] enabled
May 8 23:55:42.895520 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 23:55:42.895656 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 23:55:42.895744 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 23:55:42.895812 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 23:55:42.895878 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 23:55:42.895966 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 23:55:42.895977 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 23:55:42.895989 kernel: PCI host bridge to bus 0000:00
May 8 23:55:42.896066 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 23:55:42.896129 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 23:55:42.896191 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 23:55:42.896253 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 23:55:42.896335 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 23:55:42.896413 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 23:55:42.896488 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 23:55:42.896555 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 23:55:42.896622 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 23:55:42.896690 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 23:55:42.896771 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 23:55:42.896842 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 23:55:42.896903 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 23:55:42.896981 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 23:55:42.897043 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 23:55:42.897053 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 23:55:42.897060 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 23:55:42.897068 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 23:55:42.897075 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 23:55:42.897083 kernel: iommu: Default domain type: Translated
May 8 23:55:42.897091 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 23:55:42.897101 kernel: efivars: Registered efivars operations
May 8 23:55:42.897109 kernel: vgaarb: loaded
May 8 23:55:42.897116 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 23:55:42.897123 kernel: VFS: Disk quotas dquot_6.6.0
May 8 23:55:42.897131 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 23:55:42.897138 kernel: pnp: PnP ACPI init
May 8 23:55:42.897217 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 23:55:42.897228 kernel: pnp: PnP ACPI: found 1 devices
May 8 23:55:42.897237 kernel: NET: Registered PF_INET protocol family
May 8 23:55:42.897245 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 23:55:42.897252 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 23:55:42.897260 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 23:55:42.897267 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 23:55:42.897275 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 23:55:42.897282 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 23:55:42.897289 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:55:42.897297 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:55:42.897305 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 23:55:42.897313 kernel: PCI: CLS 0 bytes, default 64
May 8 23:55:42.897320 kernel: kvm [1]: HYP mode not available
May 8 23:55:42.897328 kernel: Initialise system trusted keyrings
May 8 23:55:42.897335 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 23:55:42.897342 kernel: Key type asymmetric registered
May 8 23:55:42.897349 kernel: Asymmetric key parser 'x509' registered
May 8 23:55:42.897357 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 23:55:42.897364 kernel: io scheduler mq-deadline registered
May 8 23:55:42.897373 kernel: io scheduler kyber registered
May 8 23:55:42.897381 kernel: io scheduler bfq registered
May 8 23:55:42.897389 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 23:55:42.897396 kernel: ACPI: button: Power Button [PWRB]
May 8 23:55:42.897404 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 23:55:42.897476 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 23:55:42.897486 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 23:55:42.897493 kernel: thunder_xcv, ver 1.0
May 8 23:55:42.897501 kernel: thunder_bgx, ver 1.0
May 8 23:55:42.897510 kernel: nicpf, ver 1.0
May 8 23:55:42.897517 kernel: nicvf, ver 1.0
May 8 23:55:42.897593 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 23:55:42.897658 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T23:55:42 UTC (1746748542)
May 8 23:55:42.897668 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 23:55:42.897676 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 23:55:42.897683 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 23:55:42.897691 kernel: watchdog: Hard watchdog permanently disabled
May 8 23:55:42.897709 kernel: NET: Registered PF_INET6 protocol family
May 8 23:55:42.897717 kernel: Segment Routing with IPv6
May 8 23:55:42.897724 kernel: In-situ OAM (IOAM) with IPv6
May 8 23:55:42.897732 kernel: NET: Registered PF_PACKET protocol family
May 8 23:55:42.897739 kernel: Key type dns_resolver registered
May 8 23:55:42.897746 kernel: registered taskstats version 1
May 8 23:55:42.897753 kernel: Loading compiled-in X.509 certificates
May 8 23:55:42.897761 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 7944e0e0bec5e8cad487856da19569eba337cea0'
May 8 23:55:42.897769 kernel: Key type .fscrypt registered
May 8 23:55:42.897778 kernel: Key type fscrypt-provisioning registered
May 8 23:55:42.897785 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 23:55:42.897793 kernel: ima: Allocated hash algorithm: sha1
May 8 23:55:42.897800 kernel: ima: No architecture policies found
May 8 23:55:42.897807 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 23:55:42.897814 kernel: clk: Disabling unused clocks
May 8 23:55:42.897822 kernel: Freeing unused kernel memory: 39424K
May 8 23:55:42.897829 kernel: Run /init as init process
May 8 23:55:42.897836 kernel: with arguments:
May 8 23:55:42.897845 kernel: /init
May 8 23:55:42.897852 kernel: with environment:
May 8 23:55:42.897858 kernel: HOME=/
May 8 23:55:42.897866 kernel: TERM=linux
May 8 23:55:42.897873 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 23:55:42.897882 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 23:55:42.897891 systemd[1]: Detected virtualization kvm.
May 8 23:55:42.897899 systemd[1]: Detected architecture arm64. May 8 23:55:42.897908 systemd[1]: Running in initrd. May 8 23:55:42.897931 systemd[1]: No hostname configured, using default hostname. May 8 23:55:42.897939 systemd[1]: Hostname set to . May 8 23:55:42.897947 systemd[1]: Initializing machine ID from VM UUID. May 8 23:55:42.897955 systemd[1]: Queued start job for default target initrd.target. May 8 23:55:42.897963 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:55:42.897970 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:55:42.897979 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 23:55:42.897989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 23:55:42.897997 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 23:55:42.898005 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 23:55:42.898014 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 23:55:42.898022 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 23:55:42.898030 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:55:42.898039 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 23:55:42.898048 systemd[1]: Reached target paths.target - Path Units. May 8 23:55:42.898056 systemd[1]: Reached target slices.target - Slice Units. May 8 23:55:42.898064 systemd[1]: Reached target swap.target - Swaps. May 8 23:55:42.898072 systemd[1]: Reached target timers.target - Timer Units. May 8 23:55:42.898080 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 23:55:42.898087 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 23:55:42.898095 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 23:55:42.898104 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 8 23:55:42.898113 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 23:55:42.898121 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 23:55:42.898129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:55:42.898136 systemd[1]: Reached target sockets.target - Socket Units. May 8 23:55:42.898144 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 23:55:42.898152 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 23:55:42.898160 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 23:55:42.898168 systemd[1]: Starting systemd-fsck-usr.service... May 8 23:55:42.898175 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 23:55:42.898185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 23:55:42.898193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:55:42.898201 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 23:55:42.898209 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 8 23:55:42.898217 systemd[1]: Finished systemd-fsck-usr.service.
May 8 23:55:42.898243 systemd-journald[237]: Collecting audit messages is disabled.
May 8 23:55:42.898263 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 23:55:42.898272 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 23:55:42.898282 kernel: Bridge firewalling registered
May 8 23:55:42.898290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:55:42.898298 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 23:55:42.898306 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 23:55:42.898315 systemd-journald[237]: Journal started
May 8 23:55:42.898334 systemd-journald[237]: Runtime Journal (/run/log/journal/a531d7f2a18148c9904f4a39e838b4d6) is 5.9M, max 47.3M, 41.4M free.
May 8 23:55:42.877609 systemd-modules-load[238]: Inserted module 'overlay'
May 8 23:55:42.892909 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 8 23:55:42.902602 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 23:55:42.905438 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 23:55:42.907037 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 23:55:42.910048 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 23:55:42.912112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 23:55:42.920267 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 23:55:42.923496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 23:55:42.924830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 23:55:42.935098 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 23:55:42.936232 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:55:42.939085 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 23:55:42.951425 dracut-cmdline[278]: dracut-dracut-053
May 8 23:55:42.953829 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3
May 8 23:55:42.964165 systemd-resolved[275]: Positive Trust Anchors:
May 8 23:55:42.964181 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 23:55:42.964212 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 23:55:42.968870 systemd-resolved[275]: Defaulting to hostname 'linux'.
May 8 23:55:42.970121 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 23:55:42.974900 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 23:55:43.030951 kernel: SCSI subsystem initialized
May 8 23:55:43.034932 kernel: Loading iSCSI transport class v2.0-870.
May 8 23:55:43.042947 kernel: iscsi: registered transport (tcp)
May 8 23:55:43.055946 kernel: iscsi: registered transport (qla4xxx)
May 8 23:55:43.055982 kernel: QLogic iSCSI HBA Driver
May 8 23:55:43.097079 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 23:55:43.114085 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 23:55:43.131374 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 23:55:43.131440 kernel: device-mapper: uevent: version 1.0.3
May 8 23:55:43.131458 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 23:55:43.178949 kernel: raid6: neonx8 gen() 15708 MB/s
May 8 23:55:43.195939 kernel: raid6: neonx4 gen() 15659 MB/s
May 8 23:55:43.212933 kernel: raid6: neonx2 gen() 13223 MB/s
May 8 23:55:43.229946 kernel: raid6: neonx1 gen() 10502 MB/s
May 8 23:55:43.246936 kernel: raid6: int64x8 gen() 6937 MB/s
May 8 23:55:43.263951 kernel: raid6: int64x4 gen() 7352 MB/s
May 8 23:55:43.280944 kernel: raid6: int64x2 gen() 6112 MB/s
May 8 23:55:43.297953 kernel: raid6: int64x1 gen() 5061 MB/s
May 8 23:55:43.297991 kernel: raid6: using algorithm neonx8 gen() 15708 MB/s
May 8 23:55:43.314954 kernel: raid6: .... xor() 11874 MB/s, rmw enabled
May 8 23:55:43.314994 kernel: raid6: using neon recovery algorithm
May 8 23:55:43.319950 kernel: xor: measuring software checksum speed
May 8 23:55:43.319965 kernel: 8regs : 19793 MB/sec
May 8 23:55:43.320956 kernel: 32regs : 19683 MB/sec
May 8 23:55:43.320969 kernel: arm64_neon : 25584 MB/sec
May 8 23:55:43.320978 kernel: xor: using function: arm64_neon (25584 MB/sec)
May 8 23:55:43.372971 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 23:55:43.382963 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 23:55:43.398173 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 23:55:43.410244 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 8 23:55:43.413359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 23:55:43.425238 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 23:55:43.436319 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
May 8 23:55:43.462686 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 23:55:43.468099 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 23:55:43.509090 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:55:43.516087 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 23:55:43.531760 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 23:55:43.532906 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:55:43.534647 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:55:43.536529 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 23:55:43.543069 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 23:55:43.550687 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 8 23:55:43.550883 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 23:55:43.552958 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 23:55:43.561063 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 23:55:43.561098 kernel: GPT:9289727 != 19775487 May 8 23:55:43.561108 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 23:55:43.561118 kernel: GPT:9289727 != 19775487 May 8 23:55:43.561133 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 23:55:43.561952 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 23:55:43.567737 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:55:43.567857 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:55:43.572126 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:55:43.573391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:55:43.573542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:55:43.583069 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (510) May 8 23:55:43.577668 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:55:43.585941 kernel: BTRFS: device fsid 9a510efc-c158-4845-bfb8-279f8b20070f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (523) May 8 23:55:43.592222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:55:43.601979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:55:43.611569 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 8 23:55:43.616054 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 8 23:55:43.620665 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 23:55:43.624467 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 8 23:55:43.625710 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 8 23:55:43.643068 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 23:55:43.648106 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:55:43.651283 disk-uuid[551]: Primary Header is updated. 
May 8 23:55:43.651283 disk-uuid[551]: Secondary Entries is updated. May 8 23:55:43.651283 disk-uuid[551]: Secondary Header is updated. May 8 23:55:43.654946 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 23:55:43.674361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:55:44.665943 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 23:55:44.666467 disk-uuid[552]: The operation has completed successfully. May 8 23:55:44.690014 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 23:55:44.690117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 23:55:44.710123 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 23:55:44.717225 sh[574]: Success May 8 23:55:44.741377 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 8 23:55:44.786639 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 23:55:44.798991 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 23:55:44.802957 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 23:55:44.814117 kernel: BTRFS info (device dm-0): first mount of filesystem 9a510efc-c158-4845-bfb8-279f8b20070f May 8 23:55:44.814163 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 8 23:55:44.814175 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 23:55:44.815948 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 23:55:44.816008 kernel: BTRFS info (device dm-0): using free space tree May 8 23:55:44.819585 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 23:55:44.821204 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 23:55:44.822373 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 23:55:44.824850 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 23:55:44.839544 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c May 8 23:55:44.839598 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:55:44.839610 kernel: BTRFS info (device vda6): using free space tree May 8 23:55:44.841927 kernel: BTRFS info (device vda6): auto enabling async discard May 8 23:55:44.864358 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 23:55:44.865336 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c May 8 23:55:44.872099 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 23:55:44.882141 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 23:55:44.961665 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:55:44.981166 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 23:55:45.023556 systemd-networkd[761]: lo: Link UP May 8 23:55:45.023567 systemd-networkd[761]: lo: Gained carrier May 8 23:55:45.024348 systemd-networkd[761]: Enumeration completed May 8 23:55:45.024833 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 8 23:55:45.025176 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:55:45.025179 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 23:55:45.026046 systemd-networkd[761]: eth0: Link UP May 8 23:55:45.026049 systemd-networkd[761]: eth0: Gained carrier May 8 23:55:45.026056 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:55:45.030439 systemd[1]: Reached target network.target - Network. May 8 23:55:45.049001 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 23:55:45.052529 ignition[673]: Ignition 2.19.0 May 8 23:55:45.052539 ignition[673]: Stage: fetch-offline May 8 23:55:45.052573 ignition[673]: no configs at "/usr/lib/ignition/base.d" May 8 23:55:45.052582 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:55:45.052930 ignition[673]: parsed url from cmdline: "" May 8 23:55:45.052933 ignition[673]: no config URL provided May 8 23:55:45.052938 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" May 8 23:55:45.052945 ignition[673]: no config at "/usr/lib/ignition/user.ign" May 8 23:55:45.052969 ignition[673]: op(1): [started] loading QEMU firmware config module May 8 23:55:45.052974 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 23:55:45.073693 ignition[673]: op(1): [finished] loading QEMU firmware config module May 8 23:55:45.073717 ignition[673]: QEMU firmware config was not found. Ignoring... May 8 23:55:45.080249 ignition[673]: parsing config with SHA512: 5790027d2bb3748d007bcbd6aa8dcdd47e26d0a216c9395ff98b9b0dd718818b1faa9ea28163c4bdbc9efbcdc34992e19ed58791e079ad389997dca1b5fd524c May 8 23:55:45.084740 unknown[673]: fetched base config from "system" May 8 23:55:45.084749 unknown[673]: fetched user config from "qemu" May 8 23:55:45.085072 ignition[673]: fetch-offline: fetch-offline passed May 8 23:55:45.086695 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:55:45.085137 ignition[673]: Ignition finished successfully May 8 23:55:45.088782 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 23:55:45.099140 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 23:55:45.116456 ignition[774]: Ignition 2.19.0 May 8 23:55:45.116467 ignition[774]: Stage: kargs May 8 23:55:45.116710 ignition[774]: no configs at "/usr/lib/ignition/base.d" May 8 23:55:45.116720 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:55:45.117495 ignition[774]: kargs: kargs passed May 8 23:55:45.117537 ignition[774]: Ignition finished successfully May 8 23:55:45.121182 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 23:55:45.128100 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 23:55:45.142241 ignition[782]: Ignition 2.19.0 May 8 23:55:45.142252 ignition[782]: Stage: disks May 8 23:55:45.142424 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 8 23:55:45.142434 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:55:45.143202 ignition[782]: disks: disks passed May 8 23:55:45.145626 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 8 23:55:45.143247 ignition[782]: Ignition finished successfully May 8 23:55:45.147537 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 23:55:45.149044 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 23:55:45.150762 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 23:55:45.152442 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:55:45.154410 systemd[1]: Reached target basic.target - Basic System. May 8 23:55:45.170075 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 23:55:45.183626 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 23:55:45.190592 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 23:55:45.212081 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 23:55:45.256932 kernel: EXT4-fs (vda9): mounted filesystem 1a8c7c5d-87ec-4bc4-aa01-1ebc1d3c20e7 r/w with ordered data mode. Quota mode: none. May 8 23:55:45.257070 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 23:55:45.258348 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 23:55:45.279054 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 23:55:45.281035 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 23:55:45.282241 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 23:55:45.282357 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 23:55:45.282384 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:55:45.290770 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) May 8 23:55:45.290792 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c May 8 23:55:45.290803 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:55:45.288807 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 23:55:45.293427 kernel: BTRFS info (device vda6): using free space tree May 8 23:55:45.296260 kernel: BTRFS info (device vda6): auto enabling async discard May 8 23:55:45.310064 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 23:55:45.312040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 23:55:45.347128 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory May 8 23:55:45.351171 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory May 8 23:55:45.354969 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory May 8 23:55:45.358540 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory May 8 23:55:45.426720 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 23:55:45.435994 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 23:55:45.437532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 23:55:45.442934 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c May 8 23:55:45.455977 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 8 23:55:45.460831 ignition[914]: INFO : Ignition 2.19.0
May 8 23:55:45.460831 ignition[914]: INFO : Stage: mount
May 8 23:55:45.463062 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 23:55:45.463062 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:55:45.463062 ignition[914]: INFO : mount: mount passed
May 8 23:55:45.463062 ignition[914]: INFO : Ignition finished successfully
May 8 23:55:45.463599 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 23:55:45.475026 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 23:55:45.813420 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 23:55:45.824094 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 23:55:45.828927 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
May 8 23:55:45.828953 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 8 23:55:45.830429 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 23:55:45.830444 kernel: BTRFS info (device vda6): using free space tree
May 8 23:55:45.832928 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 23:55:45.833773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 23:55:45.849491 ignition[945]: INFO : Ignition 2.19.0
May 8 23:55:45.849491 ignition[945]: INFO : Stage: files
May 8 23:55:45.851142 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 23:55:45.851142 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:55:45.851142 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
May 8 23:55:45.854488 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 23:55:45.854488 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 23:55:45.854488 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 23:55:45.854488 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 23:55:45.854488 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 23:55:45.854488 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
May 8 23:55:45.854488 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
May 8 23:55:45.853364 unknown[945]: wrote ssh authorized keys file for user: core
May 8 23:55:45.865161 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 23:55:45.865161 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 23:55:45.865161 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 23:55:45.865161 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 23:55:45.865161 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 23:55:45.865161 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 8 23:55:46.136270 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
May 8 23:55:46.549876 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 23:55:46.549876 ignition[945]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
May 8 23:55:46.553518 ignition[945]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 23:55:46.553518 ignition[945]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 23:55:46.553518 ignition[945]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
May 8 23:55:46.553518 ignition[945]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
May 8 23:55:46.575104 ignition[945]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 23:55:46.578965 ignition[945]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 23:55:46.580268 ignition[945]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 23:55:46.580268 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 23:55:46.580268 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 23:55:46.580268 ignition[945]: INFO : files: files passed
May 8 23:55:46.580268 ignition[945]: INFO : Ignition finished successfully
May 8 23:55:46.582024 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 23:55:46.598078 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 23:55:46.599869 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 23:55:46.602238 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 23:55:46.602350 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 23:55:46.607350 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 23:55:46.610390 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 23:55:46.610390 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 23:55:46.613376 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 23:55:46.612882 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 23:55:46.614653 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 23:55:46.629105 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 23:55:46.640062 systemd-networkd[761]: eth0: Gained IPv6LL May 8 23:55:46.647903 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 23:55:46.648030 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 23:55:46.650269 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 23:55:46.651975 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 23:55:46.653550 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 23:55:46.654325 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 23:55:46.668881 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:55:46.678047 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 23:55:46.686742 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 23:55:46.687979 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:55:46.690026 systemd[1]: Stopped target timers.target - Timer Units. May 8 23:55:46.691806 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 23:55:46.691936 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:55:46.694436 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 23:55:46.696387 systemd[1]: Stopped target basic.target - Basic System. May 8 23:55:46.697994 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 23:55:46.699699 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:55:46.701644 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 23:55:46.703595 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 23:55:46.705457 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:55:46.707373 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 23:55:46.709305 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 23:55:46.710993 systemd[1]: Stopped target swap.target - Swaps. May 8 23:55:46.712503 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 23:55:46.712635 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 23:55:46.714877 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 23:55:46.716928 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:55:46.718855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 23:55:46.722986 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:55:46.724330 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 23:55:46.724443 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 23:55:46.727183 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 23:55:46.727301 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:55:46.729266 systemd[1]: Stopped target paths.target - Path Units. May 8 23:55:46.730805 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 23:55:46.735003 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 8 23:55:46.736359 systemd[1]: Stopped target slices.target - Slice Units. May 8 23:55:46.738402 systemd[1]: Stopped target sockets.target - Socket Units. May 8 23:55:46.739892 systemd[1]: iscsid.socket: Deactivated successfully. May 8 23:55:46.740001 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 23:55:46.741503 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 23:55:46.741584 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 23:55:46.743065 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 23:55:46.743172 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:55:46.744963 systemd[1]: ignition-files.service: Deactivated successfully. May 8 23:55:46.745066 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 23:55:46.760081 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 23:55:46.760972 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 23:55:46.761100 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:55:46.763730 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 23:55:46.764591 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 23:55:46.764722 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:55:46.766551 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 23:55:46.766643 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 23:55:46.773234 ignition[999]: INFO : Ignition 2.19.0 May 8 23:55:46.773234 ignition[999]: INFO : Stage: umount May 8 23:55:46.773231 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 23:55:46.777889 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:55:46.777889 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:55:46.777889 ignition[999]: INFO : umount: umount passed May 8 23:55:46.777889 ignition[999]: INFO : Ignition finished successfully May 8 23:55:46.773317 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 23:55:46.776065 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 23:55:46.776158 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 23:55:46.778153 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 23:55:46.778534 systemd[1]: Stopped target network.target - Network. May 8 23:55:46.779687 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 23:55:46.779756 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 23:55:46.781537 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 23:55:46.781585 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 23:55:46.783047 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 23:55:46.783090 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 23:55:46.785449 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 23:55:46.785495 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 23:55:46.787449 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 23:55:46.789050 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
May 8 23:55:46.797155 systemd-networkd[761]: eth0: DHCPv6 lease lost May 8 23:55:46.798613 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 23:55:46.798737 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 23:55:46.800410 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 23:55:46.800527 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 23:55:46.803104 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 23:55:46.803145 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 23:55:46.813019 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 23:55:46.813895 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 23:55:46.813978 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:55:46.815844 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:55:46.815889 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 23:55:46.817735 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 23:55:46.817779 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 23:55:46.819795 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 23:55:46.819838 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:55:46.821819 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:55:46.832618 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 23:55:46.832752 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 23:55:46.835523 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 23:55:46.836602 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:55:46.837879 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 23:55:46.837936 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 23:55:46.839641 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 23:55:46.839671 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:55:46.841400 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 23:55:46.841447 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 23:55:46.843835 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 23:55:46.843882 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 23:55:46.846029 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:55:46.846067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:55:46.859092 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 23:55:46.860139 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 23:55:46.860208 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:55:46.862203 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:55:46.862250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:55:46.864371 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 23:55:46.864454 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
May 8 23:55:46.866130 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 23:55:46.866205 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 23:55:46.868388 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 23:55:46.869350 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 23:55:46.869400 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 23:55:46.871556 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 23:55:46.881735 systemd[1]: Switching root. May 8 23:55:46.907844 systemd-journald[237]: Journal stopped May 8 23:55:47.572070 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). May 8 23:55:47.572128 kernel: SELinux: policy capability network_peer_controls=1 May 8 23:55:47.572141 kernel: SELinux: policy capability open_perms=1 May 8 23:55:47.572154 kernel: SELinux: policy capability extended_socket_class=1 May 8 23:55:47.572163 kernel: SELinux: policy capability always_check_network=0 May 8 23:55:47.572173 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 23:55:47.572182 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 23:55:47.572191 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 23:55:47.572200 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 23:55:47.572210 kernel: audit: type=1403 audit(1746748547.048:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 23:55:47.572220 systemd[1]: Successfully loaded SELinux policy in 32.668ms. May 8 23:55:47.572248 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.281ms. May 8 23:55:47.572260 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 23:55:47.572271 systemd[1]: Detected virtualization kvm. May 8 23:55:47.572282 systemd[1]: Detected architecture arm64. May 8 23:55:47.572292 systemd[1]: Detected first boot. May 8 23:55:47.572302 systemd[1]: Initializing machine ID from VM UUID. May 8 23:55:47.572313 zram_generator::config[1045]: No configuration found. May 8 23:55:47.572324 systemd[1]: Populated /etc with preset unit settings. May 8 23:55:47.572334 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 23:55:47.572347 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 23:55:47.572359 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 23:55:47.572370 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 23:55:47.572381 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 23:55:47.572391 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 23:55:47.572403 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 23:55:47.572414 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 23:55:47.572425 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 23:55:47.572436 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 23:55:47.572446 systemd[1]: Created slice user.slice - User and Session Slice. 
May 8 23:55:47.572457 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:55:47.572468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:55:47.572479 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 23:55:47.572490 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 23:55:47.572503 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 23:55:47.572513 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 23:55:47.572523 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 23:55:47.572534 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:55:47.572544 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 23:55:47.572554 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 23:55:47.572565 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 23:55:47.572578 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 23:55:47.572588 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:55:47.572599 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 23:55:47.572610 systemd[1]: Reached target slices.target - Slice Units. May 8 23:55:47.572620 systemd[1]: Reached target swap.target - Swaps. May 8 23:55:47.572631 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 23:55:47.572641 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 23:55:47.572651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 23:55:47.572662 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 23:55:47.572672 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:55:47.572691 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 23:55:47.572702 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 23:55:47.572714 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 23:55:47.572724 systemd[1]: Mounting media.mount - External Media Directory... May 8 23:55:47.572735 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 23:55:47.572745 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 23:55:47.572755 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 23:55:47.572766 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 23:55:47.572779 systemd[1]: Reached target machines.target - Containers. May 8 23:55:47.572790 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 23:55:47.572801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:55:47.572811 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
May 8 23:55:47.572822 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 23:55:47.572833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:55:47.572843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 23:55:47.572854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:55:47.572864 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 23:55:47.572876 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:55:47.572887 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 23:55:47.572898 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 23:55:47.572908 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 23:55:47.572934 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 23:55:47.572945 systemd[1]: Stopped systemd-fsck-usr.service. May 8 23:55:47.572954 kernel: fuse: init (API version 7.39) May 8 23:55:47.572965 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 23:55:47.572977 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 23:55:47.572989 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 23:55:47.572999 kernel: ACPI: bus type drm_connector registered May 8 23:55:47.573009 kernel: loop: module loaded May 8 23:55:47.573023 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 23:55:47.573055 systemd-journald[1112]: Collecting audit messages is disabled. May 8 23:55:47.573078 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 23:55:47.573088 systemd[1]: verity-setup.service: Deactivated successfully. May 8 23:55:47.573101 systemd[1]: Stopped verity-setup.service. May 8 23:55:47.573112 systemd-journald[1112]: Journal started May 8 23:55:47.573134 systemd-journald[1112]: Runtime Journal (/run/log/journal/a531d7f2a18148c9904f4a39e838b4d6) is 5.9M, max 47.3M, 41.4M free. May 8 23:55:47.392826 systemd[1]: Queued start job for default target multi-user.target. May 8 23:55:47.412911 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 23:55:47.413277 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 23:55:47.576948 systemd[1]: Started systemd-journald.service - Journal Service. May 8 23:55:47.577401 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 23:55:47.578357 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 23:55:47.579316 systemd[1]: Mounted media.mount - External Media Directory. May 8 23:55:47.580438 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 23:55:47.581377 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 23:55:47.582346 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 23:55:47.583348 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 23:55:47.584467 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:55:47.585637 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 23:55:47.585792 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
May 8 23:55:47.587044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:55:47.587191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:55:47.588245 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 23:55:47.588388 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 23:55:47.589427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:55:47.589572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:55:47.591064 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 23:55:47.591205 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 23:55:47.592236 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:55:47.592374 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:55:47.593474 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 23:55:47.595123 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 23:55:47.596268 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 23:55:47.608475 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 23:55:47.618040 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 23:55:47.620201 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 23:55:47.621017 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 23:55:47.621050 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 23:55:47.623059 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 23:55:47.624959 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 23:55:47.626743 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 23:55:47.627611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:55:47.629114 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 23:55:47.630798 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 23:55:47.631852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 23:55:47.635144 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 23:55:47.636279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 23:55:47.641024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:55:47.647611 systemd-journald[1112]: Time spent on flushing to /var/log/journal/a531d7f2a18148c9904f4a39e838b4d6 is 20.498ms for 838 entries. May 8 23:55:47.647611 systemd-journald[1112]: System Journal (/var/log/journal/a531d7f2a18148c9904f4a39e838b4d6) is 8.0M, max 195.6M, 187.6M free. May 8 23:55:47.677233 systemd-journald[1112]: Received client request to flush runtime journal. 
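The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop units finishing above are instances of systemd's modprobe@.service template, which runs modprobe against the instance name. The same template can be invoked by hand; the module name after the "@" is the only parameter (example module chosen arbitrarily):

    # Load a kernel module through the same template unit the log shows
    systemctl start modprobe@fuse.service
    # roughly equivalent to running: modprobe -abq fuse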
May 8 23:55:47.677305 kernel: loop0: detected capacity change from 0 to 194096 May 8 23:55:47.645143 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 23:55:47.649508 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 23:55:47.653042 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:55:47.654499 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 23:55:47.655582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 23:55:47.656627 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 23:55:47.657795 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 23:55:47.661266 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 23:55:47.678137 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 23:55:47.680253 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 23:55:47.682090 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 23:55:47.684236 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:55:47.688010 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 23:55:47.694027 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 23:55:47.694789 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 23:55:47.702561 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 23:55:47.713925 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 23:55:47.720953 kernel: loop1: detected capacity change from 0 to 114328 May 8 23:55:47.722072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 23:55:47.743357 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 8 23:55:47.743657 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 8 23:55:47.749381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:55:47.764958 kernel: loop2: detected capacity change from 0 to 114432 May 8 23:55:47.800025 kernel: loop3: detected capacity change from 0 to 194096 May 8 23:55:47.805993 kernel: loop4: detected capacity change from 0 to 114328 May 8 23:55:47.809966 kernel: loop5: detected capacity change from 0 to 114432 May 8 23:55:47.813272 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 23:55:47.813645 (sd-merge)[1182]: Merged extensions into '/usr'. May 8 23:55:47.816972 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... May 8 23:55:47.816986 systemd[1]: Reloading... May 8 23:55:47.855970 zram_generator::config[1207]: No configuration found. May 8 23:55:47.926464 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 23:55:47.953760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:55:47.989863 systemd[1]: Reloading finished in 172 ms. 
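The (sd-merge) lines show systemd-sysext finding the containerd-flatcar, docker-flatcar and kubernetes extension images and merging them into /usr, followed by a service manager reload. On a running system the same merge state can be inspected or redone with the stock tooling:

    # Show which extension images are merged and which hierarchies they cover
    systemd-sysext status
    # Re-scan the extension directories and re-merge after adding or removing a .raw image
    systemd-sysext refresh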
May 8 23:55:48.024403 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 23:55:48.027350 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 23:55:48.039125 systemd[1]: Starting ensure-sysext.service... May 8 23:55:48.040756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 23:55:48.052255 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... May 8 23:55:48.052271 systemd[1]: Reloading... May 8 23:55:48.061841 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 23:55:48.062129 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 23:55:48.063045 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 23:55:48.063285 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 8 23:55:48.063348 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 8 23:55:48.068507 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. May 8 23:55:48.068520 systemd-tmpfiles[1243]: Skipping /boot May 8 23:55:48.075817 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. May 8 23:55:48.075830 systemd-tmpfiles[1243]: Skipping /boot May 8 23:55:48.097945 zram_generator::config[1270]: No configuration found. May 8 23:55:48.183015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:55:48.218271 systemd[1]: Reloading finished in 165 ms. May 8 23:55:48.237875 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 23:55:48.246361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:55:48.253526 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 23:55:48.255875 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 23:55:48.257955 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 23:55:48.260759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 23:55:48.268634 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:55:48.278190 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 23:55:48.282033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:55:48.285474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:55:48.289786 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:55:48.292749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:55:48.294150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:55:48.297730 systemd-udevd[1312]: Using default interface naming scheme 'v255'. May 8 23:55:48.299861 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 8 23:55:48.302959 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 23:55:48.304879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:55:48.306209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:55:48.308094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:55:48.308247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:55:48.309801 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:55:48.309948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:55:48.316968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:55:48.328747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:55:48.335155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:55:48.338205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:55:48.341631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:55:48.344172 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 23:55:48.346601 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:55:48.348500 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 23:55:48.352954 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 23:55:48.354405 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 23:55:48.357420 augenrules[1347]: No rules May 8 23:55:48.358405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:55:48.358566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:55:48.361463 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 23:55:48.362947 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 23:55:48.364413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:55:48.364972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:55:48.375105 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:55:48.376955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:55:48.385061 systemd[1]: Finished ensure-sysext.service. May 8 23:55:48.391132 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 8 23:55:48.391525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:55:48.398170 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:55:48.400083 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 23:55:48.401880 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:55:48.403758 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:55:48.404859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:55:48.418169 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 8 23:55:48.425432 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 23:55:48.426963 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1344) May 8 23:55:48.428083 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 23:55:48.428565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:55:48.429950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:55:48.431382 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 23:55:48.431513 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 23:55:48.432877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:55:48.433214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:55:48.434499 systemd-resolved[1311]: Positive Trust Anchors: May 8 23:55:48.434516 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 23:55:48.434548 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 23:55:48.434852 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:55:48.437000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:55:48.441565 systemd-resolved[1311]: Defaulting to hostname 'linux'. May 8 23:55:48.445254 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 23:55:48.454882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 23:55:48.455791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 23:55:48.468094 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 23:55:48.469640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 23:55:48.469713 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 23:55:48.485707 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 23:55:48.492646 systemd-networkd[1383]: lo: Link UP May 8 23:55:48.492653 systemd-networkd[1383]: lo: Gained carrier May 8 23:55:48.493733 systemd-networkd[1383]: Enumeration completed May 8 23:55:48.493847 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 23:55:48.494209 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:55:48.494213 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 8 23:55:48.494864 systemd-networkd[1383]: eth0: Link UP May 8 23:55:48.494868 systemd-networkd[1383]: eth0: Gained carrier May 8 23:55:48.494880 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:55:48.495089 systemd[1]: Reached target network.target - Network. May 8 23:55:48.506299 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 23:55:48.507625 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 23:55:48.510573 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 23:55:48.511217 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. May 8 23:55:48.512075 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 23:55:48.512121 systemd-timesyncd[1387]: Initial clock synchronization to Thu 2025-05-08 23:55:48.263845 UTC. May 8 23:55:48.513858 systemd[1]: Reached target time-set.target - System Time Set. May 8 23:55:48.525096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:55:48.528066 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 23:55:48.531834 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 23:55:48.555800 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 23:55:48.575962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:55:48.587352 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 23:55:48.588561 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 23:55:48.591021 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:55:48.591856 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 23:55:48.592774 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 23:55:48.593962 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 23:55:48.594799 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 23:55:48.595729 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 23:55:48.596608 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 23:55:48.596639 systemd[1]: Reached target paths.target - Path Units. May 8 23:55:48.597465 systemd[1]: Reached target timers.target - Timer Units. May 8 23:55:48.598810 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 23:55:48.601024 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 23:55:48.611801 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 23:55:48.613890 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 23:55:48.615124 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 23:55:48.615984 systemd[1]: Reached target sockets.target - Socket Units. May 8 23:55:48.616646 systemd[1]: Reached target basic.target - Basic System. 
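eth0 above is matched by the shipped /usr/lib/systemd/network/zz-default.network and acquires 10.0.0.17/16 over DHCP, after which systemd-timesyncd synchronizes against 10.0.0.1. A machine-specific override would normally be an earlier-sorting .network file of roughly this shape (hypothetical; this boot simply used the default):

    # /etc/systemd/network/10-eth0.network -- hypothetical override, not present in this log
    [Match]
    Name=eth0

    [Network]
    DHCP=yes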
May 8 23:55:48.617359 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 23:55:48.617387 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 23:55:48.618207 systemd[1]: Starting containerd.service - containerd container runtime... May 8 23:55:48.619845 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 23:55:48.621368 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 23:55:48.623149 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 23:55:48.626166 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 23:55:48.630096 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 23:55:48.633110 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 23:55:48.635849 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 23:55:48.639172 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 23:55:48.642186 jq[1415]: false May 8 23:55:48.644227 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 23:55:48.646306 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 23:55:48.646851 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 23:55:48.648140 systemd[1]: Starting update-engine.service - Update Engine... May 8 23:55:48.653031 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 23:55:48.654900 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 23:55:48.660081 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 23:55:48.660251 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 23:55:48.662160 jq[1424]: true May 8 23:55:48.660575 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 23:55:48.660722 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 23:55:48.672072 dbus-daemon[1414]: [system] SELinux support is enabled May 8 23:55:48.673083 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 23:55:48.677309 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 23:55:48.677363 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 23:55:48.678804 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 23:55:48.678832 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 8 23:55:48.686389 jq[1429]: true May 8 23:55:48.687943 extend-filesystems[1416]: Found loop3 May 8 23:55:48.687943 extend-filesystems[1416]: Found loop4 May 8 23:55:48.687943 extend-filesystems[1416]: Found loop5 May 8 23:55:48.687943 extend-filesystems[1416]: Found vda May 8 23:55:48.687943 extend-filesystems[1416]: Found vda1 May 8 23:55:48.687943 extend-filesystems[1416]: Found vda2 May 8 23:55:48.687943 extend-filesystems[1416]: Found vda3 May 8 23:55:48.687943 extend-filesystems[1416]: Found usr May 8 23:55:48.687943 extend-filesystems[1416]: Found vda4 May 8 23:55:48.687943 extend-filesystems[1416]: Found vda6 May 8 23:55:48.687943 extend-filesystems[1416]: Found vda7 May 8 23:55:48.687943 extend-filesystems[1416]: Found vda9 May 8 23:55:48.687943 extend-filesystems[1416]: Checking size of /dev/vda9 May 8 23:55:48.690735 systemd[1]: motdgen.service: Deactivated successfully. May 8 23:55:48.693076 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 23:55:48.698571 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 23:55:48.710924 update_engine[1421]: I20250508 23:55:48.710655 1421 main.cc:92] Flatcar Update Engine starting May 8 23:55:48.711112 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) May 8 23:55:48.712625 systemd[1]: Started update-engine.service - Update Engine. May 8 23:55:48.714044 update_engine[1421]: I20250508 23:55:48.712646 1421 update_check_scheduler.cc:74] Next update check in 2m52s May 8 23:55:48.714068 extend-filesystems[1416]: Resized partition /dev/vda9 May 8 23:55:48.719778 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) May 8 23:55:48.720588 systemd-logind[1420]: New seat seat0. May 8 23:55:48.733073 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 23:55:48.732214 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 23:55:48.734011 systemd[1]: Started systemd-logind.service - User Login Management. May 8 23:55:48.739931 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1353) May 8 23:55:48.761943 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 23:55:48.782286 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 23:55:48.782286 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 23:55:48.782286 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 23:55:48.786410 extend-filesystems[1416]: Resized filesystem in /dev/vda9 May 8 23:55:48.788650 bash[1464]: Updated "/home/core/.ssh/authorized_keys" May 8 23:55:48.788686 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 23:55:48.788923 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 23:55:48.791020 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 23:55:48.793379 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
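extend-filesystems.service above first resizes the /dev/vda9 partition and then grows the mounted ext4 filesystem on it from 553472 to 1864699 4k blocks. The on-line grow step is plain resize2fs and can be reproduced manually once the partition has been enlarged:

    # Grow a mounted ext4 filesystem to fill its (already enlarged) partition
    resize2fs /dev/vda9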
May 8 23:55:48.797109 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 23:55:48.910929 containerd[1440]: time="2025-05-08T23:55:48.910837800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 23:55:48.933990 containerd[1440]: time="2025-05-08T23:55:48.933884120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 23:55:48.935363 containerd[1440]: time="2025-05-08T23:55:48.935305600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:48.935363 containerd[1440]: time="2025-05-08T23:55:48.935341040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 23:55:48.935363 containerd[1440]: time="2025-05-08T23:55:48.935356440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 23:55:48.935546 containerd[1440]: time="2025-05-08T23:55:48.935517760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 23:55:48.935546 containerd[1440]: time="2025-05-08T23:55:48.935541360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 23:55:48.935612 containerd[1440]: time="2025-05-08T23:55:48.935596520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:48.935632 containerd[1440]: time="2025-05-08T23:55:48.935613760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 23:55:48.935806 containerd[1440]: time="2025-05-08T23:55:48.935779880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:48.935806 containerd[1440]: time="2025-05-08T23:55:48.935800840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 23:55:48.935848 containerd[1440]: time="2025-05-08T23:55:48.935816800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:48.935848 containerd[1440]: time="2025-05-08T23:55:48.935827440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 23:55:48.935930 containerd[1440]: time="2025-05-08T23:55:48.935904520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 23:55:48.936147 containerd[1440]: time="2025-05-08T23:55:48.936126640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 23:55:48.936249 containerd[1440]: time="2025-05-08T23:55:48.936231960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:48.936272 containerd[1440]: time="2025-05-08T23:55:48.936255440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 23:55:48.936345 containerd[1440]: time="2025-05-08T23:55:48.936331600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 23:55:48.936387 containerd[1440]: time="2025-05-08T23:55:48.936374880Z" level=info msg="metadata content store policy set" policy=shared May 8 23:55:48.939726 containerd[1440]: time="2025-05-08T23:55:48.939650080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 23:55:48.939726 containerd[1440]: time="2025-05-08T23:55:48.939702040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 23:55:48.939726 containerd[1440]: time="2025-05-08T23:55:48.939721640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 23:55:48.939798 containerd[1440]: time="2025-05-08T23:55:48.939736200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 23:55:48.939798 containerd[1440]: time="2025-05-08T23:55:48.939749560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 23:55:48.939925 containerd[1440]: time="2025-05-08T23:55:48.939883120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 23:55:48.940142 containerd[1440]: time="2025-05-08T23:55:48.940121280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 23:55:48.940247 containerd[1440]: time="2025-05-08T23:55:48.940225280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 23:55:48.940247 containerd[1440]: time="2025-05-08T23:55:48.940246720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 23:55:48.940291 containerd[1440]: time="2025-05-08T23:55:48.940260040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 23:55:48.940291 containerd[1440]: time="2025-05-08T23:55:48.940273240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 23:55:48.940291 containerd[1440]: time="2025-05-08T23:55:48.940285320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 23:55:48.940352 containerd[1440]: time="2025-05-08T23:55:48.940299360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 23:55:48.940352 containerd[1440]: time="2025-05-08T23:55:48.940314600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 23:55:48.940352 containerd[1440]: time="2025-05-08T23:55:48.940328600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 8 23:55:48.940352 containerd[1440]: time="2025-05-08T23:55:48.940344600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 23:55:48.940418 containerd[1440]: time="2025-05-08T23:55:48.940356680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 23:55:48.940418 containerd[1440]: time="2025-05-08T23:55:48.940369160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 23:55:48.940418 containerd[1440]: time="2025-05-08T23:55:48.940388520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940418 containerd[1440]: time="2025-05-08T23:55:48.940401960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940418 containerd[1440]: time="2025-05-08T23:55:48.940414520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940502 containerd[1440]: time="2025-05-08T23:55:48.940426960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940502 containerd[1440]: time="2025-05-08T23:55:48.940439000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940502 containerd[1440]: time="2025-05-08T23:55:48.940454040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940502 containerd[1440]: time="2025-05-08T23:55:48.940469720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940502 containerd[1440]: time="2025-05-08T23:55:48.940483240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940502 containerd[1440]: time="2025-05-08T23:55:48.940495880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940605 containerd[1440]: time="2025-05-08T23:55:48.940509160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940605 containerd[1440]: time="2025-05-08T23:55:48.940520360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940605 containerd[1440]: time="2025-05-08T23:55:48.940531240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940605 containerd[1440]: time="2025-05-08T23:55:48.940542680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940605 containerd[1440]: time="2025-05-08T23:55:48.940557720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 23:55:48.940605 containerd[1440]: time="2025-05-08T23:55:48.940576120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 23:55:48.940605 containerd[1440]: time="2025-05-08T23:55:48.940593640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 8 23:55:48.940605 containerd[1440]: time="2025-05-08T23:55:48.940604920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 23:55:48.941271 containerd[1440]: time="2025-05-08T23:55:48.941247720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 23:55:48.941306 containerd[1440]: time="2025-05-08T23:55:48.941281440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 23:55:48.941306 containerd[1440]: time="2025-05-08T23:55:48.941293280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 23:55:48.941354 containerd[1440]: time="2025-05-08T23:55:48.941305280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 23:55:48.941354 containerd[1440]: time="2025-05-08T23:55:48.941315560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 23:55:48.941354 containerd[1440]: time="2025-05-08T23:55:48.941332400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 23:55:48.941354 containerd[1440]: time="2025-05-08T23:55:48.941342400Z" level=info msg="NRI interface is disabled by configuration." May 8 23:55:48.941354 containerd[1440]: time="2025-05-08T23:55:48.941353440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 23:55:48.941761 containerd[1440]: time="2025-05-08T23:55:48.941702840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 23:55:48.941884 containerd[1440]: time="2025-05-08T23:55:48.941762200Z" level=info msg="Connect containerd service" May 8 23:55:48.941884 containerd[1440]: time="2025-05-08T23:55:48.941796080Z" level=info msg="using legacy CRI server" May 8 23:55:48.941884 containerd[1440]: time="2025-05-08T23:55:48.941802800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 23:55:48.942688 containerd[1440]: time="2025-05-08T23:55:48.941997560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 23:55:48.943416 containerd[1440]: time="2025-05-08T23:55:48.943387320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:55:48.943719 containerd[1440]: time="2025-05-08T23:55:48.943600840Z" level=info msg="Start subscribing containerd event" May 8 23:55:48.943719 containerd[1440]: time="2025-05-08T23:55:48.943658720Z" level=info msg="Start recovering state" May 8 23:55:48.943774 containerd[1440]: time="2025-05-08T23:55:48.943732920Z" level=info msg="Start event monitor" May 8 23:55:48.943774 containerd[1440]: time="2025-05-08T23:55:48.943745000Z" level=info msg="Start snapshots syncer" May 8 23:55:48.943774 containerd[1440]: time="2025-05-08T23:55:48.943754000Z" level=info msg="Start cni network conf syncer for default" May 8 23:55:48.943774 containerd[1440]: time="2025-05-08T23:55:48.943761320Z" level=info msg="Start streaming server" May 8 23:55:48.944113 containerd[1440]: time="2025-05-08T23:55:48.944091560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 23:55:48.944232 containerd[1440]: time="2025-05-08T23:55:48.944216760Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 23:55:48.947614 systemd[1]: Started containerd.service - containerd container runtime. May 8 23:55:48.949155 containerd[1440]: time="2025-05-08T23:55:48.949133520Z" level=info msg="containerd successfully booted in 0.040935s" May 8 23:55:49.098967 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 23:55:49.117092 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 23:55:49.125319 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 23:55:49.130045 systemd[1]: issuegen.service: Deactivated successfully. May 8 23:55:49.131017 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 23:55:49.133871 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
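
The containerd daemon above finishes booting with the CRI plugin configured for the overlayfs snapshotter and a SystemdCgroup runc runtime, and reports serving on /run/containerd/containerd.sock. As a minimal sketch of talking to that socket with the containerd Go client (the "k8s.io" namespace is an assumption here; it is the namespace the CRI plugin conventionally uses for Kubernetes-managed resources, and the snippet only reads state):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Socket path taken from the "serving..." lines above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Assumption: CRI-managed resources live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        ver, err := client.Version(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("containerd:", ver.Version)

        images, err := client.ListImages(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range images {
            fmt.Println("image:", img.Name())
        }
    }
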
May 8 23:55:49.145342 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 23:55:49.159304 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 23:55:49.161271 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 23:55:49.162244 systemd[1]: Reached target getty.target - Login Prompts. May 8 23:55:49.711082 systemd-networkd[1383]: eth0: Gained IPv6LL May 8 23:55:49.713770 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 23:55:49.715594 systemd[1]: Reached target network-online.target - Network is Online. May 8 23:55:49.727169 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 23:55:49.729193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:49.731224 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 23:55:49.745138 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 23:55:49.745832 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 23:55:49.748038 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 23:55:49.750230 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 23:55:50.204971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:50.206417 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 23:55:50.207512 systemd[1]: Startup finished in 540ms (kernel) + 4.346s (initrd) + 3.192s (userspace) = 8.079s. May 8 23:55:50.208971 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:55:50.658812 kubelet[1521]: E0508 23:55:50.658705 1521 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:55:50.661258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:55:50.661404 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:55:55.897574 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 23:55:55.920416 systemd[1]: Started sshd@0-10.0.0.17:22-10.0.0.1:49408.service - OpenSSH per-connection server daemon (10.0.0.1:49408). May 8 23:55:55.975187 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 49408 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:55.977338 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:55.988648 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 23:55:55.998222 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 23:55:56.000387 systemd-logind[1420]: New session 1 of user core. May 8 23:55:56.008988 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 23:55:56.011371 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 23:55:56.017446 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 23:55:56.090943 systemd[1540]: Queued start job for default target default.target. 
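
The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits with status 1 and stays down until provisioning writes a configuration. Purely as a hedged illustration of what the missing file contains, the sketch below writes a minimal KubeletConfiguration; the cgroupDriver value mirrors the SystemdCgroup:true runc option in the containerd dump above, and on a kubeadm-managed node this file is generated for you rather than written by hand.

    package main

    import (
        "log"
        "os"
    )

    // Illustrative only: kubeadm normally generates this file during
    // "kubeadm init"/"kubeadm join"; the fields below are a minimal example.
    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    `

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(path); err == nil {
            log.Printf("%s already exists, leaving it alone", path)
            return
        }
        if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(path, []byte(kubeletConfig), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Printf("wrote minimal %s", path)
    }
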
May 8 23:55:56.099807 systemd[1540]: Created slice app.slice - User Application Slice. May 8 23:55:56.099836 systemd[1540]: Reached target paths.target - Paths. May 8 23:55:56.099848 systemd[1540]: Reached target timers.target - Timers. May 8 23:55:56.101307 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 23:55:56.113820 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 23:55:56.113960 systemd[1540]: Reached target sockets.target - Sockets. May 8 23:55:56.113979 systemd[1540]: Reached target basic.target - Basic System. May 8 23:55:56.114016 systemd[1540]: Reached target default.target - Main User Target. May 8 23:55:56.114043 systemd[1540]: Startup finished in 91ms. May 8 23:55:56.114187 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 23:55:56.115507 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 23:55:56.173193 systemd[1]: Started sshd@1-10.0.0.17:22-10.0.0.1:49414.service - OpenSSH per-connection server daemon (10.0.0.1:49414). May 8 23:55:56.205925 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 49414 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:56.207220 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:56.212725 systemd-logind[1420]: New session 2 of user core. May 8 23:55:56.223718 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 23:55:56.276181 sshd[1551]: pam_unix(sshd:session): session closed for user core May 8 23:55:56.285317 systemd[1]: sshd@1-10.0.0.17:22-10.0.0.1:49414.service: Deactivated successfully. May 8 23:55:56.288072 systemd[1]: session-2.scope: Deactivated successfully. May 8 23:55:56.289760 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. May 8 23:55:56.299358 systemd[1]: Started sshd@2-10.0.0.17:22-10.0.0.1:49424.service - OpenSSH per-connection server daemon (10.0.0.1:49424). May 8 23:55:56.300565 systemd-logind[1420]: Removed session 2. May 8 23:55:56.329116 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 49424 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:56.330346 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:56.334356 systemd-logind[1420]: New session 3 of user core. May 8 23:55:56.347073 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 23:55:56.393685 sshd[1558]: pam_unix(sshd:session): session closed for user core May 8 23:55:56.402130 systemd[1]: sshd@2-10.0.0.17:22-10.0.0.1:49424.service: Deactivated successfully. May 8 23:55:56.404131 systemd[1]: session-3.scope: Deactivated successfully. May 8 23:55:56.405621 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. May 8 23:55:56.415217 systemd[1]: Started sshd@3-10.0.0.17:22-10.0.0.1:49440.service - OpenSSH per-connection server daemon (10.0.0.1:49440). May 8 23:55:56.419882 systemd-logind[1420]: Removed session 3. May 8 23:55:56.446432 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 49440 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:56.447640 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:56.451996 systemd-logind[1420]: New session 4 of user core. May 8 23:55:56.460055 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 8 23:55:56.514415 sshd[1565]: pam_unix(sshd:session): session closed for user core May 8 23:55:56.524580 systemd[1]: sshd@3-10.0.0.17:22-10.0.0.1:49440.service: Deactivated successfully. May 8 23:55:56.526105 systemd[1]: session-4.scope: Deactivated successfully. May 8 23:55:56.527835 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. May 8 23:55:56.528985 systemd[1]: Started sshd@4-10.0.0.17:22-10.0.0.1:49452.service - OpenSSH per-connection server daemon (10.0.0.1:49452). May 8 23:55:56.532698 systemd-logind[1420]: Removed session 4. May 8 23:55:56.565620 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 49452 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:56.566965 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:56.571300 systemd-logind[1420]: New session 5 of user core. May 8 23:55:56.584095 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 23:55:56.659873 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 23:55:56.660186 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:55:56.674971 sudo[1575]: pam_unix(sudo:session): session closed for user root May 8 23:55:56.677056 sshd[1572]: pam_unix(sshd:session): session closed for user core May 8 23:55:56.684430 systemd[1]: sshd@4-10.0.0.17:22-10.0.0.1:49452.service: Deactivated successfully. May 8 23:55:56.686408 systemd[1]: session-5.scope: Deactivated successfully. May 8 23:55:56.688079 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. May 8 23:55:56.699262 systemd[1]: Started sshd@5-10.0.0.17:22-10.0.0.1:49456.service - OpenSSH per-connection server daemon (10.0.0.1:49456). May 8 23:55:56.705006 systemd-logind[1420]: Removed session 5. May 8 23:55:56.734990 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 49456 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:56.736270 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:56.740483 systemd-logind[1420]: New session 6 of user core. May 8 23:55:56.750055 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 23:55:56.800696 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 23:55:56.801006 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:55:56.803991 sudo[1584]: pam_unix(sudo:session): session closed for user root May 8 23:55:56.808699 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 23:55:56.811957 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:55:56.840188 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 23:55:56.842101 auditctl[1587]: No rules May 8 23:55:56.842791 systemd[1]: audit-rules.service: Deactivated successfully. May 8 23:55:56.843966 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 23:55:56.845831 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 23:55:56.871320 augenrules[1605]: No rules May 8 23:55:56.873968 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
May 8 23:55:56.875217 sudo[1583]: pam_unix(sudo:session): session closed for user root May 8 23:55:56.877118 sshd[1580]: pam_unix(sshd:session): session closed for user core May 8 23:55:56.888111 systemd[1]: sshd@5-10.0.0.17:22-10.0.0.1:49456.service: Deactivated successfully. May 8 23:55:56.889384 systemd[1]: session-6.scope: Deactivated successfully. May 8 23:55:56.892378 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. May 8 23:55:56.899199 systemd[1]: Started sshd@6-10.0.0.17:22-10.0.0.1:49464.service - OpenSSH per-connection server daemon (10.0.0.1:49464). May 8 23:55:56.900012 systemd-logind[1420]: Removed session 6. May 8 23:55:56.944290 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 49464 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:56.945741 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:56.949878 systemd-logind[1420]: New session 7 of user core. May 8 23:55:56.957080 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 23:55:57.007852 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 23:55:57.008461 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:55:57.025230 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 23:55:57.041177 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 23:55:57.041364 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 23:55:57.536542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:57.550132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:57.563730 systemd[1]: Reloading requested from client PID 1665 ('systemctl') (unit session-7.scope)... May 8 23:55:57.563746 systemd[1]: Reloading... May 8 23:55:57.633013 zram_generator::config[1706]: No configuration found. May 8 23:55:57.855816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:55:57.908297 systemd[1]: Reloading finished in 344 ms. May 8 23:55:57.945182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:57.948523 systemd[1]: kubelet.service: Deactivated successfully. May 8 23:55:57.948728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:57.950547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:58.044976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:58.048973 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:55:58.086955 kubelet[1750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:55:58.086955 kubelet[1750]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 23:55:58.086955 kubelet[1750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:55:58.088115 kubelet[1750]: I0508 23:55:58.088028 1750 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:55:58.551225 kubelet[1750]: I0508 23:55:58.551181 1750 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 23:55:58.551225 kubelet[1750]: I0508 23:55:58.551211 1750 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:55:58.551496 kubelet[1750]: I0508 23:55:58.551467 1750 server.go:927] "Client rotation is on, will bootstrap in background" May 8 23:55:58.606217 kubelet[1750]: I0508 23:55:58.606176 1750 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:55:58.616226 kubelet[1750]: I0508 23:55:58.616201 1750 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 23:55:58.617332 kubelet[1750]: I0508 23:55:58.617290 1750 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:55:58.617500 kubelet[1750]: I0508 23:55:58.617340 1750 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.17","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 23:55:58.617582 kubelet[1750]: I0508 23:55:58.617562 1750 topology_manager.go:138] "Creating topology manager with none policy" May 8 23:55:58.617582 kubelet[1750]: I0508 23:55:58.617571 1750 container_manager_linux.go:301] "Creating device plugin manager" May 8 23:55:58.617838 kubelet[1750]: I0508 23:55:58.617815 1750 state_mem.go:36] "Initialized new in-memory state store" May 8 23:55:58.618879 kubelet[1750]: I0508 23:55:58.618862 1750 kubelet.go:400] "Attempting to sync node with API server" May 8 23:55:58.618954 kubelet[1750]: I0508 23:55:58.618885 1750 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:55:58.619092 kubelet[1750]: I0508 
23:55:58.618993 1750 kubelet.go:312] "Adding apiserver pod source" May 8 23:55:58.619150 kubelet[1750]: I0508 23:55:58.619134 1750 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:55:58.619526 kubelet[1750]: E0508 23:55:58.619250 1750 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:55:58.619526 kubelet[1750]: E0508 23:55:58.619315 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:55:58.621799 kubelet[1750]: I0508 23:55:58.621774 1750 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 23:55:58.622295 kubelet[1750]: I0508 23:55:58.622277 1750 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:55:58.622452 kubelet[1750]: W0508 23:55:58.622440 1750 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 23:55:58.625239 kubelet[1750]: I0508 23:55:58.625216 1750 server.go:1264] "Started kubelet" May 8 23:55:58.626573 kubelet[1750]: I0508 23:55:58.626429 1750 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:55:58.627228 kubelet[1750]: I0508 23:55:58.626985 1750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:55:58.627228 kubelet[1750]: I0508 23:55:58.627056 1750 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:55:58.627228 kubelet[1750]: I0508 23:55:58.627096 1750 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:55:58.628772 kubelet[1750]: W0508 23:55:58.628733 1750 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 8 23:55:58.628908 kubelet[1750]: E0508 23:55:58.628885 1750 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 8 23:55:58.629079 kubelet[1750]: W0508 23:55:58.629054 1750 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 8 23:55:58.629121 kubelet[1750]: E0508 23:55:58.629080 1750 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.17" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 8 23:55:58.629561 kubelet[1750]: I0508 23:55:58.629542 1750 server.go:455] "Adding debug handlers to kubelet server" May 8 23:55:58.631249 kubelet[1750]: E0508 23:55:58.631227 1750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" May 8 23:55:58.631538 kubelet[1750]: I0508 23:55:58.631527 1750 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 23:55:58.631792 kubelet[1750]: I0508 23:55:58.631659 1750 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 23:55:58.633131 kubelet[1750]: I0508 23:55:58.633111 1750 
reconciler.go:26] "Reconciler: start to sync state" May 8 23:55:58.633447 kubelet[1750]: E0508 23:55:58.633423 1750 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:55:58.634420 kubelet[1750]: I0508 23:55:58.634072 1750 factory.go:221] Registration of the systemd container factory successfully May 8 23:55:58.634420 kubelet[1750]: I0508 23:55:58.634196 1750 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:55:58.635920 kubelet[1750]: I0508 23:55:58.635894 1750 factory.go:221] Registration of the containerd container factory successfully May 8 23:55:58.641446 kubelet[1750]: E0508 23:55:58.641408 1750 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.17\" not found" node="10.0.0.17" May 8 23:55:58.645216 kubelet[1750]: I0508 23:55:58.645192 1750 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 23:55:58.645216 kubelet[1750]: I0508 23:55:58.645209 1750 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 23:55:58.645331 kubelet[1750]: I0508 23:55:58.645233 1750 state_mem.go:36] "Initialized new in-memory state store" May 8 23:55:58.733321 kubelet[1750]: I0508 23:55:58.733287 1750 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.17" May 8 23:55:58.742694 kubelet[1750]: I0508 23:55:58.742661 1750 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.17" May 8 23:55:58.769467 kubelet[1750]: I0508 23:55:58.769425 1750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:55:58.770484 kubelet[1750]: I0508 23:55:58.770464 1750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 23:55:58.770726 kubelet[1750]: I0508 23:55:58.770714 1750 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 23:55:58.770964 kubelet[1750]: I0508 23:55:58.770950 1750 kubelet.go:2337] "Starting kubelet main sync loop" May 8 23:55:58.771483 kubelet[1750]: E0508 23:55:58.771456 1750 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:55:58.802148 kubelet[1750]: I0508 23:55:58.802042 1750 policy_none.go:49] "None policy: Start" May 8 23:55:58.803138 kubelet[1750]: I0508 23:55:58.803119 1750 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 23:55:58.803207 kubelet[1750]: I0508 23:55:58.803145 1750 state_mem.go:35] "Initializing new in-memory state store" May 8 23:55:58.826283 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 23:55:58.832552 kubelet[1750]: E0508 23:55:58.832490 1750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" May 8 23:55:58.841534 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 23:55:58.844266 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
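
Once "Successfully registered node" is logged for 10.0.0.17, the Node object can be read back through the API server with ordinary client-go calls. A small sketch, assuming an admin kubeconfig at the conventional /etc/kubernetes/admin.conf path (the log never says where this cluster keeps its credentials):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: an admin kubeconfig exists at this path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Node name taken from the registration messages above.
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "10.0.0.17", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(node.Name, node.Status.NodeInfo.KubeletVersion)
    }
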
May 8 23:55:58.854811 kubelet[1750]: I0508 23:55:58.854781 1750 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:55:58.855278 kubelet[1750]: I0508 23:55:58.854999 1750 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:55:58.855278 kubelet[1750]: I0508 23:55:58.855114 1750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:55:58.856178 kubelet[1750]: E0508 23:55:58.856150 1750 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.17\" not found" May 8 23:55:58.933506 kubelet[1750]: E0508 23:55:58.933470 1750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" May 8 23:55:59.033911 kubelet[1750]: E0508 23:55:59.033880 1750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" May 8 23:55:59.091937 sudo[1616]: pam_unix(sudo:session): session closed for user root May 8 23:55:59.093710 sshd[1613]: pam_unix(sshd:session): session closed for user core May 8 23:55:59.096955 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. May 8 23:55:59.101994 systemd[1]: sshd@6-10.0.0.17:22-10.0.0.1:49464.service: Deactivated successfully. May 8 23:55:59.104657 systemd[1]: session-7.scope: Deactivated successfully. May 8 23:55:59.105947 systemd-logind[1420]: Removed session 7. May 8 23:55:59.134948 kubelet[1750]: E0508 23:55:59.134869 1750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" May 8 23:55:59.235263 kubelet[1750]: E0508 23:55:59.235199 1750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" May 8 23:55:59.335766 kubelet[1750]: E0508 23:55:59.335723 1750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" May 8 23:55:59.436307 kubelet[1750]: E0508 23:55:59.436209 1750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" May 8 23:55:59.536727 kubelet[1750]: E0508 23:55:59.536687 1750 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" May 8 23:55:59.553829 kubelet[1750]: I0508 23:55:59.553780 1750 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 8 23:55:59.553950 kubelet[1750]: W0508 23:55:59.553936 1750 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 23:55:59.553978 kubelet[1750]: W0508 23:55:59.553957 1750 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 23:55:59.554048 kubelet[1750]: W0508 23:55:59.554019 1750 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 23:55:59.620138 kubelet[1750]: E0508 23:55:59.620062 1750 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:55:59.620201 kubelet[1750]: I0508 23:55:59.620182 1750 apiserver.go:52] "Watching apiserver" May 8 23:55:59.631899 kubelet[1750]: I0508 23:55:59.631847 1750 topology_manager.go:215] "Topology Admit Handler" podUID="8437f920-c74a-4eef-bd78-d7bcd92198eb" podNamespace="calico-system" podName="calico-node-j9dpd" May 8 23:55:59.631997 kubelet[1750]: I0508 23:55:59.631968 1750 topology_manager.go:215] "Topology Admit Handler" podUID="9b50feea-d4da-4a15-b913-3108b0a5c9cd" podNamespace="calico-system" podName="csi-node-driver-nt6hn" May 8 23:55:59.632086 kubelet[1750]: I0508 23:55:59.632060 1750 topology_manager.go:215] "Topology Admit Handler" podUID="51a9bffe-6eb8-4371-9e40-d754711631fc" podNamespace="kube-system" podName="kube-proxy-hrl2m" May 8 23:55:59.632293 kubelet[1750]: E0508 23:55:59.632254 1750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nt6hn" podUID="9b50feea-d4da-4a15-b913-3108b0a5c9cd" May 8 23:55:59.637550 kubelet[1750]: I0508 23:55:59.637446 1750 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 8 23:55:59.638171 containerd[1440]: time="2025-05-08T23:55:59.638103782Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 23:55:59.638433 kubelet[1750]: I0508 23:55:59.638261 1750 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 8 23:55:59.640815 systemd[1]: Created slice kubepods-besteffort-pod51a9bffe_6eb8_4371_9e40_d754711631fc.slice - libcontainer container kubepods-besteffort-pod51a9bffe_6eb8_4371_9e40_d754711631fc.slice. May 8 23:55:59.661325 systemd[1]: Created slice kubepods-besteffort-pod8437f920_c74a_4eef_bd78_d7bcd92198eb.slice - libcontainer container kubepods-besteffort-pod8437f920_c74a_4eef_bd78_d7bcd92198eb.slice. 
May 8 23:55:59.732683 kubelet[1750]: I0508 23:55:59.732579 1750 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 23:55:59.738449 kubelet[1750]: I0508 23:55:59.738413 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8437f920-c74a-4eef-bd78-d7bcd92198eb-lib-modules\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738502 kubelet[1750]: I0508 23:55:59.738452 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8437f920-c74a-4eef-bd78-d7bcd92198eb-policysync\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738502 kubelet[1750]: I0508 23:55:59.738472 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8437f920-c74a-4eef-bd78-d7bcd92198eb-node-certs\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738502 kubelet[1750]: I0508 23:55:59.738488 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8437f920-c74a-4eef-bd78-d7bcd92198eb-cni-bin-dir\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738502 kubelet[1750]: I0508 23:55:59.738502 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8437f920-c74a-4eef-bd78-d7bcd92198eb-cni-net-dir\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738728 kubelet[1750]: I0508 23:55:59.738519 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8437f920-c74a-4eef-bd78-d7bcd92198eb-flexvol-driver-host\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738759 kubelet[1750]: I0508 23:55:59.738738 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51a9bffe-6eb8-4371-9e40-d754711631fc-kube-proxy\") pod \"kube-proxy-hrl2m\" (UID: \"51a9bffe-6eb8-4371-9e40-d754711631fc\") " pod="kube-system/kube-proxy-hrl2m" May 8 23:55:59.738780 kubelet[1750]: I0508 23:55:59.738764 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51a9bffe-6eb8-4371-9e40-d754711631fc-lib-modules\") pod \"kube-proxy-hrl2m\" (UID: \"51a9bffe-6eb8-4371-9e40-d754711631fc\") " pod="kube-system/kube-proxy-hrl2m" May 8 23:55:59.738804 kubelet[1750]: I0508 23:55:59.738786 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlt6j\" (UniqueName: \"kubernetes.io/projected/8437f920-c74a-4eef-bd78-d7bcd92198eb-kube-api-access-hlt6j\") pod \"calico-node-j9dpd\" (UID: 
\"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738829 kubelet[1750]: I0508 23:55:59.738805 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b50feea-d4da-4a15-b913-3108b0a5c9cd-kubelet-dir\") pod \"csi-node-driver-nt6hn\" (UID: \"9b50feea-d4da-4a15-b913-3108b0a5c9cd\") " pod="calico-system/csi-node-driver-nt6hn" May 8 23:55:59.738852 kubelet[1750]: I0508 23:55:59.738834 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9b50feea-d4da-4a15-b913-3108b0a5c9cd-socket-dir\") pod \"csi-node-driver-nt6hn\" (UID: \"9b50feea-d4da-4a15-b913-3108b0a5c9cd\") " pod="calico-system/csi-node-driver-nt6hn" May 8 23:55:59.738874 kubelet[1750]: I0508 23:55:59.738850 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8437f920-c74a-4eef-bd78-d7bcd92198eb-xtables-lock\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738874 kubelet[1750]: I0508 23:55:59.738869 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8437f920-c74a-4eef-bd78-d7bcd92198eb-tigera-ca-bundle\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738934 kubelet[1750]: I0508 23:55:59.738888 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8437f920-c74a-4eef-bd78-d7bcd92198eb-var-run-calico\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738934 kubelet[1750]: I0508 23:55:59.738908 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8437f920-c74a-4eef-bd78-d7bcd92198eb-var-lib-calico\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.738985 kubelet[1750]: I0508 23:55:59.738947 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9b50feea-d4da-4a15-b913-3108b0a5c9cd-registration-dir\") pod \"csi-node-driver-nt6hn\" (UID: \"9b50feea-d4da-4a15-b913-3108b0a5c9cd\") " pod="calico-system/csi-node-driver-nt6hn" May 8 23:55:59.738985 kubelet[1750]: I0508 23:55:59.738965 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg89g\" (UniqueName: \"kubernetes.io/projected/9b50feea-d4da-4a15-b913-3108b0a5c9cd-kube-api-access-jg89g\") pod \"csi-node-driver-nt6hn\" (UID: \"9b50feea-d4da-4a15-b913-3108b0a5c9cd\") " pod="calico-system/csi-node-driver-nt6hn" May 8 23:55:59.739027 kubelet[1750]: I0508 23:55:59.738985 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51a9bffe-6eb8-4371-9e40-d754711631fc-xtables-lock\") pod \"kube-proxy-hrl2m\" (UID: \"51a9bffe-6eb8-4371-9e40-d754711631fc\") " 
pod="kube-system/kube-proxy-hrl2m" May 8 23:55:59.739027 kubelet[1750]: I0508 23:55:59.739004 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8437f920-c74a-4eef-bd78-d7bcd92198eb-cni-log-dir\") pod \"calico-node-j9dpd\" (UID: \"8437f920-c74a-4eef-bd78-d7bcd92198eb\") " pod="calico-system/calico-node-j9dpd" May 8 23:55:59.739027 kubelet[1750]: I0508 23:55:59.739023 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9b50feea-d4da-4a15-b913-3108b0a5c9cd-varrun\") pod \"csi-node-driver-nt6hn\" (UID: \"9b50feea-d4da-4a15-b913-3108b0a5c9cd\") " pod="calico-system/csi-node-driver-nt6hn" May 8 23:55:59.739086 kubelet[1750]: I0508 23:55:59.739042 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj9ld\" (UniqueName: \"kubernetes.io/projected/51a9bffe-6eb8-4371-9e40-d754711631fc-kube-api-access-jj9ld\") pod \"kube-proxy-hrl2m\" (UID: \"51a9bffe-6eb8-4371-9e40-d754711631fc\") " pod="kube-system/kube-proxy-hrl2m" May 8 23:55:59.845601 kubelet[1750]: E0508 23:55:59.845295 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.845601 kubelet[1750]: W0508 23:55:59.845317 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.845601 kubelet[1750]: E0508 23:55:59.845338 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 23:55:59.845601 kubelet[1750]: E0508 23:55:59.845509 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.845601 kubelet[1750]: W0508 23:55:59.845526 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.845601 kubelet[1750]: E0508 23:55:59.845543 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 23:55:59.845812 kubelet[1750]: E0508 23:55:59.845653 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.845812 kubelet[1750]: W0508 23:55:59.845660 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.845812 kubelet[1750]: E0508 23:55:59.845678 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 23:55:59.846417 kubelet[1750]: E0508 23:55:59.845901 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.846417 kubelet[1750]: W0508 23:55:59.845935 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.846417 kubelet[1750]: E0508 23:55:59.845946 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 23:55:59.846417 kubelet[1750]: E0508 23:55:59.846105 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.846417 kubelet[1750]: W0508 23:55:59.846112 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.846417 kubelet[1750]: E0508 23:55:59.846119 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 23:55:59.846417 kubelet[1750]: E0508 23:55:59.846377 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.846417 kubelet[1750]: W0508 23:55:59.846389 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.846417 kubelet[1750]: E0508 23:55:59.846407 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 23:55:59.846685 kubelet[1750]: E0508 23:55:59.846615 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.846685 kubelet[1750]: W0508 23:55:59.846623 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.846685 kubelet[1750]: E0508 23:55:59.846636 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 23:55:59.846787 kubelet[1750]: E0508 23:55:59.846777 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.846787 kubelet[1750]: W0508 23:55:59.846788 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.846839 kubelet[1750]: E0508 23:55:59.846799 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 23:55:59.846980 kubelet[1750]: E0508 23:55:59.846969 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.846980 kubelet[1750]: W0508 23:55:59.846980 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.847047 kubelet[1750]: E0508 23:55:59.846989 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 23:55:59.856180 kubelet[1750]: E0508 23:55:59.856135 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.856180 kubelet[1750]: W0508 23:55:59.856158 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.856311 kubelet[1750]: E0508 23:55:59.856196 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 23:55:59.856735 kubelet[1750]: E0508 23:55:59.856433 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.856735 kubelet[1750]: W0508 23:55:59.856446 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.856735 kubelet[1750]: E0508 23:55:59.856455 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 23:55:59.861337 kubelet[1750]: E0508 23:55:59.861302 1750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 23:55:59.861337 kubelet[1750]: W0508 23:55:59.861324 1750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 23:55:59.861337 kubelet[1750]: E0508 23:55:59.861341 1750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 23:55:59.961008 kubelet[1750]: E0508 23:55:59.960971 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:59.961837 containerd[1440]: time="2025-05-08T23:55:59.961753752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrl2m,Uid:51a9bffe-6eb8-4371-9e40-d754711631fc,Namespace:kube-system,Attempt:0,}" May 8 23:55:59.964000 kubelet[1750]: E0508 23:55:59.963949 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:59.964511 containerd[1440]: time="2025-05-08T23:55:59.964439296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j9dpd,Uid:8437f920-c74a-4eef-bd78-d7bcd92198eb,Namespace:calico-system,Attempt:0,}" May 8 23:56:00.532646 containerd[1440]: time="2025-05-08T23:56:00.532586367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:56:00.533882 containerd[1440]: time="2025-05-08T23:56:00.533835385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:56:00.535068 containerd[1440]: time="2025-05-08T23:56:00.535033846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:56:00.536177 containerd[1440]: time="2025-05-08T23:56:00.536142910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 8 23:56:00.536970 containerd[1440]: time="2025-05-08T23:56:00.536904274Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:56:00.540104 containerd[1440]: time="2025-05-08T23:56:00.540048700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:56:00.541004 containerd[1440]: time="2025-05-08T23:56:00.540968604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.135374ms" May 8 23:56:00.546235 containerd[1440]: time="2025-05-08T23:56:00.545253111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.663356ms" May 8 23:56:00.623873 kubelet[1750]: E0508 23:56:00.620656 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:00.649423 containerd[1440]: 
time="2025-05-08T23:56:00.648438068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:56:00.649423 containerd[1440]: time="2025-05-08T23:56:00.648874489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:56:00.649423 containerd[1440]: time="2025-05-08T23:56:00.648936960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:00.649423 containerd[1440]: time="2025-05-08T23:56:00.649084142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:00.652005 containerd[1440]: time="2025-05-08T23:56:00.651265454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:56:00.652005 containerd[1440]: time="2025-05-08T23:56:00.651326297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:56:00.652005 containerd[1440]: time="2025-05-08T23:56:00.651337377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:00.652005 containerd[1440]: time="2025-05-08T23:56:00.651420499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:00.742096 systemd[1]: Started cri-containerd-9b9b050ba9e8290a6f9a754022c2d6693a7ece36cfdc83bcd9a2974674c7e711.scope - libcontainer container 9b9b050ba9e8290a6f9a754022c2d6693a7ece36cfdc83bcd9a2974674c7e711. May 8 23:56:00.745147 systemd[1]: Started cri-containerd-2094053bfade46bbeb05993435bcd128eee1edf6de972dc20659c0a5b8673e6b.scope - libcontainer container 2094053bfade46bbeb05993435bcd128eee1edf6de972dc20659c0a5b8673e6b. May 8 23:56:00.764463 containerd[1440]: time="2025-05-08T23:56:00.764391676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j9dpd,Uid:8437f920-c74a-4eef-bd78-d7bcd92198eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b9b050ba9e8290a6f9a754022c2d6693a7ece36cfdc83bcd9a2974674c7e711\"" May 8 23:56:00.766099 kubelet[1750]: E0508 23:56:00.765751 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:00.772376 containerd[1440]: time="2025-05-08T23:56:00.772323075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrl2m,Uid:51a9bffe-6eb8-4371-9e40-d754711631fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2094053bfade46bbeb05993435bcd128eee1edf6de972dc20659c0a5b8673e6b\"" May 8 23:56:00.773606 containerd[1440]: time="2025-05-08T23:56:00.773549415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 23:56:00.773734 kubelet[1750]: E0508 23:56:00.773706 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:00.848362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035047776.mount: Deactivated successfully. 
May 8 23:56:01.621575 kubelet[1750]: E0508 23:56:01.621530 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:01.684209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618011672.mount: Deactivated successfully. May 8 23:56:01.738338 containerd[1440]: time="2025-05-08T23:56:01.738161684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:01.739378 containerd[1440]: time="2025-05-08T23:56:01.739329740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6492223" May 8 23:56:01.740221 containerd[1440]: time="2025-05-08T23:56:01.740175900Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:01.741969 containerd[1440]: time="2025-05-08T23:56:01.741895648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:01.742704 containerd[1440]: time="2025-05-08T23:56:01.742669781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 969.077861ms" May 8 23:56:01.742765 containerd[1440]: time="2025-05-08T23:56:01.742704562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 8 23:56:01.744451 containerd[1440]: time="2025-05-08T23:56:01.744278467Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 23:56:01.745138 containerd[1440]: time="2025-05-08T23:56:01.745088454Z" level=info msg="CreateContainer within sandbox \"9b9b050ba9e8290a6f9a754022c2d6693a7ece36cfdc83bcd9a2974674c7e711\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 23:56:01.758144 containerd[1440]: time="2025-05-08T23:56:01.758096073Z" level=info msg="CreateContainer within sandbox \"9b9b050ba9e8290a6f9a754022c2d6693a7ece36cfdc83bcd9a2974674c7e711\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e553db3640874a54c139e034e668d8c70b5b84ed5e4f584aec762d0535a2ddd3\"" May 8 23:56:01.759307 containerd[1440]: time="2025-05-08T23:56:01.758728378Z" level=info msg="StartContainer for \"e553db3640874a54c139e034e668d8c70b5b84ed5e4f584aec762d0535a2ddd3\"" May 8 23:56:01.771744 kubelet[1750]: E0508 23:56:01.771692 1750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nt6hn" podUID="9b50feea-d4da-4a15-b913-3108b0a5c9cd" May 8 23:56:01.786121 systemd[1]: Started cri-containerd-e553db3640874a54c139e034e668d8c70b5b84ed5e4f584aec762d0535a2ddd3.scope - libcontainer container 
e553db3640874a54c139e034e668d8c70b5b84ed5e4f584aec762d0535a2ddd3. May 8 23:56:01.812255 containerd[1440]: time="2025-05-08T23:56:01.812216609Z" level=info msg="StartContainer for \"e553db3640874a54c139e034e668d8c70b5b84ed5e4f584aec762d0535a2ddd3\" returns successfully" May 8 23:56:01.840872 systemd[1]: cri-containerd-e553db3640874a54c139e034e668d8c70b5b84ed5e4f584aec762d0535a2ddd3.scope: Deactivated successfully. May 8 23:56:01.863427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e553db3640874a54c139e034e668d8c70b5b84ed5e4f584aec762d0535a2ddd3-rootfs.mount: Deactivated successfully. May 8 23:56:01.891781 containerd[1440]: time="2025-05-08T23:56:01.891651868Z" level=info msg="shim disconnected" id=e553db3640874a54c139e034e668d8c70b5b84ed5e4f584aec762d0535a2ddd3 namespace=k8s.io May 8 23:56:01.891781 containerd[1440]: time="2025-05-08T23:56:01.891711175Z" level=warning msg="cleaning up after shim disconnected" id=e553db3640874a54c139e034e668d8c70b5b84ed5e4f584aec762d0535a2ddd3 namespace=k8s.io May 8 23:56:01.891781 containerd[1440]: time="2025-05-08T23:56:01.891720755Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:02.622600 kubelet[1750]: E0508 23:56:02.622553 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:02.744099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376171333.mount: Deactivated successfully. May 8 23:56:02.786428 kubelet[1750]: E0508 23:56:02.786247 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:02.967896 containerd[1440]: time="2025-05-08T23:56:02.967778963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:02.969070 containerd[1440]: time="2025-05-08T23:56:02.968889141Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 8 23:56:02.969946 containerd[1440]: time="2025-05-08T23:56:02.969904161Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:02.972432 containerd[1440]: time="2025-05-08T23:56:02.972377684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:02.973002 containerd[1440]: time="2025-05-08T23:56:02.972966408Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.228650648s" May 8 23:56:02.973067 containerd[1440]: time="2025-05-08T23:56:02.973002649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 8 23:56:02.974935 containerd[1440]: time="2025-05-08T23:56:02.974842177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 23:56:02.975754 containerd[1440]: time="2025-05-08T23:56:02.975718600Z" 
level=info msg="CreateContainer within sandbox \"2094053bfade46bbeb05993435bcd128eee1edf6de972dc20659c0a5b8673e6b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 23:56:02.986470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936213628.mount: Deactivated successfully. May 8 23:56:02.989313 containerd[1440]: time="2025-05-08T23:56:02.989266965Z" level=info msg="CreateContainer within sandbox \"2094053bfade46bbeb05993435bcd128eee1edf6de972dc20659c0a5b8673e6b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b04180c9267b51b6defc82721a7532f06498e23b7ace94c3ed7d3869881b8ab3\"" May 8 23:56:02.989894 containerd[1440]: time="2025-05-08T23:56:02.989864839Z" level=info msg="StartContainer for \"b04180c9267b51b6defc82721a7532f06498e23b7ace94c3ed7d3869881b8ab3\"" May 8 23:56:03.019151 systemd[1]: Started cri-containerd-b04180c9267b51b6defc82721a7532f06498e23b7ace94c3ed7d3869881b8ab3.scope - libcontainer container b04180c9267b51b6defc82721a7532f06498e23b7ace94c3ed7d3869881b8ab3. May 8 23:56:03.045743 containerd[1440]: time="2025-05-08T23:56:03.045676430Z" level=info msg="StartContainer for \"b04180c9267b51b6defc82721a7532f06498e23b7ace94c3ed7d3869881b8ab3\" returns successfully" May 8 23:56:03.623421 kubelet[1750]: E0508 23:56:03.623362 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:03.772381 kubelet[1750]: E0508 23:56:03.772340 1750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nt6hn" podUID="9b50feea-d4da-4a15-b913-3108b0a5c9cd" May 8 23:56:03.788639 kubelet[1750]: E0508 23:56:03.788609 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:03.805152 kubelet[1750]: I0508 23:56:03.804973 1750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hrl2m" podStartSLOduration=3.605095764 podStartE2EDuration="5.804952636s" podCreationTimestamp="2025-05-08 23:55:58 +0000 UTC" firstStartedPulling="2025-05-08 23:56:00.774149539 +0000 UTC m=+2.722010325" lastFinishedPulling="2025-05-08 23:56:02.974006372 +0000 UTC m=+4.921867197" observedRunningTime="2025-05-08 23:56:03.804770352 +0000 UTC m=+5.752631138" watchObservedRunningTime="2025-05-08 23:56:03.804952636 +0000 UTC m=+5.752813462" May 8 23:56:04.623670 kubelet[1750]: E0508 23:56:04.623628 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:04.790505 kubelet[1750]: E0508 23:56:04.790465 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:05.624761 kubelet[1750]: E0508 23:56:05.624658 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:05.772999 kubelet[1750]: E0508 23:56:05.772606 1750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-nt6hn" podUID="9b50feea-d4da-4a15-b913-3108b0a5c9cd" May 8 23:56:05.889587 containerd[1440]: time="2025-05-08T23:56:05.889443673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:05.890205 containerd[1440]: time="2025-05-08T23:56:05.890162192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 8 23:56:05.891411 containerd[1440]: time="2025-05-08T23:56:05.891371387Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:05.893773 containerd[1440]: time="2025-05-08T23:56:05.893728124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:05.894600 containerd[1440]: time="2025-05-08T23:56:05.894568874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.9196937s" May 8 23:56:05.894654 containerd[1440]: time="2025-05-08T23:56:05.894606217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 8 23:56:05.896904 containerd[1440]: time="2025-05-08T23:56:05.896863081Z" level=info msg="CreateContainer within sandbox \"9b9b050ba9e8290a6f9a754022c2d6693a7ece36cfdc83bcd9a2974674c7e711\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 23:56:05.909677 containerd[1440]: time="2025-05-08T23:56:05.909538288Z" level=info msg="CreateContainer within sandbox \"9b9b050ba9e8290a6f9a754022c2d6693a7ece36cfdc83bcd9a2974674c7e711\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b48a9b78691bd5c3ec8094abadf2e6d83bdd198c6943da0e95dadc2af205a931\"" May 8 23:56:05.910369 containerd[1440]: time="2025-05-08T23:56:05.910337989Z" level=info msg="StartContainer for \"b48a9b78691bd5c3ec8094abadf2e6d83bdd198c6943da0e95dadc2af205a931\"" May 8 23:56:05.941084 systemd[1]: Started cri-containerd-b48a9b78691bd5c3ec8094abadf2e6d83bdd198c6943da0e95dadc2af205a931.scope - libcontainer container b48a9b78691bd5c3ec8094abadf2e6d83bdd198c6943da0e95dadc2af205a931. May 8 23:56:05.962742 containerd[1440]: time="2025-05-08T23:56:05.962686205Z" level=info msg="StartContainer for \"b48a9b78691bd5c3ec8094abadf2e6d83bdd198c6943da0e95dadc2af205a931\" returns successfully" May 8 23:56:06.493865 containerd[1440]: time="2025-05-08T23:56:06.493805252Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:56:06.496165 systemd[1]: cri-containerd-b48a9b78691bd5c3ec8094abadf2e6d83bdd198c6943da0e95dadc2af205a931.scope: Deactivated successfully. 
May 8 23:56:06.497645 kubelet[1750]: I0508 23:56:06.497619 1750 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 23:56:06.514741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b48a9b78691bd5c3ec8094abadf2e6d83bdd198c6943da0e95dadc2af205a931-rootfs.mount: Deactivated successfully. May 8 23:56:06.625158 kubelet[1750]: E0508 23:56:06.625114 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:06.781484 containerd[1440]: time="2025-05-08T23:56:06.781352539Z" level=info msg="shim disconnected" id=b48a9b78691bd5c3ec8094abadf2e6d83bdd198c6943da0e95dadc2af205a931 namespace=k8s.io May 8 23:56:06.781484 containerd[1440]: time="2025-05-08T23:56:06.781407163Z" level=warning msg="cleaning up after shim disconnected" id=b48a9b78691bd5c3ec8094abadf2e6d83bdd198c6943da0e95dadc2af205a931 namespace=k8s.io May 8 23:56:06.781484 containerd[1440]: time="2025-05-08T23:56:06.781415417Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:06.796286 kubelet[1750]: E0508 23:56:06.796173 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:06.797267 containerd[1440]: time="2025-05-08T23:56:06.797051030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 23:56:07.627776 kubelet[1750]: E0508 23:56:07.626603 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:07.784556 systemd[1]: Created slice kubepods-besteffort-pod9b50feea_d4da_4a15_b913_3108b0a5c9cd.slice - libcontainer container kubepods-besteffort-pod9b50feea_d4da_4a15_b913_3108b0a5c9cd.slice. 
May 8 23:56:07.788269 containerd[1440]: time="2025-05-08T23:56:07.788223148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nt6hn,Uid:9b50feea-d4da-4a15-b913-3108b0a5c9cd,Namespace:calico-system,Attempt:0,}" May 8 23:56:07.958796 containerd[1440]: time="2025-05-08T23:56:07.958590931Z" level=error msg="Failed to destroy network for sandbox \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 23:56:07.959502 containerd[1440]: time="2025-05-08T23:56:07.959426501Z" level=error msg="encountered an error cleaning up failed sandbox \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 23:56:07.959648 containerd[1440]: time="2025-05-08T23:56:07.959594429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nt6hn,Uid:9b50feea-d4da-4a15-b913-3108b0a5c9cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 23:56:07.960255 kubelet[1750]: E0508 23:56:07.959887 1750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 23:56:07.960255 kubelet[1750]: E0508 23:56:07.959975 1750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nt6hn" May 8 23:56:07.960255 kubelet[1750]: E0508 23:56:07.959998 1750 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nt6hn" May 8 23:56:07.960403 kubelet[1750]: E0508 23:56:07.960037 1750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nt6hn_calico-system(9b50feea-d4da-4a15-b913-3108b0a5c9cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nt6hn_calico-system(9b50feea-d4da-4a15-b913-3108b0a5c9cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nt6hn" podUID="9b50feea-d4da-4a15-b913-3108b0a5c9cd" May 8 23:56:07.960370 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4-shm.mount: Deactivated successfully. May 8 23:56:08.627021 kubelet[1750]: E0508 23:56:08.626982 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:08.802488 kubelet[1750]: I0508 23:56:08.801961 1750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" May 8 23:56:08.803767 containerd[1440]: time="2025-05-08T23:56:08.803349614Z" level=info msg="StopPodSandbox for \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\"" May 8 23:56:08.803767 containerd[1440]: time="2025-05-08T23:56:08.803537592Z" level=info msg="Ensure that sandbox 16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4 in task-service has been cleanup successfully" May 8 23:56:08.831475 containerd[1440]: time="2025-05-08T23:56:08.831424562Z" level=error msg="StopPodSandbox for \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\" failed" error="failed to destroy network for sandbox \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 23:56:08.831894 kubelet[1750]: E0508 23:56:08.831835 1750 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" May 8 23:56:08.831980 kubelet[1750]: E0508 23:56:08.831900 1750 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4"} May 8 23:56:08.831980 kubelet[1750]: E0508 23:56:08.831967 1750 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b50feea-d4da-4a15-b913-3108b0a5c9cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 23:56:08.832068 kubelet[1750]: E0508 23:56:08.831990 1750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b50feea-d4da-4a15-b913-3108b0a5c9cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nt6hn" podUID="9b50feea-d4da-4a15-b913-3108b0a5c9cd" May 8 23:56:09.593009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559187950.mount: Deactivated successfully. May 8 23:56:09.628015 kubelet[1750]: E0508 23:56:09.627970 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:09.880681 containerd[1440]: time="2025-05-08T23:56:09.880416575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:09.881384 containerd[1440]: time="2025-05-08T23:56:09.881038118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 8 23:56:09.881909 containerd[1440]: time="2025-05-08T23:56:09.881847337Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:09.883630 containerd[1440]: time="2025-05-08T23:56:09.883595576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:09.884136 containerd[1440]: time="2025-05-08T23:56:09.884109869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.087013693s" May 8 23:56:09.884187 containerd[1440]: time="2025-05-08T23:56:09.884143557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 8 23:56:09.896556 containerd[1440]: time="2025-05-08T23:56:09.896520369Z" level=info msg="CreateContainer within sandbox \"9b9b050ba9e8290a6f9a754022c2d6693a7ece36cfdc83bcd9a2974674c7e711\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 23:56:09.932838 containerd[1440]: time="2025-05-08T23:56:09.932695540Z" level=info msg="CreateContainer within sandbox \"9b9b050ba9e8290a6f9a754022c2d6693a7ece36cfdc83bcd9a2974674c7e711\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1712e7377a4f56a64d59c012600c4c7a6a2fd212174e4331ef36d973c6bdb3e7\"" May 8 23:56:09.933275 containerd[1440]: time="2025-05-08T23:56:09.933211071Z" level=info msg="StartContainer for \"1712e7377a4f56a64d59c012600c4c7a6a2fd212174e4331ef36d973c6bdb3e7\"" May 8 23:56:09.961084 systemd[1]: Started cri-containerd-1712e7377a4f56a64d59c012600c4c7a6a2fd212174e4331ef36d973c6bdb3e7.scope - libcontainer container 1712e7377a4f56a64d59c012600c4c7a6a2fd212174e4331ef36d973c6bdb3e7. 
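The sandbox failures for csi-node-driver-nt6hn above all trip over the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes only once it is running with /var/lib/calico mounted from the host, so both the add and the later delete keep failing until the calico-node container started here comes up. A small illustrative gate on that file (the polling loop is a sketch of the dependency, not Calico's code):

    import time

    NODENAME_FILE = "/var/lib/calico/nodename"  # written by calico/node once it is up

    def wait_for_calico_node(timeout_s: float = 60.0, interval_s: float = 2.0) -> str:
        """Poll until calico/node has published the node name the CNI plugin needs."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                with open(NODENAME_FILE) as f:
                    return f.read().strip()
            except FileNotFoundError:
                time.sleep(interval_s)
        raise TimeoutError(f"stat {NODENAME_FILE}: no such file or directory")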
May 8 23:56:09.983514 containerd[1440]: time="2025-05-08T23:56:09.983467348Z" level=info msg="StartContainer for \"1712e7377a4f56a64d59c012600c4c7a6a2fd212174e4331ef36d973c6bdb3e7\" returns successfully" May 8 23:56:10.033193 kubelet[1750]: I0508 23:56:10.032484 1750 topology_manager.go:215] "Topology Admit Handler" podUID="fc3766b3-9992-4bee-99ca-85418057f370" podNamespace="default" podName="nginx-deployment-85f456d6dd-jdwdv" May 8 23:56:10.038064 systemd[1]: Created slice kubepods-besteffort-podfc3766b3_9992_4bee_99ca_85418057f370.slice - libcontainer container kubepods-besteffort-podfc3766b3_9992_4bee_99ca_85418057f370.slice. May 8 23:56:10.138942 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 23:56:10.139142 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 23:56:10.224835 kubelet[1750]: I0508 23:56:10.224789 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9mt\" (UniqueName: \"kubernetes.io/projected/fc3766b3-9992-4bee-99ca-85418057f370-kube-api-access-xg9mt\") pod \"nginx-deployment-85f456d6dd-jdwdv\" (UID: \"fc3766b3-9992-4bee-99ca-85418057f370\") " pod="default/nginx-deployment-85f456d6dd-jdwdv" May 8 23:56:10.341794 containerd[1440]: time="2025-05-08T23:56:10.341750874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jdwdv,Uid:fc3766b3-9992-4bee-99ca-85418057f370,Namespace:default,Attempt:0,}" May 8 23:56:10.496603 systemd-networkd[1383]: cali0f835bc353e: Link UP May 8 23:56:10.497066 systemd-networkd[1383]: cali0f835bc353e: Gained carrier May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.370 [INFO][2327] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.386 [INFO][2327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0 nginx-deployment-85f456d6dd- default fc3766b3-9992-4bee-99ca-85418057f370 899 0 2025-05-08 23:56:10 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.17 nginx-deployment-85f456d6dd-jdwdv eth0 default [] [] [kns.default ksa.default.default] cali0f835bc353e [] []}} ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Namespace="default" Pod="nginx-deployment-85f456d6dd-jdwdv" WorkloadEndpoint="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.386 [INFO][2327] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Namespace="default" Pod="nginx-deployment-85f456d6dd-jdwdv" WorkloadEndpoint="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.453 [INFO][2353] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" HandleID="k8s-pod-network.417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Workload="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.465 [INFO][2353] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" HandleID="k8s-pod-network.417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Workload="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003318e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.17", "pod":"nginx-deployment-85f456d6dd-jdwdv", "timestamp":"2025-05-08 23:56:10.45367096 +0000 UTC"}, Hostname:"10.0.0.17", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.465 [INFO][2353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.465 [INFO][2353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.465 [INFO][2353] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.17' May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.466 [INFO][2353] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" host="10.0.0.17" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.470 [INFO][2353] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.17" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.474 [INFO][2353] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="10.0.0.17" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.476 [INFO][2353] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="10.0.0.17" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.478 [INFO][2353] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="10.0.0.17" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.478 [INFO][2353] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" host="10.0.0.17" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.480 [INFO][2353] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3 May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.483 [INFO][2353] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" host="10.0.0.17" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.488 [INFO][2353] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.193/26] block=192.168.69.192/26 handle="k8s-pod-network.417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" host="10.0.0.17" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.488 [INFO][2353] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.193/26] handle="k8s-pod-network.417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" host="10.0.0.17" May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.488 [INFO][2353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 23:56:10.505909 containerd[1440]: 2025-05-08 23:56:10.488 [INFO][2353] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.193/26] IPv6=[] ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" HandleID="k8s-pod-network.417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Workload="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" May 8 23:56:10.506542 containerd[1440]: 2025-05-08 23:56:10.491 [INFO][2327] cni-plugin/k8s.go 386: Populated endpoint ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Namespace="default" Pod="nginx-deployment-85f456d6dd-jdwdv" WorkloadEndpoint="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"fc3766b3-9992-4bee-99ca-85418057f370", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 23, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.17", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-jdwdv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0f835bc353e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 23:56:10.506542 containerd[1440]: 2025-05-08 23:56:10.491 [INFO][2327] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.193/32] ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Namespace="default" Pod="nginx-deployment-85f456d6dd-jdwdv" WorkloadEndpoint="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" May 8 23:56:10.506542 containerd[1440]: 2025-05-08 23:56:10.491 [INFO][2327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f835bc353e ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Namespace="default" Pod="nginx-deployment-85f456d6dd-jdwdv" WorkloadEndpoint="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" May 8 23:56:10.506542 containerd[1440]: 2025-05-08 23:56:10.496 [INFO][2327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Namespace="default" Pod="nginx-deployment-85f456d6dd-jdwdv" WorkloadEndpoint="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" May 8 23:56:10.506542 containerd[1440]: 2025-05-08 23:56:10.497 [INFO][2327] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Namespace="default" Pod="nginx-deployment-85f456d6dd-jdwdv" WorkloadEndpoint="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"fc3766b3-9992-4bee-99ca-85418057f370", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 23, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.17", ContainerID:"417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3", Pod:"nginx-deployment-85f456d6dd-jdwdv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0f835bc353e", MAC:"6e:e3:f5:de:a9:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 23:56:10.506542 containerd[1440]: 2025-05-08 23:56:10.503 [INFO][2327] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3" Namespace="default" Pod="nginx-deployment-85f456d6dd-jdwdv" WorkloadEndpoint="10.0.0.17-k8s-nginx--deployment--85f456d6dd--jdwdv-eth0" May 8 23:56:10.520908 containerd[1440]: time="2025-05-08T23:56:10.520480384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:56:10.520908 containerd[1440]: time="2025-05-08T23:56:10.520873764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:56:10.520908 containerd[1440]: time="2025-05-08T23:56:10.520886580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:10.521165 containerd[1440]: time="2025-05-08T23:56:10.520991262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:10.538090 systemd[1]: Started cri-containerd-417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3.scope - libcontainer container 417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3. 
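The IPAM trace above is Calico assigning the nginx pod its address: node 10.0.0.17 already holds an affinity for the block 192.168.69.192/26, the host-wide IPAM lock is taken, and 192.168.69.193 is claimed and written back as a /32 on the cali0f835bc353e endpoint; the nfs-server-provisioner pod created later on this node draws 192.168.69.194 from the same block. The block arithmetic checks out with the standard library:

    import ipaddress

    block = ipaddress.ip_network("192.168.69.192/26")       # block with affinity to node 10.0.0.17
    print(block.num_addresses)                               # 64 addresses, .192 through .255
    print(ipaddress.ip_address("192.168.69.193") in block)   # True - nginx-deployment-85f456d6dd-jdwdv
    print(ipaddress.ip_address("192.168.69.194") in block)   # True - nfs-server-provisioner-0 (later)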
May 8 23:56:10.548274 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:56:10.564065 containerd[1440]: time="2025-05-08T23:56:10.564000359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jdwdv,Uid:fc3766b3-9992-4bee-99ca-85418057f370,Namespace:default,Attempt:0,} returns sandbox id \"417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3\"" May 8 23:56:10.565678 containerd[1440]: time="2025-05-08T23:56:10.565573598Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 23:56:10.628860 kubelet[1750]: E0508 23:56:10.628818 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:10.808665 kubelet[1750]: E0508 23:56:10.807446 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:10.819841 kubelet[1750]: I0508 23:56:10.819764 1750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j9dpd" podStartSLOduration=3.707705657 podStartE2EDuration="12.819746763s" podCreationTimestamp="2025-05-08 23:55:58 +0000 UTC" firstStartedPulling="2025-05-08 23:56:00.77312058 +0000 UTC m=+2.720981366" lastFinishedPulling="2025-05-08 23:56:09.885161647 +0000 UTC m=+11.833022472" observedRunningTime="2025-05-08 23:56:10.819442216 +0000 UTC m=+12.767303042" watchObservedRunningTime="2025-05-08 23:56:10.819746763 +0000 UTC m=+12.767607589" May 8 23:56:11.618032 kernel: bpftool[2552]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 23:56:11.629788 kubelet[1750]: E0508 23:56:11.629423 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:11.790237 systemd-networkd[1383]: vxlan.calico: Link UP May 8 23:56:11.790245 systemd-networkd[1383]: vxlan.calico: Gained carrier May 8 23:56:11.811425 kubelet[1750]: I0508 23:56:11.811205 1750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 23:56:11.813436 kubelet[1750]: E0508 23:56:11.813079 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:12.112037 systemd-networkd[1383]: cali0f835bc353e: Gained IPv6LL May 8 23:56:12.395108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566185659.mount: Deactivated successfully. 
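The pod_startup_latency_tracker entry for calico-node-j9dpd above follows a simple relationship: podStartE2EDuration is the time from pod creation to the pod being observed running, and podStartSLOduration is that figure minus the time spent pulling images, taken from the monotonic m=+... offsets rather than the wall-clock timestamps. The logged numbers reproduce exactly:

    # Reproducing the calico-node-j9dpd startup-latency figures from the log line above.
    e2e = 12.819746763                   # podStartE2EDuration, seconds
    first_pull = 2.720981366             # firstStartedPulling, monotonic offset (m=+...)
    last_pull = 11.833022472             # lastFinishedPulling, monotonic offset (m=+...)
    pull_time = last_pull - first_pull   # 9.112041106 s spent pulling images
    print(round(e2e - pull_time, 9))     # 3.707705657 == podStartSLOduration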
May 8 23:56:12.630270 kubelet[1750]: E0508 23:56:12.630193 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:13.199037 systemd-networkd[1383]: vxlan.calico: Gained IPv6LL May 8 23:56:13.299691 containerd[1440]: time="2025-05-08T23:56:13.299321823Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 8 23:56:13.304162 containerd[1440]: time="2025-05-08T23:56:13.304125290Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.7384373s" May 8 23:56:13.304162 containerd[1440]: time="2025-05-08T23:56:13.304163003Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 8 23:56:13.306938 containerd[1440]: time="2025-05-08T23:56:13.306206708Z" level=info msg="CreateContainer within sandbox \"417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 8 23:56:13.320107 containerd[1440]: time="2025-05-08T23:56:13.320062449Z" level=info msg="CreateContainer within sandbox \"417455a575325b3e4600c396527399148422a76e504dd9470aaebfd9ea4c94e3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"df616bea0b3365a8640c4a24a00ceb14758222b9afaa721ee995d6848451b07a\"" May 8 23:56:13.320910 containerd[1440]: time="2025-05-08T23:56:13.320649190Z" level=info msg="StartContainer for \"df616bea0b3365a8640c4a24a00ceb14758222b9afaa721ee995d6848451b07a\"" May 8 23:56:13.325339 containerd[1440]: time="2025-05-08T23:56:13.325288225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:13.326400 containerd[1440]: time="2025-05-08T23:56:13.326370661Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:13.327411 containerd[1440]: time="2025-05-08T23:56:13.327383824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:13.410069 systemd[1]: Started cri-containerd-df616bea0b3365a8640c4a24a00ceb14758222b9afaa721ee995d6848451b07a.scope - libcontainer container df616bea0b3365a8640c4a24a00ceb14758222b9afaa721ee995d6848451b07a. 
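As a rough throughput figure, the nginx image pull logged above moved about 69.9 MB ("bytes read=69948859") in 2.7384373 s, i.e. roughly 25 MB/s from ghcr.io:

    bytes_read = 69_948_859      # "bytes read" for ghcr.io/flatcar/nginx:latest
    seconds = 2.7384373          # pull duration reported in the log
    print(f"{bytes_read / seconds / 1e6:.1f} MB/s")   # ~25.5 MB/s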
May 8 23:56:13.471455 containerd[1440]: time="2025-05-08T23:56:13.471309836Z" level=info msg="StartContainer for \"df616bea0b3365a8640c4a24a00ceb14758222b9afaa721ee995d6848451b07a\" returns successfully" May 8 23:56:13.630738 kubelet[1750]: E0508 23:56:13.630682 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:13.823701 kubelet[1750]: I0508 23:56:13.823550 1750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-jdwdv" podStartSLOduration=1.08390484 podStartE2EDuration="3.823535506s" podCreationTimestamp="2025-05-08 23:56:10 +0000 UTC" firstStartedPulling="2025-05-08 23:56:10.565209843 +0000 UTC m=+12.513070669" lastFinishedPulling="2025-05-08 23:56:13.304840549 +0000 UTC m=+15.252701335" observedRunningTime="2025-05-08 23:56:13.823335438 +0000 UTC m=+15.771196264" watchObservedRunningTime="2025-05-08 23:56:13.823535506 +0000 UTC m=+15.771396332" May 8 23:56:14.631263 kubelet[1750]: E0508 23:56:14.631217 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:15.632199 kubelet[1750]: E0508 23:56:15.632140 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:16.633208 kubelet[1750]: E0508 23:56:16.633147 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:17.078055 kubelet[1750]: I0508 23:56:17.077614 1750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 23:56:17.079115 kubelet[1750]: E0508 23:56:17.078365 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:17.634043 kubelet[1750]: E0508 23:56:17.633975 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:17.698735 kubelet[1750]: I0508 23:56:17.698681 1750 topology_manager.go:215] "Topology Admit Handler" podUID="cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5" podNamespace="default" podName="nfs-server-provisioner-0" May 8 23:56:17.705428 systemd[1]: Created slice kubepods-besteffort-podcb5cf59f_0695_4b54_a4c3_9cdefeb24dc5.slice - libcontainer container kubepods-besteffort-podcb5cf59f_0695_4b54_a4c3_9cdefeb24dc5.slice. 
May 8 23:56:17.821604 kubelet[1750]: E0508 23:56:17.821577 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:17.864688 kubelet[1750]: I0508 23:56:17.864649 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5-data\") pod \"nfs-server-provisioner-0\" (UID: \"cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5\") " pod="default/nfs-server-provisioner-0" May 8 23:56:17.864688 kubelet[1750]: I0508 23:56:17.864688 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzl8q\" (UniqueName: \"kubernetes.io/projected/cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5-kube-api-access-vzl8q\") pod \"nfs-server-provisioner-0\" (UID: \"cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5\") " pod="default/nfs-server-provisioner-0" May 8 23:56:18.011125 containerd[1440]: time="2025-05-08T23:56:18.010818412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5,Namespace:default,Attempt:0,}" May 8 23:56:18.144009 systemd-networkd[1383]: cali60e51b789ff: Link UP May 8 23:56:18.144199 systemd-networkd[1383]: cali60e51b789ff: Gained carrier May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.063 [INFO][2772] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.17-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5 1045 0 2025-05-08 23:56:17 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.17 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.17-k8s-nfs--server--provisioner--0-" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.063 [INFO][2772] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.090 [INFO][2786] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" HandleID="k8s-pod-network.071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Workload="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.107 [INFO][2786] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" HandleID="k8s-pod-network.071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Workload="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293af0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.17", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-08 23:56:18.090441426 +0000 UTC"}, Hostname:"10.0.0.17", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.107 [INFO][2786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.107 [INFO][2786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.107 [INFO][2786] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.17' May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.112 [INFO][2786] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" host="10.0.0.17" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.117 [INFO][2786] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.17" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.123 [INFO][2786] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="10.0.0.17" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.125 [INFO][2786] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="10.0.0.17" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.127 [INFO][2786] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="10.0.0.17" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.127 [INFO][2786] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" host="10.0.0.17" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.129 [INFO][2786] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264 May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.133 [INFO][2786] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" host="10.0.0.17" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.138 [INFO][2786] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.194/26] block=192.168.69.192/26 handle="k8s-pod-network.071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" host="10.0.0.17" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.138 [INFO][2786] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.194/26] handle="k8s-pod-network.071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" host="10.0.0.17" May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.138 [INFO][2786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 23:56:18.158712 containerd[1440]: 2025-05-08 23:56:18.138 [INFO][2786] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.194/26] IPv6=[] ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" HandleID="k8s-pod-network.071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Workload="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" May 8 23:56:18.159364 containerd[1440]: 2025-05-08 23:56:18.141 [INFO][2772] cni-plugin/k8s.go 386: Populated endpoint ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.17-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 23, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.17", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 23:56:18.159364 containerd[1440]: 2025-05-08 23:56:18.141 [INFO][2772] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.194/32] ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" May 8 23:56:18.159364 containerd[1440]: 2025-05-08 23:56:18.141 [INFO][2772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" May 8 23:56:18.159364 containerd[1440]: 2025-05-08 23:56:18.143 [INFO][2772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" May 8 23:56:18.159501 containerd[1440]: 2025-05-08 23:56:18.144 [INFO][2772] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.17-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 23, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.17", ContainerID:"071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"6e:9c:2a:e1:24:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 23:56:18.159501 containerd[1440]: 2025-05-08 23:56:18.153 [INFO][2772] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.17-k8s-nfs--server--provisioner--0-eth0" May 8 23:56:18.175139 containerd[1440]: time="2025-05-08T23:56:18.174981743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:56:18.175139 containerd[1440]: time="2025-05-08T23:56:18.175125490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:56:18.175393 containerd[1440]: time="2025-05-08T23:56:18.175150354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:18.175393 containerd[1440]: time="2025-05-08T23:56:18.175252048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:18.191124 systemd[1]: Started cri-containerd-071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264.scope - libcontainer container 071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264. 
May 8 23:56:18.201352 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:56:18.246825 containerd[1440]: time="2025-05-08T23:56:18.246784327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cb5cf59f-0695-4b54-a4c3-9cdefeb24dc5,Namespace:default,Attempt:0,} returns sandbox id \"071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264\"" May 8 23:56:18.248258 containerd[1440]: time="2025-05-08T23:56:18.248191738Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 8 23:56:18.619774 kubelet[1750]: E0508 23:56:18.619711 1750 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:18.635132 kubelet[1750]: E0508 23:56:18.635100 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:19.635414 kubelet[1750]: E0508 23:56:19.635367 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:19.919065 systemd-networkd[1383]: cali60e51b789ff: Gained IPv6LL May 8 23:56:20.635706 kubelet[1750]: E0508 23:56:20.635661 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:21.160248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430041882.mount: Deactivated successfully. May 8 23:56:21.636810 kubelet[1750]: E0508 23:56:21.636540 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:22.585212 containerd[1440]: time="2025-05-08T23:56:22.585154550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:22.587469 containerd[1440]: time="2025-05-08T23:56:22.587426676Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 8 23:56:22.589048 containerd[1440]: time="2025-05-08T23:56:22.589012499Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:22.591826 containerd[1440]: time="2025-05-08T23:56:22.591779909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:22.593021 containerd[1440]: time="2025-05-08T23:56:22.592982538Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 4.344753495s" May 8 23:56:22.593070 containerd[1440]: time="2025-05-08T23:56:22.593022821Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 8 23:56:22.595483 containerd[1440]: time="2025-05-08T23:56:22.595450160Z" level=info 
msg="CreateContainer within sandbox \"071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 8 23:56:22.616727 containerd[1440]: time="2025-05-08T23:56:22.616679518Z" level=info msg="CreateContainer within sandbox \"071104f05b8f22a96783337d4d26a705c6ed009161b0efee0702c28398a52264\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d699d7c7cd7e2d584cf1f802d4a149af3d814ee657c022d5924f591357592361\"" May 8 23:56:22.617455 containerd[1440]: time="2025-05-08T23:56:22.617429626Z" level=info msg="StartContainer for \"d699d7c7cd7e2d584cf1f802d4a149af3d814ee657c022d5924f591357592361\"" May 8 23:56:22.637996 kubelet[1750]: E0508 23:56:22.637951 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:22.645169 systemd[1]: Started cri-containerd-d699d7c7cd7e2d584cf1f802d4a149af3d814ee657c022d5924f591357592361.scope - libcontainer container d699d7c7cd7e2d584cf1f802d4a149af3d814ee657c022d5924f591357592361. May 8 23:56:22.672241 containerd[1440]: time="2025-05-08T23:56:22.672198013Z" level=info msg="StartContainer for \"d699d7c7cd7e2d584cf1f802d4a149af3d814ee657c022d5924f591357592361\" returns successfully" May 8 23:56:22.774666 containerd[1440]: time="2025-05-08T23:56:22.774620145Z" level=info msg="StopPodSandbox for \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\"" May 8 23:56:22.842658 kubelet[1750]: I0508 23:56:22.842185 1750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.496235978 podStartE2EDuration="5.842160446s" podCreationTimestamp="2025-05-08 23:56:17 +0000 UTC" firstStartedPulling="2025-05-08 23:56:18.247984073 +0000 UTC m=+20.195844899" lastFinishedPulling="2025-05-08 23:56:22.593908541 +0000 UTC m=+24.541769367" observedRunningTime="2025-05-08 23:56:22.842005792 +0000 UTC m=+24.789866578" watchObservedRunningTime="2025-05-08 23:56:22.842160446 +0000 UTC m=+24.790021272" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.899 [INFO][2949] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.899 [INFO][2949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" iface="eth0" netns="/var/run/netns/cni-7227eef2-0a1f-f302-9516-658e430bc016" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.899 [INFO][2949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" iface="eth0" netns="/var/run/netns/cni-7227eef2-0a1f-f302-9516-658e430bc016" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.900 [INFO][2949] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" iface="eth0" netns="/var/run/netns/cni-7227eef2-0a1f-f302-9516-658e430bc016" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.900 [INFO][2949] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.900 [INFO][2949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.926 [INFO][2964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" HandleID="k8s-pod-network.16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" Workload="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.927 [INFO][2964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.927 [INFO][2964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.937 [WARNING][2964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" HandleID="k8s-pod-network.16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" Workload="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.937 [INFO][2964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" HandleID="k8s-pod-network.16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" Workload="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.939 [INFO][2964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 23:56:22.941659 containerd[1440]: 2025-05-08 23:56:22.940 [INFO][2949] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4" May 8 23:56:22.942062 containerd[1440]: time="2025-05-08T23:56:22.941832090Z" level=info msg="TearDown network for sandbox \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\" successfully" May 8 23:56:22.942062 containerd[1440]: time="2025-05-08T23:56:22.941860172Z" level=info msg="StopPodSandbox for \"16299bcd8696e2562c02103beef64ad13d2c81152c5a5aa4c395e75a6276b9a4\" returns successfully" May 8 23:56:22.943526 systemd[1]: run-netns-cni\x2d7227eef2\x2d0a1f\x2df302\x2d9516\x2d658e430bc016.mount: Deactivated successfully. 
May 8 23:56:22.944079 containerd[1440]: time="2025-05-08T23:56:22.943735822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nt6hn,Uid:9b50feea-d4da-4a15-b913-3108b0a5c9cd,Namespace:calico-system,Attempt:1,}" May 8 23:56:23.060363 systemd-networkd[1383]: cali29a222eff22: Link UP May 8 23:56:23.060512 systemd-networkd[1383]: cali29a222eff22: Gained carrier May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:22.982 [INFO][2991] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.17-k8s-csi--node--driver--nt6hn-eth0 csi-node-driver- calico-system 9b50feea-d4da-4a15-b913-3108b0a5c9cd 1084 0 2025-05-08 23:55:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.17 csi-node-driver-nt6hn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali29a222eff22 [] []}} ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Namespace="calico-system" Pod="csi-node-driver-nt6hn" WorkloadEndpoint="10.0.0.17-k8s-csi--node--driver--nt6hn-" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:22.982 [INFO][2991] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Namespace="calico-system" Pod="csi-node-driver-nt6hn" WorkloadEndpoint="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.008 [INFO][3006] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" HandleID="k8s-pod-network.7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Workload="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.023 [INFO][3006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" HandleID="k8s-pod-network.7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Workload="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003040e0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.17", "pod":"csi-node-driver-nt6hn", "timestamp":"2025-05-08 23:56:23.008944912 +0000 UTC"}, Hostname:"10.0.0.17", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.023 [INFO][3006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.023 [INFO][3006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.023 [INFO][3006] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.17' May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.024 [INFO][3006] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" host="10.0.0.17" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.028 [INFO][3006] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.17" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.033 [INFO][3006] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="10.0.0.17" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.035 [INFO][3006] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="10.0.0.17" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.037 [INFO][3006] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="10.0.0.17" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.037 [INFO][3006] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" host="10.0.0.17" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.039 [INFO][3006] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.043 [INFO][3006] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" host="10.0.0.17" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.051 [INFO][3006] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.195/26] block=192.168.69.192/26 handle="k8s-pod-network.7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" host="10.0.0.17" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.051 [INFO][3006] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.195/26] handle="k8s-pod-network.7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" host="10.0.0.17" May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.051 [INFO][3006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 23:56:23.074976 containerd[1440]: 2025-05-08 23:56:23.051 [INFO][3006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.195/26] IPv6=[] ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" HandleID="k8s-pod-network.7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Workload="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:23.075593 containerd[1440]: 2025-05-08 23:56:23.053 [INFO][2991] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Namespace="calico-system" Pod="csi-node-driver-nt6hn" WorkloadEndpoint="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.17-k8s-csi--node--driver--nt6hn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b50feea-d4da-4a15-b913-3108b0a5c9cd", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.17", ContainerID:"", Pod:"csi-node-driver-nt6hn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali29a222eff22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 23:56:23.075593 containerd[1440]: 2025-05-08 23:56:23.053 [INFO][2991] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.195/32] ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Namespace="calico-system" Pod="csi-node-driver-nt6hn" WorkloadEndpoint="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:23.075593 containerd[1440]: 2025-05-08 23:56:23.053 [INFO][2991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29a222eff22 ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Namespace="calico-system" Pod="csi-node-driver-nt6hn" WorkloadEndpoint="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:23.075593 containerd[1440]: 2025-05-08 23:56:23.059 [INFO][2991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Namespace="calico-system" Pod="csi-node-driver-nt6hn" WorkloadEndpoint="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:23.075593 containerd[1440]: 2025-05-08 23:56:23.060 [INFO][2991] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Namespace="calico-system" Pod="csi-node-driver-nt6hn" WorkloadEndpoint="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.17-k8s-csi--node--driver--nt6hn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b50feea-d4da-4a15-b913-3108b0a5c9cd", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 23, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.17", ContainerID:"7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae", Pod:"csi-node-driver-nt6hn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali29a222eff22", MAC:"a2:4e:21:11:eb:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 23:56:23.075593 containerd[1440]: 2025-05-08 23:56:23.072 [INFO][2991] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae" Namespace="calico-system" Pod="csi-node-driver-nt6hn" WorkloadEndpoint="10.0.0.17-k8s-csi--node--driver--nt6hn-eth0" May 8 23:56:23.091810 containerd[1440]: time="2025-05-08T23:56:23.091577165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:56:23.091810 containerd[1440]: time="2025-05-08T23:56:23.091635090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:56:23.091810 containerd[1440]: time="2025-05-08T23:56:23.091654132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:23.091810 containerd[1440]: time="2025-05-08T23:56:23.091752940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:23.114093 systemd[1]: Started cri-containerd-7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae.scope - libcontainer container 7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae. 
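The ipam/ipam.go entries above show Calico's block-affinity pattern: acquire the host-wide IPAM lock, confirm the host's affinity for the 192.168.69.192/26 block, claim the next free address (.194, .195, ...), and write the block back before releasing the lock. The following is a minimal, illustrative Python sketch of that claim/release pattern only; it is not Calico's implementation, and every name in it (BlockAllocator, claim, release, skip_hosts) is hypothetical.

```python
# Illustrative sketch of block-based IP assignment, mimicking the pattern in the
# ipam/ipam.go log entries above. Stdlib only; NOT Calico's actual IPAM code.
import ipaddress


class BlockAllocator:
    def __init__(self, block_cidr: str, skip_hosts: int = 1):
        self.block = ipaddress.ip_network(block_cidr)
        hosts = list(self.block.hosts())
        # Pre-mark the first host address as used so the demo hands out
        # .194, .195, .196 ... like the addresses seen in the log.
        self.assigned = set(hosts[:skip_hosts])

    def claim(self, handle: str) -> ipaddress.IPv4Address:
        """Return the next free address in the block and record it."""
        for addr in self.block.hosts():
            if addr not in self.assigned:
                self.assigned.add(addr)
                print(f"claimed {addr}/32 from {self.block} for {handle}")
                return addr
        raise RuntimeError(f"block {self.block} exhausted")

    def release(self, addr: ipaddress.IPv4Address) -> None:
        """Release an address; silently ignore addresses never assigned."""
        self.assigned.discard(addr)


if __name__ == "__main__":
    ipam = BlockAllocator("192.168.69.192/26")
    ipam.claim("k8s-pod-network.071104f0...")  # e.g. nfs-server-provisioner-0
    ipam.claim("k8s-pod-network.7e91a507...")  # e.g. csi-node-driver-nt6hn
```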
May 8 23:56:23.122822 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:56:23.133593 containerd[1440]: time="2025-05-08T23:56:23.133549067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nt6hn,Uid:9b50feea-d4da-4a15-b913-3108b0a5c9cd,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae\"" May 8 23:56:23.135538 containerd[1440]: time="2025-05-08T23:56:23.135510635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 23:56:23.639118 kubelet[1750]: E0508 23:56:23.639068 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:23.956658 containerd[1440]: time="2025-05-08T23:56:23.956536950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:23.957084 containerd[1440]: time="2025-05-08T23:56:23.957048354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 8 23:56:23.957886 containerd[1440]: time="2025-05-08T23:56:23.957848022Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:23.959674 containerd[1440]: time="2025-05-08T23:56:23.959634374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:23.960592 containerd[1440]: time="2025-05-08T23:56:23.960554293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 825.011615ms" May 8 23:56:23.960632 containerd[1440]: time="2025-05-08T23:56:23.960590656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 8 23:56:23.962772 containerd[1440]: time="2025-05-08T23:56:23.962741880Z" level=info msg="CreateContainer within sandbox \"7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 23:56:23.977623 containerd[1440]: time="2025-05-08T23:56:23.977575586Z" level=info msg="CreateContainer within sandbox \"7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"832d279229775c38d0a9e5ce773e23564e72dc8057cba9b2e98fb0b606617056\"" May 8 23:56:23.978487 containerd[1440]: time="2025-05-08T23:56:23.978452741Z" level=info msg="StartContainer for \"832d279229775c38d0a9e5ce773e23564e72dc8057cba9b2e98fb0b606617056\"" May 8 23:56:24.020127 systemd[1]: Started cri-containerd-832d279229775c38d0a9e5ce773e23564e72dc8057cba9b2e98fb0b606617056.scope - libcontainer container 832d279229775c38d0a9e5ce773e23564e72dc8057cba9b2e98fb0b606617056. 
May 8 23:56:24.064492 containerd[1440]: time="2025-05-08T23:56:24.062352615Z" level=info msg="StartContainer for \"832d279229775c38d0a9e5ce773e23564e72dc8057cba9b2e98fb0b606617056\" returns successfully" May 8 23:56:24.072774 containerd[1440]: time="2025-05-08T23:56:24.068023392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 23:56:24.207111 systemd-networkd[1383]: cali29a222eff22: Gained IPv6LL May 8 23:56:24.639797 kubelet[1750]: E0508 23:56:24.639625 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:25.007026 containerd[1440]: time="2025-05-08T23:56:25.006532807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:25.007360 containerd[1440]: time="2025-05-08T23:56:25.007038926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 8 23:56:25.008004 containerd[1440]: time="2025-05-08T23:56:25.007967637Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:25.010028 containerd[1440]: time="2025-05-08T23:56:25.009970390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:25.010853 containerd[1440]: time="2025-05-08T23:56:25.010550554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 942.491599ms" May 8 23:56:25.010853 containerd[1440]: time="2025-05-08T23:56:25.010585437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 8 23:56:25.013091 containerd[1440]: time="2025-05-08T23:56:25.013053425Z" level=info msg="CreateContainer within sandbox \"7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 23:56:25.028866 containerd[1440]: time="2025-05-08T23:56:25.028814468Z" level=info msg="CreateContainer within sandbox \"7e91a507ff78a67b7cef4460197bb4f980eecbbbeea82045a963090cc18dbbae\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"568ded13d6784f3920efe49207d8c54afadc3d9ab193fa83c4309dc69460f993\"" May 8 23:56:25.029410 containerd[1440]: time="2025-05-08T23:56:25.029371030Z" level=info msg="StartContainer for \"568ded13d6784f3920efe49207d8c54afadc3d9ab193fa83c4309dc69460f993\"" May 8 23:56:25.067116 systemd[1]: Started cri-containerd-568ded13d6784f3920efe49207d8c54afadc3d9ab193fa83c4309dc69460f993.scope - libcontainer container 568ded13d6784f3920efe49207d8c54afadc3d9ab193fa83c4309dc69460f993. 
May 8 23:56:25.103573 containerd[1440]: time="2025-05-08T23:56:25.103518128Z" level=info msg="StartContainer for \"568ded13d6784f3920efe49207d8c54afadc3d9ab193fa83c4309dc69460f993\" returns successfully" May 8 23:56:25.640183 kubelet[1750]: E0508 23:56:25.640136 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:25.896854 kubelet[1750]: I0508 23:56:25.896175 1750 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 23:56:25.899964 kubelet[1750]: I0508 23:56:25.899939 1750 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 23:56:26.641207 kubelet[1750]: E0508 23:56:26.641159 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:27.642229 kubelet[1750]: E0508 23:56:27.642185 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:28.642977 kubelet[1750]: E0508 23:56:28.642886 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:29.643929 kubelet[1750]: E0508 23:56:29.643868 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:30.644386 kubelet[1750]: E0508 23:56:30.644341 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:31.645264 kubelet[1750]: E0508 23:56:31.645217 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:32.646088 kubelet[1750]: E0508 23:56:32.646042 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:33.646591 kubelet[1750]: E0508 23:56:33.646553 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:34.131997 update_engine[1421]: I20250508 23:56:34.131804 1421 update_attempter.cc:509] Updating boot flags... May 8 23:56:34.164689 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3175) May 8 23:56:34.551463 kubelet[1750]: I0508 23:56:34.551412 1750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nt6hn" podStartSLOduration=34.674808641 podStartE2EDuration="36.551392883s" podCreationTimestamp="2025-05-08 23:55:58 +0000 UTC" firstStartedPulling="2025-05-08 23:56:23.134947747 +0000 UTC m=+25.082808573" lastFinishedPulling="2025-05-08 23:56:25.011531989 +0000 UTC m=+26.959392815" observedRunningTime="2025-05-08 23:56:25.871366917 +0000 UTC m=+27.819227743" watchObservedRunningTime="2025-05-08 23:56:34.551392883 +0000 UTC m=+36.499253709" May 8 23:56:34.551688 kubelet[1750]: I0508 23:56:34.551590 1750 topology_manager.go:215] "Topology Admit Handler" podUID="4d78414a-207f-48e2-9961-4fd82a98de89" podNamespace="default" podName="test-pod-1" May 8 23:56:34.563952 systemd[1]: Created slice kubepods-besteffort-pod4d78414a_207f_48e2_9961_4fd82a98de89.slice - libcontainer container kubepods-besteffort-pod4d78414a_207f_48e2_9961_4fd82a98de89.slice. 
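The kubelet's pod_startup_latency_tracker lines in this log (podStartSLOduration, firstStartedPulling, lastFinishedPulling, observedRunningTime) report startup timing per pod. As a rough companion, the sketch below approximates a pod's startup duration from its status via the Kubernetes API. It assumes a reachable cluster and the `kubernetes` Python client; the helper name pod_startup_seconds is hypothetical, and this is an approximation of what the kubelet reports, not the same code path.

```python
# Approximate a pod's startup duration from API-visible status timestamps.
# Assumes kubeconfig access and the `kubernetes` Python client (pip install kubernetes).
from kubernetes import client, config


def pod_startup_seconds(namespace: str, name: str) -> float:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    pod = client.CoreV1Api().read_namespaced_pod(name, namespace)
    created = pod.metadata.creation_timestamp
    # Latest container start time stands in for "observed running time".
    started = max(
        cs.state.running.started_at
        for cs in (pod.status.container_statuses or [])
        if cs.state and cs.state.running
    )
    return (started - created).total_seconds()


if __name__ == "__main__":
    # e.g. the pods whose startup this log reports
    print(pod_startup_seconds("default", "nfs-server-provisioner-0"))
```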
May 8 23:56:34.647543 kubelet[1750]: E0508 23:56:34.647500 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:34.654841 kubelet[1750]: I0508 23:56:34.654749 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41cc8048-61fc-4730-9254-d124ee6300e5\" (UniqueName: \"kubernetes.io/nfs/4d78414a-207f-48e2-9961-4fd82a98de89-pvc-41cc8048-61fc-4730-9254-d124ee6300e5\") pod \"test-pod-1\" (UID: \"4d78414a-207f-48e2-9961-4fd82a98de89\") " pod="default/test-pod-1" May 8 23:56:34.654841 kubelet[1750]: I0508 23:56:34.654793 1750 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcp5f\" (UniqueName: \"kubernetes.io/projected/4d78414a-207f-48e2-9961-4fd82a98de89-kube-api-access-xcp5f\") pod \"test-pod-1\" (UID: \"4d78414a-207f-48e2-9961-4fd82a98de89\") " pod="default/test-pod-1" May 8 23:56:34.842968 kernel: FS-Cache: Loaded May 8 23:56:34.885292 kernel: RPC: Registered named UNIX socket transport module. May 8 23:56:34.885402 kernel: RPC: Registered udp transport module. May 8 23:56:34.885423 kernel: RPC: Registered tcp transport module. May 8 23:56:34.886211 kernel: RPC: Registered tcp-with-tls transport module. May 8 23:56:34.886258 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 8 23:56:35.110109 kernel: NFS: Registering the id_resolver key type May 8 23:56:35.110214 kernel: Key type id_resolver registered May 8 23:56:35.110235 kernel: Key type id_legacy registered May 8 23:56:35.166274 nfsidmap[3195]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 23:56:35.170362 nfsidmap[3198]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 23:56:35.467721 containerd[1440]: time="2025-05-08T23:56:35.467680375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4d78414a-207f-48e2-9961-4fd82a98de89,Namespace:default,Attempt:0,}" May 8 23:56:35.606517 systemd-networkd[1383]: cali5ec59c6bf6e: Link UP May 8 23:56:35.606729 systemd-networkd[1383]: cali5ec59c6bf6e: Gained carrier May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.511 [INFO][3201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.17-k8s-test--pod--1-eth0 default 4d78414a-207f-48e2-9961-4fd82a98de89 1134 0 2025-05-08 23:56:18 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.17 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.17-k8s-test--pod--1-" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.511 [INFO][3201] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.17-k8s-test--pod--1-eth0" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.538 [INFO][3216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" 
HandleID="k8s-pod-network.66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Workload="10.0.0.17-k8s-test--pod--1-eth0" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.555 [INFO][3216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" HandleID="k8s-pod-network.66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Workload="10.0.0.17-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d330), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.17", "pod":"test-pod-1", "timestamp":"2025-05-08 23:56:35.538015825 +0000 UTC"}, Hostname:"10.0.0.17", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.555 [INFO][3216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.555 [INFO][3216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.555 [INFO][3216] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.17' May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.557 [INFO][3216] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" host="10.0.0.17" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.564 [INFO][3216] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.17" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.570 [INFO][3216] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="10.0.0.17" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.571 [INFO][3216] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="10.0.0.17" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.575 [INFO][3216] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="10.0.0.17" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.575 [INFO][3216] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" host="10.0.0.17" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.579 [INFO][3216] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.588 [INFO][3216] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" host="10.0.0.17" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.602 [INFO][3216] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.196/26] block=192.168.69.192/26 handle="k8s-pod-network.66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" host="10.0.0.17" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.602 [INFO][3216] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.196/26] handle="k8s-pod-network.66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" host="10.0.0.17" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.602 
[INFO][3216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.602 [INFO][3216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.196/26] IPv6=[] ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" HandleID="k8s-pod-network.66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Workload="10.0.0.17-k8s-test--pod--1-eth0" May 8 23:56:35.619805 containerd[1440]: 2025-05-08 23:56:35.604 [INFO][3201] cni-plugin/k8s.go 386: Populated endpoint ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.17-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.17-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4d78414a-207f-48e2-9961-4fd82a98de89", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.17", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 23:56:35.620828 containerd[1440]: 2025-05-08 23:56:35.604 [INFO][3201] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.196/32] ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.17-k8s-test--pod--1-eth0" May 8 23:56:35.620828 containerd[1440]: 2025-05-08 23:56:35.604 [INFO][3201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.17-k8s-test--pod--1-eth0" May 8 23:56:35.620828 containerd[1440]: 2025-05-08 23:56:35.606 [INFO][3201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.17-k8s-test--pod--1-eth0" May 8 23:56:35.620828 containerd[1440]: 2025-05-08 23:56:35.606 [INFO][3201] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.17-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.17-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4d78414a-207f-48e2-9961-4fd82a98de89", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 
23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.17", ContainerID:"66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"f2:72:79:a3:03:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 23:56:35.620828 containerd[1440]: 2025-05-08 23:56:35.617 [INFO][3201] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.17-k8s-test--pod--1-eth0" May 8 23:56:35.639893 containerd[1440]: time="2025-05-08T23:56:35.639650646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:56:35.639893 containerd[1440]: time="2025-05-08T23:56:35.639751211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:56:35.639893 containerd[1440]: time="2025-05-08T23:56:35.639778212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:35.640160 containerd[1440]: time="2025-05-08T23:56:35.639874697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:35.647660 kubelet[1750]: E0508 23:56:35.647615 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:35.661140 systemd[1]: Started cri-containerd-66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f.scope - libcontainer container 66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f. 
May 8 23:56:35.671080 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:56:35.689068 containerd[1440]: time="2025-05-08T23:56:35.688894866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4d78414a-207f-48e2-9961-4fd82a98de89,Namespace:default,Attempt:0,} returns sandbox id \"66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f\"" May 8 23:56:35.691206 containerd[1440]: time="2025-05-08T23:56:35.691176969Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 23:56:35.909636 containerd[1440]: time="2025-05-08T23:56:35.909528211Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:35.910635 containerd[1440]: time="2025-05-08T23:56:35.910132118Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 8 23:56:35.913458 containerd[1440]: time="2025-05-08T23:56:35.913412346Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 222.198015ms" May 8 23:56:35.913458 containerd[1440]: time="2025-05-08T23:56:35.913449427Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 8 23:56:35.916033 containerd[1440]: time="2025-05-08T23:56:35.915289310Z" level=info msg="CreateContainer within sandbox \"66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 8 23:56:35.928021 containerd[1440]: time="2025-05-08T23:56:35.927976842Z" level=info msg="CreateContainer within sandbox \"66f98019445556279b728bec5bcc3f73770e251fd9bd4f9510c1cea0b9c0044f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"48fa2e8213b287996cf5fcc665017bfaa900b32353b6c3106992421bed93a1d1\"" May 8 23:56:35.928682 containerd[1440]: time="2025-05-08T23:56:35.928622711Z" level=info msg="StartContainer for \"48fa2e8213b287996cf5fcc665017bfaa900b32353b6c3106992421bed93a1d1\"" May 8 23:56:35.959104 systemd[1]: Started cri-containerd-48fa2e8213b287996cf5fcc665017bfaa900b32353b6c3106992421bed93a1d1.scope - libcontainer container 48fa2e8213b287996cf5fcc665017bfaa900b32353b6c3106992421bed93a1d1. 
May 8 23:56:35.980094 containerd[1440]: time="2025-05-08T23:56:35.980055950Z" level=info msg="StartContainer for \"48fa2e8213b287996cf5fcc665017bfaa900b32353b6c3106992421bed93a1d1\" returns successfully" May 8 23:56:36.648250 kubelet[1750]: E0508 23:56:36.648203 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:36.884013 kubelet[1750]: I0508 23:56:36.883642 1750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.660120582 podStartE2EDuration="18.883623096s" podCreationTimestamp="2025-05-08 23:56:18 +0000 UTC" firstStartedPulling="2025-05-08 23:56:35.690582982 +0000 UTC m=+37.638443808" lastFinishedPulling="2025-05-08 23:56:35.914085496 +0000 UTC m=+37.861946322" observedRunningTime="2025-05-08 23:56:36.883506731 +0000 UTC m=+38.831367557" watchObservedRunningTime="2025-05-08 23:56:36.883623096 +0000 UTC m=+38.831483962" May 8 23:56:37.199173 systemd-networkd[1383]: cali5ec59c6bf6e: Gained IPv6LL May 8 23:56:37.649152 kubelet[1750]: E0508 23:56:37.649003 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:38.619972 kubelet[1750]: E0508 23:56:38.619875 1750 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:38.650148 kubelet[1750]: E0508 23:56:38.650101 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:39.650466 kubelet[1750]: E0508 23:56:39.650418 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 23:56:40.651392 kubelet[1750]: E0508 23:56:40.651326 1750 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"