May 8 00:22:26.944329 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 8 00:22:26.944350 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 7 22:57:52 -00 2025 May 8 00:22:26.944360 kernel: KASLR enabled May 8 00:22:26.944366 kernel: efi: EFI v2.7 by EDK II May 8 00:22:26.944372 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 May 8 00:22:26.944378 kernel: random: crng init done May 8 00:22:26.944385 kernel: ACPI: Early table checksum verification disabled May 8 00:22:26.944391 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) May 8 00:22:26.944397 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) May 8 00:22:26.944405 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:22:26.944412 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:22:26.944418 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:22:26.944424 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:22:26.944430 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:22:26.944437 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:22:26.944445 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:22:26.944451 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:22:26.944458 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:22:26.944465 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 8 00:22:26.944471 kernel: NUMA: Failed to initialise from firmware May 8 00:22:26.944477 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 8 00:22:26.944483 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 8 00:22:26.944490 kernel: Zone ranges: May 8 00:22:26.944496 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 8 00:22:26.944502 kernel: DMA32 empty May 8 00:22:26.944510 kernel: Normal empty May 8 00:22:26.944517 kernel: Movable zone start for each node May 8 00:22:26.944523 kernel: Early memory node ranges May 8 00:22:26.944529 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 8 00:22:26.944536 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 8 00:22:26.944542 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 8 00:22:26.944548 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 8 00:22:26.944555 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 8 00:22:26.944561 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 8 00:22:26.944567 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 8 00:22:26.944574 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 8 00:22:26.944580 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 8 00:22:26.944587 kernel: psci: probing for conduit method from ACPI. May 8 00:22:26.944594 kernel: psci: PSCIv1.1 detected in firmware. 
May 8 00:22:26.944600 kernel: psci: Using standard PSCI v0.2 function IDs May 8 00:22:26.944609 kernel: psci: Trusted OS migration not required May 8 00:22:26.944615 kernel: psci: SMC Calling Convention v1.1 May 8 00:22:26.944622 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 8 00:22:26.944630 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 May 8 00:22:26.944637 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 May 8 00:22:26.944643 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 8 00:22:26.944650 kernel: Detected PIPT I-cache on CPU0 May 8 00:22:26.944657 kernel: CPU features: detected: GIC system register CPU interface May 8 00:22:26.944663 kernel: CPU features: detected: Hardware dirty bit management May 8 00:22:26.944670 kernel: CPU features: detected: Spectre-v4 May 8 00:22:26.944677 kernel: CPU features: detected: Spectre-BHB May 8 00:22:26.944695 kernel: CPU features: kernel page table isolation forced ON by KASLR May 8 00:22:26.944702 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 8 00:22:26.944710 kernel: CPU features: detected: ARM erratum 1418040 May 8 00:22:26.944717 kernel: CPU features: detected: SSBS not fully self-synchronizing May 8 00:22:26.944724 kernel: alternatives: applying boot alternatives May 8 00:22:26.944731 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf May 8 00:22:26.944739 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:22:26.944745 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:22:26.944752 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:22:26.944759 kernel: Fallback order for Node 0: 0 May 8 00:22:26.944765 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 8 00:22:26.944772 kernel: Policy zone: DMA May 8 00:22:26.944779 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:22:26.944786 kernel: software IO TLB: area num 4. May 8 00:22:26.944793 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 8 00:22:26.944800 kernel: Memory: 2386468K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185820K reserved, 0K cma-reserved) May 8 00:22:26.944807 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:22:26.944814 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:22:26.944821 kernel: rcu: RCU event tracing is enabled. May 8 00:22:26.944828 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:22:26.944834 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:22:26.944841 kernel: Tracing variant of Tasks RCU enabled. May 8 00:22:26.944848 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 8 00:22:26.944855 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:22:26.944861 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 8 00:22:26.944870 kernel: GICv3: 256 SPIs implemented May 8 00:22:26.944876 kernel: GICv3: 0 Extended SPIs implemented May 8 00:22:26.944883 kernel: Root IRQ handler: gic_handle_irq May 8 00:22:26.944889 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 8 00:22:26.944896 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 8 00:22:26.944902 kernel: ITS [mem 0x08080000-0x0809ffff] May 8 00:22:26.944909 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 8 00:22:26.944917 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 8 00:22:26.944923 kernel: GICv3: using LPI property table @0x00000000400f0000 May 8 00:22:26.944930 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 8 00:22:26.944937 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 00:22:26.944945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:22:26.944951 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 8 00:22:26.944985 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 8 00:22:26.944992 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 8 00:22:26.944999 kernel: arm-pv: using stolen time PV May 8 00:22:26.945006 kernel: Console: colour dummy device 80x25 May 8 00:22:26.945013 kernel: ACPI: Core revision 20230628 May 8 00:22:26.945020 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 8 00:22:26.945027 kernel: pid_max: default: 32768 minimum: 301 May 8 00:22:26.945034 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:22:26.945043 kernel: landlock: Up and running. May 8 00:22:26.945049 kernel: SELinux: Initializing. May 8 00:22:26.945056 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:22:26.945063 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:22:26.945070 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:22:26.945077 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:22:26.945084 kernel: rcu: Hierarchical SRCU implementation. May 8 00:22:26.945091 kernel: rcu: Max phase no-delay instances is 400. May 8 00:22:26.945097 kernel: Platform MSI: ITS@0x8080000 domain created May 8 00:22:26.945106 kernel: PCI/MSI: ITS@0x8080000 domain created May 8 00:22:26.945112 kernel: Remapping and enabling EFI services. May 8 00:22:26.945119 kernel: smp: Bringing up secondary CPUs ... 
May 8 00:22:26.945126 kernel: Detected PIPT I-cache on CPU1 May 8 00:22:26.945133 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 8 00:22:26.945140 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 8 00:22:26.945146 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:22:26.945153 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 8 00:22:26.945160 kernel: Detected PIPT I-cache on CPU2 May 8 00:22:26.945167 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 8 00:22:26.945176 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 8 00:22:26.945183 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:22:26.945194 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 8 00:22:26.945203 kernel: Detected PIPT I-cache on CPU3 May 8 00:22:26.945210 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 8 00:22:26.945217 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 8 00:22:26.945225 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:22:26.945232 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 8 00:22:26.945239 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:22:26.945248 kernel: SMP: Total of 4 processors activated. May 8 00:22:26.945255 kernel: CPU features: detected: 32-bit EL0 Support May 8 00:22:26.945263 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 8 00:22:26.945270 kernel: CPU features: detected: Common not Private translations May 8 00:22:26.945277 kernel: CPU features: detected: CRC32 instructions May 8 00:22:26.945285 kernel: CPU features: detected: Enhanced Virtualization Traps May 8 00:22:26.945292 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 8 00:22:26.945300 kernel: CPU features: detected: LSE atomic instructions May 8 00:22:26.945330 kernel: CPU features: detected: Privileged Access Never May 8 00:22:26.945354 kernel: CPU features: detected: RAS Extension Support May 8 00:22:26.945361 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 8 00:22:26.945369 kernel: CPU: All CPU(s) started at EL1 May 8 00:22:26.945376 kernel: alternatives: applying system-wide alternatives May 8 00:22:26.945383 kernel: devtmpfs: initialized May 8 00:22:26.945390 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:22:26.945397 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:22:26.945405 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:22:26.945414 kernel: SMBIOS 3.0.0 present. 
May 8 00:22:26.945421 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 May 8 00:22:26.945429 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:22:26.945436 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 8 00:22:26.945443 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 8 00:22:26.945450 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 8 00:22:26.945458 kernel: audit: initializing netlink subsys (disabled) May 8 00:22:26.945465 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 May 8 00:22:26.945472 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:22:26.945480 kernel: cpuidle: using governor menu May 8 00:22:26.945487 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 8 00:22:26.945495 kernel: ASID allocator initialised with 32768 entries May 8 00:22:26.945502 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:22:26.945509 kernel: Serial: AMBA PL011 UART driver May 8 00:22:26.945516 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 8 00:22:26.945523 kernel: Modules: 0 pages in range for non-PLT usage May 8 00:22:26.945531 kernel: Modules: 509024 pages in range for PLT usage May 8 00:22:26.945538 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:22:26.945546 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:22:26.945553 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 8 00:22:26.945561 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 8 00:22:26.945568 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:22:26.945575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:22:26.945582 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 8 00:22:26.945589 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 8 00:22:26.945596 kernel: ACPI: Added _OSI(Module Device) May 8 00:22:26.945603 kernel: ACPI: Added _OSI(Processor Device) May 8 00:22:26.945612 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:22:26.945619 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:22:26.945626 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:22:26.945633 kernel: ACPI: Interpreter enabled May 8 00:22:26.945640 kernel: ACPI: Using GIC for interrupt routing May 8 00:22:26.945648 kernel: ACPI: MCFG table detected, 1 entries May 8 00:22:26.945655 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 8 00:22:26.945662 kernel: printk: console [ttyAMA0] enabled May 8 00:22:26.945669 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:22:26.945816 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:22:26.945888 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 8 00:22:26.945953 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 8 00:22:26.946046 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 8 00:22:26.946108 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 8 00:22:26.946118 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 8 00:22:26.946125 kernel: PCI host bridge to bus 0000:00
May 8 00:22:26.946198 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 8 00:22:26.946256 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 8 00:22:26.946313 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 8 00:22:26.946371 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:22:26.946449 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 8 00:22:26.946524 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:22:26.946593 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 8 00:22:26.946665 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 8 00:22:26.946743 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 8 00:22:26.946811 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 8 00:22:26.946876 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 8 00:22:26.946944 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 8 00:22:26.947032 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 8 00:22:26.947094 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 8 00:22:26.947151 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 8 00:22:26.947161 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 8 00:22:26.947169 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 8 00:22:26.947176 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 8 00:22:26.947184 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 8 00:22:26.947191 kernel: iommu: Default domain type: Translated May 8 00:22:26.947199 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 8 00:22:26.947206 kernel: efivars: Registered efivars operations May 8 00:22:26.947215 kernel: vgaarb: loaded May 8 00:22:26.947222 kernel: clocksource: Switched to clocksource arch_sys_counter May 8 00:22:26.947229 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:22:26.947237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:22:26.947244 kernel: pnp: PnP ACPI init May 8 00:22:26.947319 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 8 00:22:26.947330 kernel: pnp: PnP ACPI: found 1 devices May 8 00:22:26.947337 kernel: NET: Registered PF_INET protocol family May 8 00:22:26.947347 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:22:26.947354 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:22:26.947362 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:22:26.947370 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:22:26.947377 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 00:22:26.947385 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:22:26.947392 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:22:26.947400 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:22:26.947407 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:22:26.947416 kernel: PCI: CLS 0 bytes, default 64 May 8 00:22:26.947423 kernel: kvm [1]: HYP mode not available May 8 00:22:26.947430 kernel: Initialise system trusted keyrings
May 8 00:22:26.947438 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:22:26.947445 kernel: Key type asymmetric registered May 8 00:22:26.947452 kernel: Asymmetric key parser 'x509' registered May 8 00:22:26.947460 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 8 00:22:26.947467 kernel: io scheduler mq-deadline registered May 8 00:22:26.947474 kernel: io scheduler kyber registered May 8 00:22:26.947483 kernel: io scheduler bfq registered May 8 00:22:26.947491 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 8 00:22:26.947499 kernel: ACPI: button: Power Button [PWRB] May 8 00:22:26.947506 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 8 00:22:26.947573 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 8 00:22:26.947583 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:22:26.947591 kernel: thunder_xcv, ver 1.0 May 8 00:22:26.947598 kernel: thunder_bgx, ver 1.0 May 8 00:22:26.947605 kernel: nicpf, ver 1.0 May 8 00:22:26.947614 kernel: nicvf, ver 1.0 May 8 00:22:26.947728 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 8 00:22:26.947791 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:22:26 UTC (1746663746) May 8 00:22:26.947801 kernel: hid: raw HID events driver (C) Jiri Kosina May 8 00:22:26.947809 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 8 00:22:26.947816 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 8 00:22:26.947823 kernel: watchdog: Hard watchdog permanently disabled May 8 00:22:26.947831 kernel: NET: Registered PF_INET6 protocol family May 8 00:22:26.947841 kernel: Segment Routing with IPv6 May 8 00:22:26.947848 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:22:26.947855 kernel: NET: Registered PF_PACKET protocol family May 8 00:22:26.947863 kernel: Key type dns_resolver registered May 8 00:22:26.947870 kernel: registered taskstats version 1 May 8 00:22:26.947877 kernel: Loading compiled-in X.509 certificates May 8 00:22:26.947885 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e350a514a19a92525be490be8fe368f9972240ea' May 8 00:22:26.947892 kernel: Key type .fscrypt registered May 8 00:22:26.947900 kernel: Key type fscrypt-provisioning registered May 8 00:22:26.947908 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 00:22:26.947916 kernel: ima: Allocated hash algorithm: sha1 May 8 00:22:26.947923 kernel: ima: No architecture policies found May 8 00:22:26.947930 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 8 00:22:26.947938 kernel: clk: Disabling unused clocks May 8 00:22:26.947945 kernel: Freeing unused kernel memory: 39424K May 8 00:22:26.947952 kernel: Run /init as init process May 8 00:22:26.947993 kernel: with arguments: May 8 00:22:26.948001 kernel: /init May 8 00:22:26.948011 kernel: with environment: May 8 00:22:26.948018 kernel: HOME=/ May 8 00:22:26.948025 kernel: TERM=linux May 8 00:22:26.948032 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:22:26.948042 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:22:26.948051 systemd[1]: Detected virtualization kvm.
May 8 00:22:26.948059 systemd[1]: Detected architecture arm64. May 8 00:22:26.948069 systemd[1]: Running in initrd. May 8 00:22:26.948076 systemd[1]: No hostname configured, using default hostname. May 8 00:22:26.948084 systemd[1]: Hostname set to . May 8 00:22:26.948092 systemd[1]: Initializing machine ID from VM UUID. May 8 00:22:26.948100 systemd[1]: Queued start job for default target initrd.target. May 8 00:22:26.948108 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:22:26.948116 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:22:26.948124 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:22:26.948133 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:22:26.948141 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:22:26.948149 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:22:26.948159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:22:26.948167 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:22:26.948174 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:22:26.948183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:22:26.948192 systemd[1]: Reached target paths.target - Path Units. May 8 00:22:26.948200 systemd[1]: Reached target slices.target - Slice Units. May 8 00:22:26.948208 systemd[1]: Reached target swap.target - Swaps. May 8 00:22:26.948216 systemd[1]: Reached target timers.target - Timer Units. May 8 00:22:26.948224 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:22:26.948232 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:22:26.948240 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:22:26.948248 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 8 00:22:26.948256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:22:26.948266 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:22:26.948273 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:22:26.948281 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:22:26.948289 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:22:26.948297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:22:26.948305 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:22:26.948312 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:22:26.948320 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:22:26.948330 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:22:26.948338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:22:26.948346 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:22:26.948354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 8 00:22:26.948362 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:22:26.948370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:22:26.948398 systemd-journald[237]: Collecting audit messages is disabled. May 8 00:22:26.948417 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:22:26.948425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:22:26.948436 systemd-journald[237]: Journal started May 8 00:22:26.948455 systemd-journald[237]: Runtime Journal (/run/log/journal/01c7e033ec18460aabcdfc75684f1394) is 5.9M, max 47.3M, 41.4M free. May 8 00:22:26.940769 systemd-modules-load[238]: Inserted module 'overlay' May 8 00:22:26.963296 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:22:26.963320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:22:26.963340 kernel: Bridge firewalling registered May 8 00:22:26.963350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:22:26.962147 systemd-modules-load[238]: Inserted module 'br_netfilter' May 8 00:22:26.966057 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:22:26.967829 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:22:26.972505 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:22:26.974659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:22:26.977487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:22:26.982280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:22:26.985610 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:22:26.986807 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:22:26.988089 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:22:26.993946 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:22:27.000763 dracut-cmdline[273]: dracut-dracut-053 May 8 00:22:27.003293 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf May 8 00:22:27.022760 systemd-resolved[279]: Positive Trust Anchors: May 8 00:22:27.022779 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:22:27.022812 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:22:27.027713 systemd-resolved[279]: Defaulting to hostname 'linux'. May 8 00:22:27.028894 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:22:27.035187 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:22:27.078976 kernel: SCSI subsystem initialized May 8 00:22:27.083995 kernel: Loading iSCSI transport class v2.0-870. May 8 00:22:27.094018 kernel: iscsi: registered transport (tcp) May 8 00:22:27.108982 kernel: iscsi: registered transport (qla4xxx) May 8 00:22:27.109011 kernel: QLogic iSCSI HBA Driver May 8 00:22:27.150029 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:22:27.163111 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:22:27.179198 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:22:27.179253 kernel: device-mapper: uevent: version 1.0.3 May 8 00:22:27.181973 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:22:27.228993 kernel: raid6: neonx8 gen() 15793 MB/s May 8 00:22:27.245986 kernel: raid6: neonx4 gen() 15644 MB/s May 8 00:22:27.262974 kernel: raid6: neonx2 gen() 13277 MB/s May 8 00:22:27.279976 kernel: raid6: neonx1 gen() 10492 MB/s May 8 00:22:27.296973 kernel: raid6: int64x8 gen() 6955 MB/s May 8 00:22:27.313973 kernel: raid6: int64x4 gen() 7330 MB/s May 8 00:22:27.330978 kernel: raid6: int64x2 gen() 6127 MB/s May 8 00:22:27.348117 kernel: raid6: int64x1 gen() 5044 MB/s May 8 00:22:27.348155 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s May 8 00:22:27.366139 kernel: raid6: .... xor() 11843 MB/s, rmw enabled May 8 00:22:27.366176 kernel: raid6: using neon recovery algorithm May 8 00:22:27.374381 kernel: xor: measuring software checksum speed May 8 00:22:27.374413 kernel: 8regs : 19807 MB/sec May 8 00:22:27.374423 kernel: 32regs : 18890 MB/sec May 8 00:22:27.375001 kernel: arm64_neon : 26910 MB/sec May 8 00:22:27.375017 kernel: xor: using function: arm64_neon (26910 MB/sec) May 8 00:22:27.436988 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:22:27.450550 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:22:27.463126 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:22:27.476369 systemd-udevd[459]: Using default interface naming scheme 'v255'. May 8 00:22:27.479781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:22:27.487148 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:22:27.499254 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation May 8 00:22:27.528846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:22:27.544146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:22:27.583063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:22:27.593388 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:22:27.606985 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:22:27.608415 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:22:27.612442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:22:27.613634 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:22:27.620984 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 8 00:22:27.637458 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 00:22:27.637666 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:22:27.637686 kernel: GPT:9289727 != 19775487 May 8 00:22:27.637697 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:22:27.637747 kernel: GPT:9289727 != 19775487 May 8 00:22:27.637766 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:22:27.637776 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:22:27.624202 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:22:27.633135 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:22:27.633248 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:22:27.639789 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:22:27.640949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:22:27.641123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:22:27.643419 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:22:27.658243 kernel: BTRFS: device fsid 0be52225-f929-4b89-9354-df54a643ece0 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (516) May 8 00:22:27.657200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:22:27.661569 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508) May 8 00:22:27.659755 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:22:27.670015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:22:27.679599 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 8 00:22:27.687002 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 8 00:22:27.691000 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 8 00:22:27.692199 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 8 00:22:27.698468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:22:27.711088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:22:27.712823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:22:27.718729 disk-uuid[551]: Primary Header is updated. 
May 8 00:22:27.718729 disk-uuid[551]: Secondary Entries is updated. May 8 00:22:27.718729 disk-uuid[551]: Secondary Header is updated. May 8 00:22:27.723771 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:22:27.736596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:22:28.735992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:22:28.736044 disk-uuid[552]: The operation has completed successfully. May 8 00:22:28.757174 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:22:28.757270 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:22:28.780105 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:22:28.783041 sh[573]: Success May 8 00:22:28.795987 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 8 00:22:28.822795 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:22:28.833220 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:22:28.835439 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:22:28.844535 kernel: BTRFS info (device dm-0): first mount of filesystem 0be52225-f929-4b89-9354-df54a643ece0 May 8 00:22:28.844571 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 8 00:22:28.844582 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:22:28.846417 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:22:28.846432 kernel: BTRFS info (device dm-0): using free space tree May 8 00:22:28.850243 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:22:28.851526 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 00:22:28.852245 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:22:28.855029 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:22:28.865198 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:22:28.865237 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 00:22:28.865248 kernel: BTRFS info (device vda6): using free space tree May 8 00:22:28.869531 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:22:28.875443 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:22:28.877177 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:22:28.882446 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:22:28.888138 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:22:28.949888 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:22:28.962100 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 8 00:22:28.988823 ignition[671]: Ignition 2.19.0 May 8 00:22:28.988833 ignition[671]: Stage: fetch-offline May 8 00:22:28.988866 ignition[671]: no configs at "/usr/lib/ignition/base.d" May 8 00:22:28.988874 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:22:28.989066 ignition[671]: parsed url from cmdline: "" May 8 00:22:28.989070 ignition[671]: no config URL provided May 8 00:22:28.989074 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:22:28.993044 systemd-networkd[766]: lo: Link UP May 8 00:22:28.989080 ignition[671]: no config at "/usr/lib/ignition/user.ign" May 8 00:22:28.993048 systemd-networkd[766]: lo: Gained carrier May 8 00:22:28.989103 ignition[671]: op(1): [started] loading QEMU firmware config module May 8 00:22:28.993792 systemd-networkd[766]: Enumeration completed May 8 00:22:28.989107 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 00:22:28.993907 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:22:28.996950 ignition[671]: op(1): [finished] loading QEMU firmware config module May 8 00:22:28.994348 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:22:28.994352 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:22:28.995624 systemd[1]: Reached target network.target - Network. May 8 00:22:28.995775 systemd-networkd[766]: eth0: Link UP May 8 00:22:28.995778 systemd-networkd[766]: eth0: Gained carrier May 8 00:22:28.995785 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:22:29.011773 ignition[671]: parsing config with SHA512: d8c3d261830466239b8bcd4f57bf5d24ecc9b32353a5e39476d7941b46f897173bdd56655ae319e99f561f8b02bd421c912b42a9356e534ed571f76a51348615 May 8 00:22:29.014797 unknown[671]: fetched base config from "system" May 8 00:22:29.014807 unknown[671]: fetched user config from "qemu" May 8 00:22:29.015111 ignition[671]: fetch-offline: fetch-offline passed May 8 00:22:29.016005 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:22:29.015178 ignition[671]: Ignition finished successfully May 8 00:22:29.016979 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:22:29.018999 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:22:29.034101 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:22:29.044745 ignition[773]: Ignition 2.19.0 May 8 00:22:29.044755 ignition[773]: Stage: kargs May 8 00:22:29.044910 ignition[773]: no configs at "/usr/lib/ignition/base.d" May 8 00:22:29.044919 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:22:29.045582 ignition[773]: kargs: kargs passed May 8 00:22:29.047080 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:22:29.045624 ignition[773]: Ignition finished successfully May 8 00:22:29.055110 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 8 00:22:29.064570 ignition[781]: Ignition 2.19.0 May 8 00:22:29.064579 ignition[781]: Stage: disks May 8 00:22:29.064749 ignition[781]: no configs at "/usr/lib/ignition/base.d" May 8 00:22:29.067212 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:22:29.064758 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:22:29.068658 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:22:29.065384 ignition[781]: disks: disks passed May 8 00:22:29.070192 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:22:29.065423 ignition[781]: Ignition finished successfully May 8 00:22:29.072035 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:22:29.073766 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:22:29.075120 systemd[1]: Reached target basic.target - Basic System. May 8 00:22:29.077620 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:22:29.090845 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:22:29.094359 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:22:29.096921 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:22:29.141721 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:22:29.143186 kernel: EXT4-fs (vda9): mounted filesystem f1546e2a-34df-485a-a644-37e10cd925e0 r/w with ordered data mode. Quota mode: none. May 8 00:22:29.142973 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:22:29.152050 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:22:29.153639 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:22:29.155058 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:22:29.155096 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:22:29.163987 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) May 8 00:22:29.164009 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:22:29.164020 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 00:22:29.164035 kernel: BTRFS info (device vda6): using free space tree May 8 00:22:29.155118 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:22:29.162193 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:22:29.169766 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:22:29.166048 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:22:29.169898 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:22:29.209680 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:22:29.212756 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory May 8 00:22:29.215774 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:22:29.218718 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:22:29.288090 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
May 8 00:22:29.294055 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:22:29.295573 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:22:29.301968 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:22:29.315826 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:22:29.318055 ignition[914]: INFO : Ignition 2.19.0 May 8 00:22:29.318055 ignition[914]: INFO : Stage: mount May 8 00:22:29.319572 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:22:29.319572 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:22:29.319572 ignition[914]: INFO : mount: mount passed May 8 00:22:29.319572 ignition[914]: INFO : Ignition finished successfully May 8 00:22:29.321224 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:22:29.333080 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:22:29.843364 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:22:29.854133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:22:29.861971 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928) May 8 00:22:29.864038 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:22:29.864055 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 00:22:29.864066 kernel: BTRFS info (device vda6): using free space tree May 8 00:22:29.866970 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:22:29.867952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:22:29.884770 ignition[945]: INFO : Ignition 2.19.0 May 8 00:22:29.884770 ignition[945]: INFO : Stage: files May 8 00:22:29.886356 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:22:29.886356 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:22:29.886356 ignition[945]: DEBUG : files: compiled without relabeling support, skipping May 8 00:22:29.889743 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:22:29.889743 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:22:29.892659 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:22:29.894037 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:22:29.894037 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:22:29.893138 unknown[945]: wrote ssh authorized keys file for user: core May 8 00:22:29.897714 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 8 00:22:29.897714 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:22:29.897714 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:22:29.897714 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:22:29.897714 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:22:29.897714 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 00:22:29.897714 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 00:22:29.897714 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 8 00:22:30.188511 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 8 00:22:30.525560 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 00:22:30.525560 ignition[945]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 8 00:22:30.529509 ignition[945]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:22:30.529509 ignition[945]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:22:30.529509 ignition[945]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 8 00:22:30.529509 ignition[945]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:22:30.550929 ignition[945]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:22:30.554787 ignition[945]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:22:30.558097 ignition[945]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:22:30.558097 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:22:30.558097 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:22:30.558097 ignition[945]: INFO : files: files passed May 8 00:22:30.558097 ignition[945]: INFO : Ignition finished successfully May 8 00:22:30.557557 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:22:30.568351 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:22:30.571049 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:22:30.573213 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:22:30.573297 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:22:30.579050 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory May 8 00:22:30.582559 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:22:30.582559 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:22:30.585782 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:22:30.585618 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:22:30.589404 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:22:30.605257 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:22:30.626626 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:22:30.626756 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:22:30.629053 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:22:30.630781 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:22:30.632544 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:22:30.633450 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:22:30.650610 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:22:30.674261 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:22:30.683267 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:22:30.685033 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:22:30.687441 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:22:30.689248 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:22:30.689374 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:22:30.692119 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:22:30.694234 systemd[1]: Stopped target basic.target - Basic System. May 8 00:22:30.696182 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:22:30.698082 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:22:30.700380 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:22:30.702842 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:22:30.704869 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:22:30.707078 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:22:30.709595 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:22:30.712290 systemd[1]: Stopped target swap.target - Swaps. May 8 00:22:30.714052 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:22:30.714184 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:22:30.716536 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:22:30.718634 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:22:30.720779 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 8 00:22:30.722519 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:22:30.723874 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:22:30.724013 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:22:30.727014 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:22:30.727145 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:22:30.729517 systemd[1]: Stopped target paths.target - Path Units. May 8 00:22:30.733116 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:22:30.734189 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:22:30.735831 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:22:30.738315 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:22:30.740081 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:22:30.740255 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:22:30.742005 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:22:30.742168 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:22:30.743750 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:22:30.744523 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:22:30.746218 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:22:30.746327 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:22:30.756158 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:22:30.757112 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:22:30.757261 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:22:30.760598 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:22:30.762179 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:22:30.762386 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:22:30.764896 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:22:30.765091 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:22:30.773851 ignition[1000]: INFO : Ignition 2.19.0 May 8 00:22:30.773851 ignition[1000]: INFO : Stage: umount May 8 00:22:30.775023 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:22:30.779496 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:22:30.779496 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:22:30.779496 ignition[1000]: INFO : umount: umount passed May 8 00:22:30.779496 ignition[1000]: INFO : Ignition finished successfully May 8 00:22:30.775119 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:22:30.778441 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:22:30.778579 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:22:30.781231 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:22:30.782205 systemd[1]: Stopped target network.target - Network. May 8 00:22:30.783298 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:22:30.783381 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
May 8 00:22:30.784869 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:22:30.784920 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:22:30.786666 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:22:30.786714 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:22:30.788539 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:22:30.788589 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:22:30.790535 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:22:30.792173 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:22:30.801023 systemd-networkd[766]: eth0: DHCPv6 lease lost May 8 00:22:30.802186 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:22:30.802359 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:22:30.805674 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:22:30.805763 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:22:30.807287 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:22:30.807384 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:22:30.809246 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:22:30.809279 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:22:30.826095 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:22:30.826987 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:22:30.827060 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:22:30.828982 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:22:30.829035 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:22:30.831921 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:22:30.831985 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:22:30.834210 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:22:30.837497 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:22:30.837593 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:22:30.841626 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:22:30.841731 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:22:30.845588 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:22:30.845772 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:22:30.847597 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:22:30.847706 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:22:30.849437 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:22:30.849500 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:22:30.850799 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:22:30.850833 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:22:30.852490 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:22:30.852545 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
May 8 00:22:30.855308 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:22:30.855361 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:22:30.857929 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:22:30.858001 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:22:30.861790 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:22:30.863046 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:22:30.863111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:22:30.865058 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:22:30.865110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:22:30.871177 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:22:30.871288 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:22:30.873494 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:22:30.883117 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:22:30.890150 systemd[1]: Switching root. May 8 00:22:30.916164 systemd-journald[237]: Journal stopped May 8 00:22:31.696099 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). May 8 00:22:31.696170 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:22:31.696183 kernel: SELinux: policy capability open_perms=1 May 8 00:22:31.696192 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:22:31.696216 kernel: SELinux: policy capability always_check_network=0 May 8 00:22:31.696226 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:22:31.696237 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:22:31.696248 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:22:31.696262 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:22:31.696271 kernel: audit: type=1403 audit(1746663751.048:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:22:31.696295 systemd[1]: Successfully loaded SELinux policy in 31.995ms. May 8 00:22:31.696312 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.363ms. May 8 00:22:31.696325 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:22:31.696336 systemd[1]: Detected virtualization kvm. May 8 00:22:31.696346 systemd[1]: Detected architecture arm64. May 8 00:22:31.696357 systemd[1]: Detected first boot. May 8 00:22:31.696367 systemd[1]: Initializing machine ID from VM UUID. May 8 00:22:31.696380 zram_generator::config[1045]: No configuration found. May 8 00:22:31.696391 systemd[1]: Populated /etc with preset unit settings. May 8 00:22:31.696402 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:22:31.696412 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:22:31.696426 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:22:31.696440 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
May 8 00:22:31.696454 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:22:31.696464 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:22:31.696474 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:22:31.696563 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:22:31.696580 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:22:31.696591 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:22:31.696617 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:22:31.696667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:22:31.696681 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:22:31.696692 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:22:31.696703 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:22:31.696714 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:22:31.696725 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:22:31.696736 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 00:22:31.696747 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:22:31.696758 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:22:31.696770 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:22:31.696782 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:22:31.696793 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:22:31.696806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:22:31.696817 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:22:31.696827 systemd[1]: Reached target slices.target - Slice Units. May 8 00:22:31.696838 systemd[1]: Reached target swap.target - Swaps. May 8 00:22:31.696849 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:22:31.696860 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:22:31.696870 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:22:31.696881 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:22:31.696891 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:22:31.696902 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:22:31.696912 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:22:31.696923 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:22:31.696934 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:22:31.696944 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:22:31.696967 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:22:31.696980 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 8 00:22:31.696991 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:22:31.697001 systemd[1]: Reached target machines.target - Containers. May 8 00:22:31.697011 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:22:31.697023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:22:31.697033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:22:31.697045 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:22:31.697055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:22:31.697069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:22:31.697080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:22:31.697091 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:22:31.697101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:22:31.697112 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:22:31.697122 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:22:31.697132 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:22:31.697150 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:22:31.697161 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:22:31.697172 kernel: fuse: init (API version 7.39) May 8 00:22:31.697184 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:22:31.697194 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:22:31.697204 kernel: ACPI: bus type drm_connector registered May 8 00:22:31.697214 kernel: loop: module loaded May 8 00:22:31.697223 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:22:31.697233 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:22:31.697275 systemd-journald[1112]: Collecting audit messages is disabled. May 8 00:22:31.697299 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:22:31.697311 systemd-journald[1112]: Journal started May 8 00:22:31.697334 systemd-journald[1112]: Runtime Journal (/run/log/journal/01c7e033ec18460aabcdfc75684f1394) is 5.9M, max 47.3M, 41.4M free. May 8 00:22:31.463439 systemd[1]: Queued start job for default target multi-user.target. May 8 00:22:31.484982 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:22:31.485382 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:22:31.702422 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:22:31.702479 systemd[1]: Stopped verity-setup.service. May 8 00:22:31.706239 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:22:31.707000 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:22:31.708331 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:22:31.709615 systemd[1]: Mounted media.mount - External Media Directory. 
May 8 00:22:31.710806 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:22:31.712094 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:22:31.713293 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:22:31.714517 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:22:31.715990 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:22:31.717479 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:22:31.717643 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:22:31.719064 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:22:31.719202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:22:31.721386 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:22:31.721541 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:22:31.722905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:22:31.723065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:22:31.724470 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:22:31.724613 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:22:31.725933 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:22:31.726214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:22:31.727665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:22:31.729063 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:22:31.730577 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:22:31.743717 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:22:31.755072 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:22:31.757292 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:22:31.758429 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:22:31.758472 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:22:31.760561 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:22:31.762878 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:22:31.765133 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:22:31.766258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:22:31.768361 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:22:31.770493 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:22:31.771677 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:22:31.772690 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:22:31.773797 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 8 00:22:31.779323 systemd-journald[1112]: Time spent on flushing to /var/log/journal/01c7e033ec18460aabcdfc75684f1394 is 26.893ms for 834 entries. May 8 00:22:31.779323 systemd-journald[1112]: System Journal (/var/log/journal/01c7e033ec18460aabcdfc75684f1394) is 8.0M, max 195.6M, 187.6M free. May 8 00:22:31.822893 systemd-journald[1112]: Received client request to flush runtime journal. May 8 00:22:31.822936 kernel: loop0: detected capacity change from 0 to 194096 May 8 00:22:31.778185 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:22:31.781240 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:22:31.784517 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:22:31.789628 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:22:31.791585 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:22:31.792893 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:22:31.794582 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:22:31.810195 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:22:31.814436 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:22:31.816345 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:22:31.820244 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:22:31.829258 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:22:31.830380 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:22:31.831009 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:22:31.833198 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:22:31.838231 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:22:31.839460 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:22:31.849324 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:22:31.850206 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:22:31.856999 kernel: loop1: detected capacity change from 0 to 114328 May 8 00:22:31.863472 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 8 00:22:31.863489 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 8 00:22:31.870160 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:22:31.889220 kernel: loop2: detected capacity change from 0 to 114432 May 8 00:22:31.934003 kernel: loop3: detected capacity change from 0 to 194096 May 8 00:22:31.944986 kernel: loop4: detected capacity change from 0 to 114328 May 8 00:22:31.950975 kernel: loop5: detected capacity change from 0 to 114432 May 8 00:22:31.953897 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:22:31.954380 (sd-merge)[1180]: Merged extensions into '/usr'. May 8 00:22:31.957902 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:22:31.957917 systemd[1]: Reloading... 
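The "Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'" and "Merged extensions into '/usr'" messages above are systemd-sysext overlaying those system-extension images onto /usr, after which systemd reloads its unit set. A minimal sketch of inspecting and re-applying that merge on a running host (only the extension names already present in the log are assumed):

    # Show which sysext images were discovered and what is currently merged into /usr
    systemd-sysext list
    systemd-sysext status

    # Re-scan the extension directories (/etc/extensions, /var/lib/extensions, ...) and re-merge
    systemd-sysext refresh

The "Reloading requested from client PID ... ('systemd-sysext')" entries that follow correspond to that re-merge making new unit files (containerd, docker, kubelet) visible to systemd.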
May 8 00:22:32.018235 zram_generator::config[1203]: No configuration found. May 8 00:22:32.098607 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:22:32.130912 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:22:32.176184 systemd[1]: Reloading finished in 217 ms. May 8 00:22:32.202581 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:22:32.204051 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:22:32.220168 systemd[1]: Starting ensure-sysext.service... May 8 00:22:32.222133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:22:32.234863 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... May 8 00:22:32.234882 systemd[1]: Reloading... May 8 00:22:32.249452 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:22:32.249795 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:22:32.250547 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:22:32.250780 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 8 00:22:32.250837 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 8 00:22:32.253633 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:22:32.253643 systemd-tmpfiles[1241]: Skipping /boot May 8 00:22:32.261244 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:22:32.261261 systemd-tmpfiles[1241]: Skipping /boot May 8 00:22:32.286335 zram_generator::config[1268]: No configuration found. May 8 00:22:32.381791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:22:32.427430 systemd[1]: Reloading finished in 192 ms. May 8 00:22:32.446748 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:22:32.459395 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:22:32.468263 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:22:32.471227 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:22:32.473800 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:22:32.477345 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:22:32.485216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:22:32.490381 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:22:32.495377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:22:32.505747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
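The "Duplicate line for path "/root", ignoring" warnings above come from systemd-tmpfiles finding the same path declared in more than one tmpfiles.d fragment; the duplicate is ignored, so they are harmless. For reference, tmpfiles.d entries are one line per path in Type/Path/Mode/UID/GID/Age/Argument order; a hypothetical drop-in (file name and entries are illustrative, not the provision.conf or systemd-flatcar.conf fragments named in the log):

    # Create an illustrative tmpfiles.d drop-in and apply it immediately
    cat <<'EOF' >/etc/tmpfiles.d/example.conf
    # Type Path             Mode UID  GID             Age Argument
    d      /root            0700 root root            -   -
    d      /var/log/journal 2755 root systemd-journal -   -
    EOF
    systemd-tmpfiles --create /etc/tmpfiles.d/example.conf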
May 8 00:22:32.512477 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:22:32.516795 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:22:32.518159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:22:32.519059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:22:32.521016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:22:32.525266 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:22:32.527173 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:22:32.527362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:22:32.531555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:22:32.531745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:22:32.537425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:22:32.551383 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:22:32.553020 systemd-udevd[1310]: Using default interface naming scheme 'v255'. May 8 00:22:32.553904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:22:32.557320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:22:32.558585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:22:32.561256 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:22:32.565195 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:22:32.567416 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:22:32.568620 augenrules[1335]: No rules May 8 00:22:32.571676 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:22:32.573513 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:22:32.575418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:22:32.576215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:22:32.579170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:22:32.579326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:22:32.580918 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:22:32.582705 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:22:32.582840 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:22:32.584830 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:22:32.599318 systemd[1]: Finished ensure-sysext.service. May 8 00:22:32.603174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:22:32.615245 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:22:32.620060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:22:32.622420 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 8 00:22:32.627747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:22:32.628914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:22:32.630571 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:22:32.634181 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:22:32.635285 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:22:32.635574 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:22:32.638581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:22:32.638753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:22:32.640306 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:22:32.640444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:22:32.641850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:22:32.642009 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:22:32.643808 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:22:32.643936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:22:32.647985 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1343) May 8 00:22:32.662322 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 8 00:22:32.667216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:22:32.667279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:22:32.688496 systemd-resolved[1309]: Positive Trust Anchors: May 8 00:22:32.689380 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:22:32.689499 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:22:32.698068 systemd-resolved[1309]: Defaulting to hostname 'linux'. May 8 00:22:32.701251 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:22:32.711146 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:22:32.712664 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:22:32.714134 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 8 00:22:32.717255 systemd-networkd[1379]: lo: Link UP May 8 00:22:32.717482 systemd-networkd[1379]: lo: Gained carrier May 8 00:22:32.718736 systemd-networkd[1379]: Enumeration completed May 8 00:22:32.718905 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:22:32.719839 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:22:32.719924 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:22:32.720156 systemd[1]: Reached target network.target - Network. May 8 00:22:32.721481 systemd-networkd[1379]: eth0: Link UP May 8 00:22:32.721550 systemd-networkd[1379]: eth0: Gained carrier May 8 00:22:32.721599 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:22:32.726125 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:22:32.734173 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:22:32.735705 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:22:32.737177 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:22:32.737375 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:22:32.739295 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. May 8 00:22:32.740371 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:22:32.740430 systemd-timesyncd[1380]: Initial clock synchronization to Thu 2025-05-08 00:22:32.470133 UTC. May 8 00:22:32.765292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:22:32.776359 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:22:32.779318 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:22:32.802877 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:22:32.812068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:22:32.835260 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:22:32.836667 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:22:32.837774 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:22:32.838910 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:22:32.840160 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:22:32.841578 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:22:32.842718 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:22:32.843935 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:22:32.845146 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:22:32.845181 systemd[1]: Reached target paths.target - Path Units. 
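eth0 above is configured by the stock /usr/lib/systemd/network/zz-default.network unit, which was matched by interface name (hence the "potentially unpredictable interface name" note) and brings the link up with DHCP, yielding the 10.0.0.65/16 lease via 10.0.0.1. A .network unit of roughly that shape, as a sketch (the file name and the Name= glob here are assumptions, not copied from the real Flatcar unit):

    # Illustrative DHCP-everything unit for systemd-networkd
    cat <<'EOF' >/etc/systemd/network/50-dhcp.network
    [Match]
    Name=e*

    [Network]
    DHCP=yes
    EOF
    networkctl reload    # have systemd-networkd re-read its .network files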
May 8 00:22:32.846087 systemd[1]: Reached target timers.target - Timer Units. May 8 00:22:32.847661 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:22:32.849982 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:22:32.857981 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:22:32.860107 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:22:32.861555 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:22:32.862687 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:22:32.863628 systemd[1]: Reached target basic.target - Basic System. May 8 00:22:32.864547 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:22:32.864576 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:22:32.865488 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:22:32.867416 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:22:32.869352 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:22:32.871106 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:22:32.874161 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:22:32.877571 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:22:32.881163 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:22:32.882873 jq[1412]: false May 8 00:22:32.884173 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:22:32.886910 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:22:32.891523 extend-filesystems[1413]: Found loop3 May 8 00:22:32.892608 extend-filesystems[1413]: Found loop4 May 8 00:22:32.892608 extend-filesystems[1413]: Found loop5 May 8 00:22:32.892608 extend-filesystems[1413]: Found vda May 8 00:22:32.892608 extend-filesystems[1413]: Found vda1 May 8 00:22:32.892608 extend-filesystems[1413]: Found vda2 May 8 00:22:32.892608 extend-filesystems[1413]: Found vda3 May 8 00:22:32.892608 extend-filesystems[1413]: Found usr May 8 00:22:32.892608 extend-filesystems[1413]: Found vda4 May 8 00:22:32.892608 extend-filesystems[1413]: Found vda6 May 8 00:22:32.892608 extend-filesystems[1413]: Found vda7 May 8 00:22:32.892608 extend-filesystems[1413]: Found vda9 May 8 00:22:32.892608 extend-filesystems[1413]: Checking size of /dev/vda9 May 8 00:22:32.902222 dbus-daemon[1411]: [system] SELinux support is enabled May 8 00:22:32.893566 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:22:32.896863 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:22:32.897404 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:22:32.899784 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:22:32.910898 jq[1427]: true May 8 00:22:32.904145 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
May 8 00:22:32.905785 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:22:32.911825 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:22:32.916100 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:22:32.916267 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:22:32.916526 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:22:32.916682 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:22:32.917999 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:22:32.918169 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:22:32.934213 jq[1432]: true May 8 00:22:32.936043 extend-filesystems[1413]: Resized partition /dev/vda9 May 8 00:22:32.939143 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1353) May 8 00:22:32.938684 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:22:32.938709 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:22:32.941212 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:22:32.941240 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:22:32.954028 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) May 8 00:22:32.956294 (ntainerd)[1441]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:22:32.967797 update_engine[1425]: I20250508 00:22:32.965595 1425 main.cc:92] Flatcar Update Engine starting May 8 00:22:32.969975 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:22:32.972233 systemd[1]: Started update-engine.service - Update Engine. May 8 00:22:32.972383 update_engine[1425]: I20250508 00:22:32.972337 1425 update_check_scheduler.cc:74] Next update check in 2m25s May 8 00:22:32.984158 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:22:32.998584 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:22:32.998781 systemd-logind[1418]: New seat seat0. May 8 00:22:32.999376 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:22:33.005985 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:22:33.022210 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:22:33.022210 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:22:33.022210 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:22:33.026490 extend-filesystems[1413]: Resized filesystem in /dev/vda9 May 8 00:22:33.024998 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:22:33.028759 bash[1461]: Updated "/home/core/.ssh/authorized_keys" May 8 00:22:33.025202 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
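The extend-filesystems output above records an on-line grow of the root filesystem: /dev/vda9 is resized from 553472 to 1864699 4k blocks while still mounted at /. Done by hand, the equivalent is simply (device name taken from this log; the underlying partition must already have been enlarged):

    # Grow a mounted ext4 filesystem to fill its partition, then confirm
    resize2fs /dev/vda9
    df -h /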
May 8 00:22:33.030738 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:22:33.034803 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:22:33.038439 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:22:33.136012 containerd[1441]: time="2025-05-08T00:22:33.135862700Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:22:33.159129 containerd[1441]: time="2025-05-08T00:22:33.159068293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:22:33.160920 containerd[1441]: time="2025-05-08T00:22:33.160351178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:22:33.160920 containerd[1441]: time="2025-05-08T00:22:33.160545777Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:22:33.160920 containerd[1441]: time="2025-05-08T00:22:33.160598572Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:22:33.160920 containerd[1441]: time="2025-05-08T00:22:33.160784359Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:22:33.160920 containerd[1441]: time="2025-05-08T00:22:33.160806968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:22:33.160920 containerd[1441]: time="2025-05-08T00:22:33.160872827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:22:33.160920 containerd[1441]: time="2025-05-08T00:22:33.160887513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:22:33.161451 containerd[1441]: time="2025-05-08T00:22:33.161412679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:22:33.161451 containerd[1441]: time="2025-05-08T00:22:33.161449899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:22:33.161505 containerd[1441]: time="2025-05-08T00:22:33.161465242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:22:33.161505 containerd[1441]: time="2025-05-08T00:22:33.161475523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:22:33.161568 containerd[1441]: time="2025-05-08T00:22:33.161550928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:22:33.161768 containerd[1441]: time="2025-05-08T00:22:33.161747884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 May 8 00:22:33.161868 containerd[1441]: time="2025-05-08T00:22:33.161847793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:22:33.161868 containerd[1441]: time="2025-05-08T00:22:33.161864605Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:22:33.161980 containerd[1441]: time="2025-05-08T00:22:33.161948706Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:22:33.162051 containerd[1441]: time="2025-05-08T00:22:33.162021792Z" level=info msg="metadata content store policy set" policy=shared May 8 00:22:33.165941 containerd[1441]: time="2025-05-08T00:22:33.165900090Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:22:33.165941 containerd[1441]: time="2025-05-08T00:22:33.165960305Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:22:33.165941 containerd[1441]: time="2025-05-08T00:22:33.165977620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:22:33.166074 containerd[1441]: time="2025-05-08T00:22:33.165992616Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:22:33.166074 containerd[1441]: time="2025-05-08T00:22:33.166005834Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:22:33.166169 containerd[1441]: time="2025-05-08T00:22:33.166150266Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:22:33.166381 containerd[1441]: time="2025-05-08T00:22:33.166353175Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:22:33.166471 containerd[1441]: time="2025-05-08T00:22:33.166453702Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:22:33.166491 containerd[1441]: time="2025-05-08T00:22:33.166474688Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:22:33.166509 containerd[1441]: time="2025-05-08T00:22:33.166489684Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:22:33.166509 containerd[1441]: time="2025-05-08T00:22:33.166503714Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:22:33.166547 containerd[1441]: time="2025-05-08T00:22:33.166515231Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:22:33.166547 containerd[1441]: time="2025-05-08T00:22:33.166527135Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:22:33.166547 containerd[1441]: time="2025-05-08T00:22:33.166539928Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 8 00:22:33.166593 containerd[1441]: time="2025-05-08T00:22:33.166553069Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:22:33.166593 containerd[1441]: time="2025-05-08T00:22:33.166565244Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:22:33.166593 containerd[1441]: time="2025-05-08T00:22:33.166576529Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:22:33.166593 containerd[1441]: time="2025-05-08T00:22:33.166587235Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:22:33.166658 containerd[1441]: time="2025-05-08T00:22:33.166605246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166658 containerd[1441]: time="2025-05-08T00:22:33.166618386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166658 containerd[1441]: time="2025-05-08T00:22:33.166630174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166658 containerd[1441]: time="2025-05-08T00:22:33.166641692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166658 containerd[1441]: time="2025-05-08T00:22:33.166653171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166739 containerd[1441]: time="2025-05-08T00:22:33.166665268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166739 containerd[1441]: time="2025-05-08T00:22:33.166676399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166739 containerd[1441]: time="2025-05-08T00:22:33.166695066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166739 containerd[1441]: time="2025-05-08T00:22:33.166707705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166739 containerd[1441]: time="2025-05-08T00:22:33.166720459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166739 containerd[1441]: time="2025-05-08T00:22:33.166731745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166839 containerd[1441]: time="2025-05-08T00:22:33.166742760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166839 containerd[1441]: time="2025-05-08T00:22:33.166754509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166839 containerd[1441]: time="2025-05-08T00:22:33.166768693Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:22:33.166839 containerd[1441]: time="2025-05-08T00:22:33.166787786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 8 00:22:33.166839 containerd[1441]: time="2025-05-08T00:22:33.166798685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:22:33.166839 containerd[1441]: time="2025-05-08T00:22:33.166808425Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:22:33.166964 containerd[1441]: time="2025-05-08T00:22:33.166917493Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:22:33.166964 containerd[1441]: time="2025-05-08T00:22:33.166942847Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:22:33.167002 containerd[1441]: time="2025-05-08T00:22:33.166982501Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:22:33.167002 containerd[1441]: time="2025-05-08T00:22:33.166995101Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:22:33.167043 containerd[1441]: time="2025-05-08T00:22:33.167004029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:22:33.167043 containerd[1441]: time="2025-05-08T00:22:33.167015391Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:22:33.167043 containerd[1441]: time="2025-05-08T00:22:33.167024551Z" level=info msg="NRI interface is disabled by configuration." May 8 00:22:33.167043 containerd[1441]: time="2025-05-08T00:22:33.167033673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:22:33.167416 containerd[1441]: time="2025-05-08T00:22:33.167350944Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:22:33.167416 containerd[1441]: time="2025-05-08T00:22:33.167411082Z" level=info msg="Connect containerd service" May 8 00:22:33.167545 containerd[1441]: time="2025-05-08T00:22:33.167506585Z" level=info msg="using legacy CRI server" May 8 00:22:33.167545 containerd[1441]: time="2025-05-08T00:22:33.167513464Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:22:33.167620 containerd[1441]: time="2025-05-08T00:22:33.167603556Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:22:33.168243 containerd[1441]: time="2025-05-08T00:22:33.168213905Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:22:33.168667 
containerd[1441]: time="2025-05-08T00:22:33.168640013Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:22:33.168714 containerd[1441]: time="2025-05-08T00:22:33.168676343Z" level=info msg="Start subscribing containerd event" May 8 00:22:33.168736 containerd[1441]: time="2025-05-08T00:22:33.168683996Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:22:33.168736 containerd[1441]: time="2025-05-08T00:22:33.168729795Z" level=info msg="Start recovering state" May 8 00:22:33.168816 containerd[1441]: time="2025-05-08T00:22:33.168801799Z" level=info msg="Start event monitor" May 8 00:22:33.168835 containerd[1441]: time="2025-05-08T00:22:33.168819268Z" level=info msg="Start snapshots syncer" May 8 00:22:33.168835 containerd[1441]: time="2025-05-08T00:22:33.168829124Z" level=info msg="Start cni network conf syncer for default" May 8 00:22:33.168873 containerd[1441]: time="2025-05-08T00:22:33.168837047Z" level=info msg="Start streaming server" May 8 00:22:33.168990 containerd[1441]: time="2025-05-08T00:22:33.168978348Z" level=info msg="containerd successfully booted in 0.034384s" May 8 00:22:33.169086 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:22:33.369622 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:22:33.387980 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:22:33.402222 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:22:33.407538 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:22:33.409001 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:22:33.411640 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:22:33.422469 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:22:33.434223 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:22:33.436147 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 00:22:33.437376 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:22:34.451094 systemd-networkd[1379]: eth0: Gained IPv6LL May 8 00:22:34.453732 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:22:34.455599 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:22:34.473181 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:22:34.475482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:22:34.477534 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:22:34.492043 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:22:34.492213 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:22:34.494168 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:22:34.504015 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:22:35.049043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:22:35.050603 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:22:35.054021 (kubelet)[1518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:22:35.056396 systemd[1]: Startup finished in 587ms (kernel) + 4.329s (initrd) + 4.040s (userspace) = 8.957s. 
May 8 00:22:35.485670 kubelet[1518]: E0508 00:22:35.485571 1518 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:22:35.488254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:22:35.488419 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:22:39.310871 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:22:39.312063 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:51354.service - OpenSSH per-connection server daemon (10.0.0.1:51354). May 8 00:22:39.356128 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 51354 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:39.358640 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:39.369014 systemd-logind[1418]: New session 1 of user core. May 8 00:22:39.370124 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:22:39.384216 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:22:39.393491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:22:39.395841 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:22:39.402410 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:22:39.474041 systemd[1537]: Queued start job for default target default.target. May 8 00:22:39.481857 systemd[1537]: Created slice app.slice - User Application Slice. May 8 00:22:39.481887 systemd[1537]: Reached target paths.target - Paths. May 8 00:22:39.481899 systemd[1537]: Reached target timers.target - Timers. May 8 00:22:39.483175 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:22:39.492548 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:22:39.492611 systemd[1537]: Reached target sockets.target - Sockets. May 8 00:22:39.492623 systemd[1537]: Reached target basic.target - Basic System. May 8 00:22:39.492658 systemd[1537]: Reached target default.target - Main User Target. May 8 00:22:39.492683 systemd[1537]: Startup finished in 85ms. May 8 00:22:39.493006 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:22:39.494220 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:22:39.564682 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:51368.service - OpenSSH per-connection server daemon (10.0.0.1:51368). May 8 00:22:39.596016 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 51368 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:39.597334 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:39.601877 systemd-logind[1418]: New session 2 of user core. May 8 00:22:39.616110 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:22:39.666722 sshd[1548]: pam_unix(sshd:session): session closed for user core May 8 00:22:39.677326 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:51368.service: Deactivated successfully. May 8 00:22:39.680488 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:22:39.681532 systemd-logind[1418]: Session 2 logged out. 
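The kubelet exit above is the usual first-boot condition: nothing has written /var/lib/kubelet/config.yaml yet, so the unit fails until a later provisioning step supplies that file. As a hedged illustration only, the sketch below shows the kind of KubeletConfiguration normally expected at that path; the values are assumptions chosen to match settings that surface later in this log (systemd cgroup driver, static pods under /etc/kubernetes/manifests, client CA at /etc/kubernetes/pki/ca.crt) and are not recovered from the node itself.

```yaml
# Illustrative sketch only (assumption), not the file actually installed on this node:
# the kind of KubeletConfiguration normally placed at /var/lib/kubelet/config.yaml.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # matches SystemdCgroup:true in the containerd runc options above
staticPodPath: /etc/kubernetes/manifests  # matches the static pod path the kubelet watches later in this log
clusterDomain: cluster.local              # assumed default
clusterDNS:
  - 10.96.0.10                            # assumed kubeadm default cluster DNS service IP
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```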
Waiting for processes to exit. May 8 00:22:39.682566 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:51384.service - OpenSSH per-connection server daemon (10.0.0.1:51384). May 8 00:22:39.683294 systemd-logind[1418]: Removed session 2. May 8 00:22:39.714073 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 51384 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:39.715303 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:39.719430 systemd-logind[1418]: New session 3 of user core. May 8 00:22:39.728104 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:22:39.775491 sshd[1555]: pam_unix(sshd:session): session closed for user core May 8 00:22:39.785203 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:51384.service: Deactivated successfully. May 8 00:22:39.786460 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:22:39.788946 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. May 8 00:22:39.790069 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:51396.service - OpenSSH per-connection server daemon (10.0.0.1:51396). May 8 00:22:39.790774 systemd-logind[1418]: Removed session 3. May 8 00:22:39.821464 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 51396 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:39.823013 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:39.827024 systemd-logind[1418]: New session 4 of user core. May 8 00:22:39.838120 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:22:39.890597 sshd[1562]: pam_unix(sshd:session): session closed for user core May 8 00:22:39.900315 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:51396.service: Deactivated successfully. May 8 00:22:39.901611 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:22:39.904897 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. May 8 00:22:39.924333 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:51398.service - OpenSSH per-connection server daemon (10.0.0.1:51398). May 8 00:22:39.925287 systemd-logind[1418]: Removed session 4. May 8 00:22:39.951796 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 51398 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:39.953041 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:39.957018 systemd-logind[1418]: New session 5 of user core. May 8 00:22:39.971093 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:22:40.041433 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:22:40.041712 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:22:40.054671 sudo[1572]: pam_unix(sudo:session): session closed for user root May 8 00:22:40.056460 sshd[1569]: pam_unix(sshd:session): session closed for user core May 8 00:22:40.065304 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:51398.service: Deactivated successfully. May 8 00:22:40.066694 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:22:40.069056 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. May 8 00:22:40.079238 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:51414.service - OpenSSH per-connection server daemon (10.0.0.1:51414). May 8 00:22:40.080806 systemd-logind[1418]: Removed session 5. 
May 8 00:22:40.107137 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 51414 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:40.108387 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:40.112993 systemd-logind[1418]: New session 6 of user core. May 8 00:22:40.127146 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:22:40.177503 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:22:40.177805 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:22:40.180807 sudo[1581]: pam_unix(sudo:session): session closed for user root May 8 00:22:40.185596 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:22:40.185858 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:22:40.206203 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 00:22:40.207436 auditctl[1584]: No rules May 8 00:22:40.208359 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:22:40.210002 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:22:40.211606 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:22:40.234208 augenrules[1602]: No rules May 8 00:22:40.235406 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:22:40.236702 sudo[1580]: pam_unix(sudo:session): session closed for user root May 8 00:22:40.238341 sshd[1577]: pam_unix(sshd:session): session closed for user core May 8 00:22:40.249446 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:51414.service: Deactivated successfully. May 8 00:22:40.251120 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:22:40.251717 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. May 8 00:22:40.253614 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:51420.service - OpenSSH per-connection server daemon (10.0.0.1:51420). May 8 00:22:40.254427 systemd-logind[1418]: Removed session 6. May 8 00:22:40.284798 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 51420 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:40.286154 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:40.289996 systemd-logind[1418]: New session 7 of user core. May 8 00:22:40.303152 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:22:40.352389 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:22:40.353000 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:22:40.371277 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:22:40.385689 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:22:40.385883 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:22:40.874314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:22:40.883269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:22:40.900484 systemd[1]: Reloading requested from client PID 1662 ('systemctl') (unit session-7.scope)... May 8 00:22:40.900504 systemd[1]: Reloading... May 8 00:22:40.965408 zram_generator::config[1697]: No configuration found. 
May 8 00:22:41.152344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:22:41.216018 systemd[1]: Reloading finished in 315 ms. May 8 00:22:41.258088 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 00:22:41.258153 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 00:22:41.258411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:22:41.260873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:22:41.354941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:22:41.358852 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:22:41.395816 kubelet[1746]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:22:41.395816 kubelet[1746]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:22:41.395816 kubelet[1746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:22:41.396812 kubelet[1746]: I0508 00:22:41.396754 1746 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:22:42.189658 kubelet[1746]: I0508 00:22:42.189600 1746 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:22:42.189658 kubelet[1746]: I0508 00:22:42.189631 1746 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:22:42.191212 kubelet[1746]: I0508 00:22:42.191159 1746 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:22:42.229500 kubelet[1746]: I0508 00:22:42.229322 1746 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:22:42.242236 kubelet[1746]: I0508 00:22:42.242206 1746 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:22:42.243341 kubelet[1746]: I0508 00:22:42.243286 1746 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:22:42.243505 kubelet[1746]: I0508 00:22:42.243337 1746 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:22:42.243583 kubelet[1746]: I0508 00:22:42.243571 1746 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:22:42.243583 kubelet[1746]: I0508 00:22:42.243580 1746 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:22:42.243850 kubelet[1746]: I0508 00:22:42.243826 1746 state_mem.go:36] "Initialized new in-memory state store" May 8 00:22:42.244694 kubelet[1746]: I0508 00:22:42.244671 1746 kubelet.go:400] "Attempting to sync node with API server" May 8 00:22:42.244694 kubelet[1746]: I0508 00:22:42.244697 1746 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:22:42.244971 kubelet[1746]: I0508 00:22:42.244953 1746 kubelet.go:312] "Adding apiserver pod source" May 8 00:22:42.245562 kubelet[1746]: I0508 00:22:42.245171 1746 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:22:42.245562 kubelet[1746]: E0508 00:22:42.245319 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:42.245562 kubelet[1746]: E0508 00:22:42.245383 1746 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:42.248027 kubelet[1746]: I0508 00:22:42.248001 1746 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:22:42.248425 kubelet[1746]: I0508 00:22:42.248402 1746 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:22:42.248474 kubelet[1746]: W0508 00:22:42.248448 1746 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:22:42.252192 kubelet[1746]: I0508 00:22:42.249340 1746 server.go:1264] "Started kubelet" May 8 00:22:42.252192 kubelet[1746]: I0508 00:22:42.250929 1746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:22:42.252686 kubelet[1746]: I0508 00:22:42.252660 1746 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:22:42.253108 kubelet[1746]: W0508 00:22:42.253074 1746 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.65" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 8 00:22:42.253108 kubelet[1746]: E0508 00:22:42.253108 1746 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.65" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 8 00:22:42.253351 kubelet[1746]: E0508 00:22:42.253133 1746 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.65.183d657002eae9b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.65,UID:10.0.0.65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.65,},FirstTimestamp:2025-05-08 00:22:42.249312694 +0000 UTC m=+0.887545245,LastTimestamp:2025-05-08 00:22:42.249312694 +0000 UTC m=+0.887545245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.65,}" May 8 00:22:42.253848 kubelet[1746]: W0508 00:22:42.253825 1746 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 8 00:22:42.253907 kubelet[1746]: E0508 00:22:42.253851 1746 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 8 00:22:42.254129 kubelet[1746]: I0508 00:22:42.254110 1746 server.go:455] "Adding debug handlers to kubelet server" May 8 00:22:42.257326 kubelet[1746]: I0508 00:22:42.257287 1746 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:22:42.257603 kubelet[1746]: I0508 00:22:42.257539 1746 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:22:42.257804 kubelet[1746]: I0508 00:22:42.257780 1746 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:22:42.259240 kubelet[1746]: I0508 00:22:42.258170 1746 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:22:42.260478 kubelet[1746]: I0508 00:22:42.260360 1746 reconciler.go:26] "Reconciler: start to sync state" May 8 00:22:42.260741 kubelet[1746]: I0508 00:22:42.260720 1746 factory.go:221] Registration of the systemd container factory successfully May 8 00:22:42.260901 kubelet[1746]: I0508 00:22:42.260875 1746 factory.go:219] Registration of the crio container factory 
failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:22:42.263312 kubelet[1746]: I0508 00:22:42.263280 1746 factory.go:221] Registration of the containerd container factory successfully May 8 00:22:42.263865 kubelet[1746]: E0508 00:22:42.263843 1746 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:22:42.264567 kubelet[1746]: W0508 00:22:42.264543 1746 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 8 00:22:42.264657 kubelet[1746]: E0508 00:22:42.264645 1746 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 8 00:22:42.264788 kubelet[1746]: E0508 00:22:42.264769 1746 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.65\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 8 00:22:42.271369 kubelet[1746]: E0508 00:22:42.271252 1746 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.65.183d657003c85ba7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.65,UID:10.0.0.65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.65,},FirstTimestamp:2025-05-08 00:22:42.263825319 +0000 UTC m=+0.902057869,LastTimestamp:2025-05-08 00:22:42.263825319 +0000 UTC m=+0.902057869,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.65,}" May 8 00:22:42.272823 kubelet[1746]: I0508 00:22:42.272728 1746 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:22:42.272823 kubelet[1746]: I0508 00:22:42.272743 1746 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:22:42.272823 kubelet[1746]: I0508 00:22:42.272761 1746 state_mem.go:36] "Initialized new in-memory state store" May 8 00:22:42.335309 kubelet[1746]: I0508 00:22:42.335276 1746 policy_none.go:49] "None policy: Start" May 8 00:22:42.336225 kubelet[1746]: I0508 00:22:42.336128 1746 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:22:42.336225 kubelet[1746]: I0508 00:22:42.336154 1746 state_mem.go:35] "Initializing new in-memory state store" May 8 00:22:42.343650 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:22:42.357600 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 8 00:22:42.361876 kubelet[1746]: I0508 00:22:42.361204 1746 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.65" May 8 00:22:42.362496 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:22:42.363152 kubelet[1746]: I0508 00:22:42.363106 1746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:22:42.364304 kubelet[1746]: I0508 00:22:42.364275 1746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:22:42.364527 kubelet[1746]: I0508 00:22:42.364369 1746 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:22:42.364527 kubelet[1746]: I0508 00:22:42.364387 1746 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:22:42.364527 kubelet[1746]: E0508 00:22:42.364433 1746 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:22:42.368312 kubelet[1746]: I0508 00:22:42.368289 1746 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.65" May 8 00:22:42.371142 kubelet[1746]: I0508 00:22:42.371102 1746 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:22:42.371384 kubelet[1746]: I0508 00:22:42.371339 1746 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:22:42.371911 kubelet[1746]: I0508 00:22:42.371458 1746 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:22:42.374169 kubelet[1746]: E0508 00:22:42.374128 1746 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.65\" not found" May 8 00:22:42.389243 sudo[1613]: pam_unix(sudo:session): session closed for user root May 8 00:22:42.391151 sshd[1610]: pam_unix(sshd:session): session closed for user core May 8 00:22:42.393634 kubelet[1746]: E0508 00:22:42.393599 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:42.394532 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:51420.service: Deactivated successfully. May 8 00:22:42.396465 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:22:42.397342 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. May 8 00:22:42.398229 systemd-logind[1418]: Removed session 7. 
May 8 00:22:42.494647 kubelet[1746]: E0508 00:22:42.494512 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:42.596690 kubelet[1746]: E0508 00:22:42.595112 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:42.695645 kubelet[1746]: E0508 00:22:42.695566 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:42.796370 kubelet[1746]: E0508 00:22:42.796248 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:42.896791 kubelet[1746]: E0508 00:22:42.896738 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:42.997277 kubelet[1746]: E0508 00:22:42.997235 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:43.097766 kubelet[1746]: E0508 00:22:43.097667 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:43.193347 kubelet[1746]: I0508 00:22:43.193265 1746 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 8 00:22:43.193676 kubelet[1746]: E0508 00:22:43.193364 1746 request.go:1116] Unexpected error when reading response body: read tcp 10.0.0.65:37788->10.0.0.58:6443: use of closed network connection May 8 00:22:43.193676 kubelet[1746]: W0508 00:22:43.193407 1746 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: unexpected error when reading response body. Please retry. Original error: read tcp 10.0.0.65:37788->10.0.0.58:6443: use of closed network connection May 8 00:22:43.193676 kubelet[1746]: E0508 00:22:43.193421 1746 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: unexpected error when reading response body. Please retry. 
Original error: read tcp 10.0.0.65:37788->10.0.0.58:6443: use of closed network connection May 8 00:22:43.193676 kubelet[1746]: W0508 00:22:43.193530 1746 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 00:22:43.198405 kubelet[1746]: E0508 00:22:43.198375 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:43.245785 kubelet[1746]: E0508 00:22:43.245738 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:43.298978 kubelet[1746]: E0508 00:22:43.298912 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:43.399775 kubelet[1746]: E0508 00:22:43.399655 1746 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.65\" not found" May 8 00:22:43.501427 kubelet[1746]: I0508 00:22:43.501334 1746 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 8 00:22:43.502142 containerd[1441]: time="2025-05-08T00:22:43.502043640Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:22:43.502416 kubelet[1746]: I0508 00:22:43.502233 1746 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 8 00:22:44.246415 kubelet[1746]: I0508 00:22:44.246354 1746 apiserver.go:52] "Watching apiserver" May 8 00:22:44.246415 kubelet[1746]: E0508 00:22:44.246379 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:44.258152 kubelet[1746]: I0508 00:22:44.258113 1746 topology_manager.go:215] "Topology Admit Handler" podUID="78f5beb4-29d2-41a1-bfe5-b1cf061196b2" podNamespace="calico-system" podName="calico-node-27bqw" May 8 00:22:44.258248 kubelet[1746]: I0508 00:22:44.258223 1746 topology_manager.go:215] "Topology Admit Handler" podUID="188278af-6012-468a-b809-e6cd106f483f" podNamespace="calico-system" podName="csi-node-driver-wpp8d" May 8 00:22:44.258871 kubelet[1746]: I0508 00:22:44.258837 1746 topology_manager.go:215] "Topology Admit Handler" podUID="942ad917-54f4-4f75-8f12-0b1cd1f1f20c" podNamespace="kube-system" podName="kube-proxy-zbr7g" May 8 00:22:44.259528 kubelet[1746]: E0508 00:22:44.259176 1746 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpp8d" podUID="188278af-6012-468a-b809-e6cd106f483f" May 8 00:22:44.264632 systemd[1]: Created slice kubepods-besteffort-pod78f5beb4_29d2_41a1_bfe5_b1cf061196b2.slice - libcontainer container kubepods-besteffort-pod78f5beb4_29d2_41a1_bfe5_b1cf061196b2.slice. 
May 8 00:22:44.268980 kubelet[1746]: I0508 00:22:44.268914 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-tigera-ca-bundle\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.268980 kubelet[1746]: I0508 00:22:44.268960 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-cni-bin-dir\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.268980 kubelet[1746]: I0508 00:22:44.268980 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/188278af-6012-468a-b809-e6cd106f483f-kubelet-dir\") pod \"csi-node-driver-wpp8d\" (UID: \"188278af-6012-468a-b809-e6cd106f483f\") " pod="calico-system/csi-node-driver-wpp8d" May 8 00:22:44.269110 kubelet[1746]: I0508 00:22:44.269018 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/188278af-6012-468a-b809-e6cd106f483f-socket-dir\") pod \"csi-node-driver-wpp8d\" (UID: \"188278af-6012-468a-b809-e6cd106f483f\") " pod="calico-system/csi-node-driver-wpp8d" May 8 00:22:44.269110 kubelet[1746]: I0508 00:22:44.269059 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8wg\" (UniqueName: \"kubernetes.io/projected/188278af-6012-468a-b809-e6cd106f483f-kube-api-access-dj8wg\") pod \"csi-node-driver-wpp8d\" (UID: \"188278af-6012-468a-b809-e6cd106f483f\") " pod="calico-system/csi-node-driver-wpp8d" May 8 00:22:44.269110 kubelet[1746]: I0508 00:22:44.269082 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-xtables-lock\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.269287 kubelet[1746]: I0508 00:22:44.269097 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-cni-log-dir\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.269318 kubelet[1746]: I0508 00:22:44.269297 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-flexvol-driver-host\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.269344 kubelet[1746]: I0508 00:22:44.269326 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5w46\" (UniqueName: \"kubernetes.io/projected/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-kube-api-access-t5w46\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.269482 kubelet[1746]: I0508 
00:22:44.269459 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-var-lib-calico\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.269512 kubelet[1746]: I0508 00:22:44.269492 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-cni-net-dir\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.269534 kubelet[1746]: I0508 00:22:44.269514 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/188278af-6012-468a-b809-e6cd106f483f-varrun\") pod \"csi-node-driver-wpp8d\" (UID: \"188278af-6012-468a-b809-e6cd106f483f\") " pod="calico-system/csi-node-driver-wpp8d" May 8 00:22:44.269534 kubelet[1746]: I0508 00:22:44.269531 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/188278af-6012-468a-b809-e6cd106f483f-registration-dir\") pod \"csi-node-driver-wpp8d\" (UID: \"188278af-6012-468a-b809-e6cd106f483f\") " pod="calico-system/csi-node-driver-wpp8d" May 8 00:22:44.269583 kubelet[1746]: I0508 00:22:44.269557 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-lib-modules\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.269583 kubelet[1746]: I0508 00:22:44.269578 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-policysync\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.269624 kubelet[1746]: I0508 00:22:44.269600 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-node-certs\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.269624 kubelet[1746]: I0508 00:22:44.269619 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78f5beb4-29d2-41a1-bfe5-b1cf061196b2-var-run-calico\") pod \"calico-node-27bqw\" (UID: \"78f5beb4-29d2-41a1-bfe5-b1cf061196b2\") " pod="calico-system/calico-node-27bqw" May 8 00:22:44.277886 systemd[1]: Created slice kubepods-besteffort-pod942ad917_54f4_4f75_8f12_0b1cd1f1f20c.slice - libcontainer container kubepods-besteffort-pod942ad917_54f4_4f75_8f12_0b1cd1f1f20c.slice. 
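The reconciler entries above attach the calico-node pod's host paths one volume at a time. Purely as a hedged sketch, the same volumes are typically declared in a calico-node DaemonSet roughly as follows; the manifest actually applied to this cluster is not part of the log, so the names and paths below are assumptions that mirror the reconciler entries.

```yaml
# Hedged sketch (assumption): typical calico-node DaemonSet volume stanza matching
# the host paths reported by the volume reconciler above; not the manifest used here.
volumes:
  - name: lib-modules
    hostPath: {path: /lib/modules}
  - name: xtables-lock
    hostPath: {path: /run/xtables.lock, type: FileOrCreate}
  - name: var-run-calico
    hostPath: {path: /var/run/calico}
  - name: var-lib-calico
    hostPath: {path: /var/lib/calico}
  - name: cni-bin-dir
    hostPath: {path: /opt/cni/bin}
  - name: cni-net-dir
    hostPath: {path: /etc/cni/net.d}
  - name: cni-log-dir
    hostPath: {path: /var/log/calico/cni}
  - name: policysync
    hostPath: {path: /var/run/nodeagent, type: DirectoryOrCreate}
  - name: flexvol-driver-host
    hostPath:
      path: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
      type: DirectoryOrCreate
  - name: node-certs
    secret: {secretName: node-certs}
  - name: tigera-ca-bundle
    configMap: {name: tigera-ca-bundle}
```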
May 8 00:22:44.359203 kubelet[1746]: I0508 00:22:44.359162 1746 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:22:44.370902 kubelet[1746]: I0508 00:22:44.370346 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/942ad917-54f4-4f75-8f12-0b1cd1f1f20c-kube-proxy\") pod \"kube-proxy-zbr7g\" (UID: \"942ad917-54f4-4f75-8f12-0b1cd1f1f20c\") " pod="kube-system/kube-proxy-zbr7g" May 8 00:22:44.370902 kubelet[1746]: I0508 00:22:44.370402 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vz6n\" (UniqueName: \"kubernetes.io/projected/942ad917-54f4-4f75-8f12-0b1cd1f1f20c-kube-api-access-6vz6n\") pod \"kube-proxy-zbr7g\" (UID: \"942ad917-54f4-4f75-8f12-0b1cd1f1f20c\") " pod="kube-system/kube-proxy-zbr7g" May 8 00:22:44.370902 kubelet[1746]: I0508 00:22:44.370519 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/942ad917-54f4-4f75-8f12-0b1cd1f1f20c-lib-modules\") pod \"kube-proxy-zbr7g\" (UID: \"942ad917-54f4-4f75-8f12-0b1cd1f1f20c\") " pod="kube-system/kube-proxy-zbr7g" May 8 00:22:44.370902 kubelet[1746]: I0508 00:22:44.370657 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/942ad917-54f4-4f75-8f12-0b1cd1f1f20c-xtables-lock\") pod \"kube-proxy-zbr7g\" (UID: \"942ad917-54f4-4f75-8f12-0b1cd1f1f20c\") " pod="kube-system/kube-proxy-zbr7g" May 8 00:22:44.371400 kubelet[1746]: E0508 00:22:44.371380 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.371437 kubelet[1746]: W0508 00:22:44.371400 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.371437 kubelet[1746]: E0508 00:22:44.371425 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.371614 kubelet[1746]: E0508 00:22:44.371598 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.371614 kubelet[1746]: W0508 00:22:44.371613 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.371669 kubelet[1746]: E0508 00:22:44.371628 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.371862 kubelet[1746]: E0508 00:22:44.371845 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.371903 kubelet[1746]: W0508 00:22:44.371862 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.371903 kubelet[1746]: E0508 00:22:44.371878 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.372123 kubelet[1746]: E0508 00:22:44.372112 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.372123 kubelet[1746]: W0508 00:22:44.372123 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.372687 kubelet[1746]: E0508 00:22:44.372136 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.372687 kubelet[1746]: E0508 00:22:44.372280 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.372687 kubelet[1746]: W0508 00:22:44.372287 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.372687 kubelet[1746]: E0508 00:22:44.372295 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.372687 kubelet[1746]: E0508 00:22:44.372487 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.372687 kubelet[1746]: W0508 00:22:44.372498 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.372687 kubelet[1746]: E0508 00:22:44.372517 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.375076 kubelet[1746]: E0508 00:22:44.374965 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.375076 kubelet[1746]: W0508 00:22:44.374986 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.375076 kubelet[1746]: E0508 00:22:44.375030 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.375690 kubelet[1746]: E0508 00:22:44.375185 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.375690 kubelet[1746]: W0508 00:22:44.375194 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.375690 kubelet[1746]: E0508 00:22:44.375277 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.375690 kubelet[1746]: E0508 00:22:44.375330 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.375690 kubelet[1746]: W0508 00:22:44.375336 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.375690 kubelet[1746]: E0508 00:22:44.375468 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.375690 kubelet[1746]: E0508 00:22:44.375471 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.375690 kubelet[1746]: W0508 00:22:44.375501 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.375690 kubelet[1746]: E0508 00:22:44.375558 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.375931 kubelet[1746]: E0508 00:22:44.375915 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.376001 kubelet[1746]: W0508 00:22:44.375989 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.376098 kubelet[1746]: E0508 00:22:44.376072 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.376315 kubelet[1746]: E0508 00:22:44.376298 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.376391 kubelet[1746]: W0508 00:22:44.376378 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.376469 kubelet[1746]: E0508 00:22:44.376450 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.376679 kubelet[1746]: E0508 00:22:44.376664 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.376740 kubelet[1746]: W0508 00:22:44.376728 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.376813 kubelet[1746]: E0508 00:22:44.376795 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.377049 kubelet[1746]: E0508 00:22:44.377032 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.377171 kubelet[1746]: W0508 00:22:44.377156 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.377255 kubelet[1746]: E0508 00:22:44.377235 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.377444 kubelet[1746]: E0508 00:22:44.377430 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.377508 kubelet[1746]: W0508 00:22:44.377496 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.377590 kubelet[1746]: E0508 00:22:44.377572 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.377785 kubelet[1746]: E0508 00:22:44.377770 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.377854 kubelet[1746]: W0508 00:22:44.377842 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.377924 kubelet[1746]: E0508 00:22:44.377907 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.378162 kubelet[1746]: E0508 00:22:44.378147 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.378225 kubelet[1746]: W0508 00:22:44.378212 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.378371 kubelet[1746]: E0508 00:22:44.378348 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.378574 kubelet[1746]: E0508 00:22:44.378451 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.378574 kubelet[1746]: W0508 00:22:44.378463 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.378671 kubelet[1746]: E0508 00:22:44.378649 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.378756 kubelet[1746]: E0508 00:22:44.378742 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.378805 kubelet[1746]: W0508 00:22:44.378794 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.378933 kubelet[1746]: E0508 00:22:44.378908 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.379104 kubelet[1746]: E0508 00:22:44.379091 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.379262 kubelet[1746]: W0508 00:22:44.379175 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.379262 kubelet[1746]: E0508 00:22:44.379208 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.379454 kubelet[1746]: E0508 00:22:44.379369 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.379454 kubelet[1746]: W0508 00:22:44.379382 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.379454 kubelet[1746]: E0508 00:22:44.379408 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.379601 kubelet[1746]: E0508 00:22:44.379587 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.379664 kubelet[1746]: W0508 00:22:44.379652 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.379814 kubelet[1746]: E0508 00:22:44.379785 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.379980 kubelet[1746]: E0508 00:22:44.379966 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.380118 kubelet[1746]: W0508 00:22:44.380033 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.380170 kubelet[1746]: E0508 00:22:44.380119 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.380425 kubelet[1746]: E0508 00:22:44.380329 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.380425 kubelet[1746]: W0508 00:22:44.380344 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.380636 kubelet[1746]: E0508 00:22:44.380607 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.380814 kubelet[1746]: E0508 00:22:44.380795 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.381095 kubelet[1746]: W0508 00:22:44.380923 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.381095 kubelet[1746]: E0508 00:22:44.380980 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.381303 kubelet[1746]: E0508 00:22:44.381289 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.381368 kubelet[1746]: W0508 00:22:44.381355 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.381474 kubelet[1746]: E0508 00:22:44.381450 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.381715 kubelet[1746]: E0508 00:22:44.381639 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.381715 kubelet[1746]: W0508 00:22:44.381652 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.381715 kubelet[1746]: E0508 00:22:44.381679 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.382016 kubelet[1746]: E0508 00:22:44.381901 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.382016 kubelet[1746]: W0508 00:22:44.381911 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.382121 kubelet[1746]: E0508 00:22:44.382104 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.382195 kubelet[1746]: E0508 00:22:44.382182 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.382314 kubelet[1746]: W0508 00:22:44.382236 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.382314 kubelet[1746]: E0508 00:22:44.382275 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.382500 kubelet[1746]: E0508 00:22:44.382487 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.382647 kubelet[1746]: W0508 00:22:44.382565 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.382647 kubelet[1746]: E0508 00:22:44.382633 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.382806 kubelet[1746]: E0508 00:22:44.382795 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.382930 kubelet[1746]: W0508 00:22:44.382863 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.383066 kubelet[1746]: E0508 00:22:44.383000 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.383255 kubelet[1746]: E0508 00:22:44.383153 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.383255 kubelet[1746]: W0508 00:22:44.383165 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.383255 kubelet[1746]: E0508 00:22:44.383191 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.383623 kubelet[1746]: E0508 00:22:44.383433 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.383623 kubelet[1746]: W0508 00:22:44.383444 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.383623 kubelet[1746]: E0508 00:22:44.383530 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.383791 kubelet[1746]: E0508 00:22:44.383777 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.383845 kubelet[1746]: W0508 00:22:44.383834 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.383921 kubelet[1746]: E0508 00:22:44.383902 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.384122 kubelet[1746]: E0508 00:22:44.384109 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.384194 kubelet[1746]: W0508 00:22:44.384182 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.384263 kubelet[1746]: E0508 00:22:44.384247 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.384560 kubelet[1746]: E0508 00:22:44.384454 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.384560 kubelet[1746]: W0508 00:22:44.384467 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.384560 kubelet[1746]: E0508 00:22:44.384490 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.384811 kubelet[1746]: E0508 00:22:44.384697 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.384896 kubelet[1746]: W0508 00:22:44.384883 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.385299 kubelet[1746]: E0508 00:22:44.385264 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.385763 kubelet[1746]: E0508 00:22:44.385505 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.385763 kubelet[1746]: W0508 00:22:44.385552 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.385763 kubelet[1746]: E0508 00:22:44.385589 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.386364 kubelet[1746]: E0508 00:22:44.386284 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.386458 kubelet[1746]: W0508 00:22:44.386442 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.386615 kubelet[1746]: E0508 00:22:44.386548 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.386760 kubelet[1746]: E0508 00:22:44.386726 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.386760 kubelet[1746]: W0508 00:22:44.386756 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.386826 kubelet[1746]: E0508 00:22:44.386783 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.387015 kubelet[1746]: E0508 00:22:44.386986 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.387015 kubelet[1746]: W0508 00:22:44.387000 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.387015 kubelet[1746]: E0508 00:22:44.387010 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.395876 kubelet[1746]: E0508 00:22:44.395845 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.395876 kubelet[1746]: W0508 00:22:44.395870 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.395943 kubelet[1746]: E0508 00:22:44.395887 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.471707 kubelet[1746]: E0508 00:22:44.471582 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.471707 kubelet[1746]: W0508 00:22:44.471605 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.471707 kubelet[1746]: E0508 00:22:44.471622 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.471897 kubelet[1746]: E0508 00:22:44.471841 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.471897 kubelet[1746]: W0508 00:22:44.471851 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.471897 kubelet[1746]: E0508 00:22:44.471861 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.472060 kubelet[1746]: E0508 00:22:44.472043 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.472060 kubelet[1746]: W0508 00:22:44.472056 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.472126 kubelet[1746]: E0508 00:22:44.472069 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.472247 kubelet[1746]: E0508 00:22:44.472235 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.472247 kubelet[1746]: W0508 00:22:44.472245 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.472302 kubelet[1746]: E0508 00:22:44.472257 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.472443 kubelet[1746]: E0508 00:22:44.472415 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.472443 kubelet[1746]: W0508 00:22:44.472427 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.472443 kubelet[1746]: E0508 00:22:44.472437 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.472591 kubelet[1746]: E0508 00:22:44.472569 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.472591 kubelet[1746]: W0508 00:22:44.472578 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.472591 kubelet[1746]: E0508 00:22:44.472586 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.472718 kubelet[1746]: E0508 00:22:44.472708 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.472718 kubelet[1746]: W0508 00:22:44.472717 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.472770 kubelet[1746]: E0508 00:22:44.472727 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.472896 kubelet[1746]: E0508 00:22:44.472885 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.472896 kubelet[1746]: W0508 00:22:44.472895 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.472948 kubelet[1746]: E0508 00:22:44.472906 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.473073 kubelet[1746]: E0508 00:22:44.473062 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.473073 kubelet[1746]: W0508 00:22:44.473072 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.473132 kubelet[1746]: E0508 00:22:44.473083 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.473284 kubelet[1746]: E0508 00:22:44.473271 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.473284 kubelet[1746]: W0508 00:22:44.473283 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.473376 kubelet[1746]: E0508 00:22:44.473347 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.473441 kubelet[1746]: E0508 00:22:44.473427 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.473441 kubelet[1746]: W0508 00:22:44.473438 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.473517 kubelet[1746]: E0508 00:22:44.473503 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.473572 kubelet[1746]: E0508 00:22:44.473561 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.473572 kubelet[1746]: W0508 00:22:44.473570 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.473653 kubelet[1746]: E0508 00:22:44.473634 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.473707 kubelet[1746]: E0508 00:22:44.473697 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.473707 kubelet[1746]: W0508 00:22:44.473705 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.473757 kubelet[1746]: E0508 00:22:44.473715 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.473963 kubelet[1746]: E0508 00:22:44.473940 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.473963 kubelet[1746]: W0508 00:22:44.473951 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.474034 kubelet[1746]: E0508 00:22:44.473972 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.474119 kubelet[1746]: E0508 00:22:44.474104 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.474119 kubelet[1746]: W0508 00:22:44.474119 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.474175 kubelet[1746]: E0508 00:22:44.474130 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.474274 kubelet[1746]: E0508 00:22:44.474254 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.474274 kubelet[1746]: W0508 00:22:44.474265 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.474274 kubelet[1746]: E0508 00:22:44.474273 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.474419 kubelet[1746]: E0508 00:22:44.474408 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.474419 kubelet[1746]: W0508 00:22:44.474419 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.474511 kubelet[1746]: E0508 00:22:44.474429 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.475000 kubelet[1746]: E0508 00:22:44.474987 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.475000 kubelet[1746]: W0508 00:22:44.475001 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.475142 kubelet[1746]: E0508 00:22:44.475060 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.475142 kubelet[1746]: E0508 00:22:44.475139 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.475194 kubelet[1746]: W0508 00:22:44.475147 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.475194 kubelet[1746]: E0508 00:22:44.475157 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.475304 kubelet[1746]: E0508 00:22:44.475295 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.475304 kubelet[1746]: W0508 00:22:44.475304 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.475348 kubelet[1746]: E0508 00:22:44.475312 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:44.475467 kubelet[1746]: E0508 00:22:44.475457 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.475499 kubelet[1746]: W0508 00:22:44.475467 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.475499 kubelet[1746]: E0508 00:22:44.475475 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.485485 kubelet[1746]: E0508 00:22:44.485454 1746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:44.485485 kubelet[1746]: W0508 00:22:44.485473 1746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:44.485485 kubelet[1746]: E0508 00:22:44.485485 1746 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:44.576261 kubelet[1746]: E0508 00:22:44.576121 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:44.580537 kubelet[1746]: E0508 00:22:44.580333 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:44.582135 containerd[1441]: time="2025-05-08T00:22:44.581668086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbr7g,Uid:942ad917-54f4-4f75-8f12-0b1cd1f1f20c,Namespace:kube-system,Attempt:0,}" May 8 00:22:44.582489 containerd[1441]: time="2025-05-08T00:22:44.582460875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27bqw,Uid:78f5beb4-29d2-41a1-bfe5-b1cf061196b2,Namespace:calico-system,Attempt:0,}" May 8 00:22:45.247095 kubelet[1746]: E0508 00:22:45.247043 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:45.296433 containerd[1441]: time="2025-05-08T00:22:45.296119124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:22:45.298163 containerd[1441]: time="2025-05-08T00:22:45.297734620Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:22:45.300075 containerd[1441]: time="2025-05-08T00:22:45.300031650Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:22:45.300880 containerd[1441]: time="2025-05-08T00:22:45.300851793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 8 00:22:45.303843 containerd[1441]: time="2025-05-08T00:22:45.302664659Z" level=info msg="ImageCreate event 
name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:22:45.306216 containerd[1441]: time="2025-05-08T00:22:45.306170411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:22:45.307113 containerd[1441]: time="2025-05-08T00:22:45.307071202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 725.318649ms" May 8 00:22:45.309603 containerd[1441]: time="2025-05-08T00:22:45.309573228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 726.995895ms" May 8 00:22:45.385315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4208060676.mount: Deactivated successfully. May 8 00:22:45.423465 containerd[1441]: time="2025-05-08T00:22:45.423216063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:45.423597 containerd[1441]: time="2025-05-08T00:22:45.423441201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:45.423597 containerd[1441]: time="2025-05-08T00:22:45.423453358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:45.423597 containerd[1441]: time="2025-05-08T00:22:45.423521213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:45.423924 containerd[1441]: time="2025-05-08T00:22:45.423842692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:45.423924 containerd[1441]: time="2025-05-08T00:22:45.423884446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:45.424707 containerd[1441]: time="2025-05-08T00:22:45.424581671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:45.424783 containerd[1441]: time="2025-05-08T00:22:45.424698193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:45.511154 systemd[1]: Started cri-containerd-18ce9b19c2740ba2709d8211b83bd81dae038ee8a22a3422b57d772897eda021.scope - libcontainer container 18ce9b19c2740ba2709d8211b83bd81dae038ee8a22a3422b57d772897eda021. 
May 8 00:22:45.512274 systemd[1]: Started cri-containerd-a1f5eb305ea07fffe9c31034372cf6b684fa74c6bc857f47e7c4625eb9c62736.scope - libcontainer container a1f5eb305ea07fffe9c31034372cf6b684fa74c6bc857f47e7c4625eb9c62736. May 8 00:22:45.538143 containerd[1441]: time="2025-05-08T00:22:45.538098172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27bqw,Uid:78f5beb4-29d2-41a1-bfe5-b1cf061196b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1f5eb305ea07fffe9c31034372cf6b684fa74c6bc857f47e7c4625eb9c62736\"" May 8 00:22:45.538246 containerd[1441]: time="2025-05-08T00:22:45.538235114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbr7g,Uid:942ad917-54f4-4f75-8f12-0b1cd1f1f20c,Namespace:kube-system,Attempt:0,} returns sandbox id \"18ce9b19c2740ba2709d8211b83bd81dae038ee8a22a3422b57d772897eda021\"" May 8 00:22:45.539246 kubelet[1746]: E0508 00:22:45.539219 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:45.539581 kubelet[1746]: E0508 00:22:45.539558 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:45.540716 containerd[1441]: time="2025-05-08T00:22:45.540687599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:22:46.247192 kubelet[1746]: E0508 00:22:46.247167 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:46.365188 kubelet[1746]: E0508 00:22:46.364901 1746 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpp8d" podUID="188278af-6012-468a-b809-e6cd106f483f" May 8 00:22:46.523810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254676089.mount: Deactivated successfully. 
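The recurring dns.go warning means the node's resolv.conf lists more nameservers than the kubelet will pass through to pods: it keeps the first entries up to the conventional resolv.conf limit of three and drops the rest, which is why the applied line in the log is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A small sketch of that truncation, treating the limit of three as an assumption of the sketch:

package main

import "fmt"

// maxNameservers mirrors the conventional resolv.conf limit of three
// nameservers; the exact constant is an assumption of this sketch.
const maxNameservers = 3

// applyNameserverLimit keeps at most maxNameservers entries, as the
// "Nameserver limits exceeded" warnings above describe.
func applyNameserverLimit(nameservers []string) []string {
	if len(nameservers) <= maxNameservers {
		return nameservers
	}
	return nameservers[:maxNameservers]
}

func main() {
	// Example resolv.conf contents with one nameserver too many.
	resolvers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	fmt.Println(applyNameserverLimit(resolvers)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}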
May 8 00:22:46.590133 containerd[1441]: time="2025-05-08T00:22:46.589741382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:46.591084 containerd[1441]: time="2025-05-08T00:22:46.591040882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6492223" May 8 00:22:46.591869 containerd[1441]: time="2025-05-08T00:22:46.591783675Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:46.594011 containerd[1441]: time="2025-05-08T00:22:46.593830659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:46.596960 containerd[1441]: time="2025-05-08T00:22:46.595532431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.054666639s" May 8 00:22:46.596960 containerd[1441]: time="2025-05-08T00:22:46.595573425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 8 00:22:46.597989 containerd[1441]: time="2025-05-08T00:22:46.597875125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:22:46.599449 containerd[1441]: time="2025-05-08T00:22:46.599398286Z" level=info msg="CreateContainer within sandbox \"a1f5eb305ea07fffe9c31034372cf6b684fa74c6bc857f47e7c4625eb9c62736\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:22:46.615142 containerd[1441]: time="2025-05-08T00:22:46.615080158Z" level=info msg="CreateContainer within sandbox \"a1f5eb305ea07fffe9c31034372cf6b684fa74c6bc857f47e7c4625eb9c62736\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ddcadee6665d2a5dc834fc45d7b569cf5286a77804481709b7fef6d92e1c8d67\"" May 8 00:22:46.615930 containerd[1441]: time="2025-05-08T00:22:46.615786171Z" level=info msg="StartContainer for \"ddcadee6665d2a5dc834fc45d7b569cf5286a77804481709b7fef6d92e1c8d67\"" May 8 00:22:46.646204 systemd[1]: Started cri-containerd-ddcadee6665d2a5dc834fc45d7b569cf5286a77804481709b7fef6d92e1c8d67.scope - libcontainer container ddcadee6665d2a5dc834fc45d7b569cf5286a77804481709b7fef6d92e1c8d67. May 8 00:22:46.667174 containerd[1441]: time="2025-05-08T00:22:46.667122818Z" level=info msg="StartContainer for \"ddcadee6665d2a5dc834fc45d7b569cf5286a77804481709b7fef6d92e1c8d67\" returns successfully" May 8 00:22:46.689196 systemd[1]: cri-containerd-ddcadee6665d2a5dc834fc45d7b569cf5286a77804481709b7fef6d92e1c8d67.scope: Deactivated successfully. 
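The containerd messages above trace the usual sequence for the flexvol-driver init container: pull the image, create the container inside the already-running calico-node sandbox, start it, and let it exit once it has copied the FlexVolume binary into place (hence the cri-containerd scope deactivating right after the successful start). A hypothetical sketch of that flow; the runtime interface and helper names below are assumptions for illustration, not containerd's or the CRI's real API:

package main

import (
	"context"
	"fmt"
)

// runtime is a hypothetical, minimal stand-in for a CRI-style runtime client;
// the method names only mirror the operations visible in the log lines above.
type runtime interface {
	PullImage(ctx context.Context, image string) (string, error)
	CreateContainer(ctx context.Context, sandboxID, name, imageRef string) (string, error)
	StartContainer(ctx context.Context, containerID string) error
}

// runInitContainer follows the logged flow: pull, create within an existing
// pod sandbox, start, then let the short-lived container exit on its own.
func runInitContainer(ctx context.Context, rt runtime, sandboxID, name, image string) (string, error) {
	ref, err := rt.PullImage(ctx, image)
	if err != nil {
		return "", fmt.Errorf("pull %s: %w", image, err)
	}
	id, err := rt.CreateContainer(ctx, sandboxID, name, ref)
	if err != nil {
		return "", fmt.Errorf("create %s: %w", name, err)
	}
	return id, rt.StartContainer(ctx, id)
}

// fakeRuntime just prints what it is asked to do.
type fakeRuntime struct{}

func (fakeRuntime) PullImage(_ context.Context, image string) (string, error) {
	fmt.Println("pulling", image)
	return "sha256:example-image-id", nil
}

func (fakeRuntime) CreateContainer(_ context.Context, sandboxID, name, ref string) (string, error) {
	fmt.Printf("creating %s in sandbox %s from %s\n", name, sandboxID, ref)
	return "example-container-id", nil
}

func (fakeRuntime) StartContainer(_ context.Context, id string) error {
	fmt.Println("starting", id)
	return nil
}

func main() {
	id, err := runInitContainer(context.Background(), fakeRuntime{},
		"example-sandbox-id", "flexvol-driver",
		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3")
	fmt.Println(id, err)
}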
May 8 00:22:46.729736 containerd[1441]: time="2025-05-08T00:22:46.729677184Z" level=info msg="shim disconnected" id=ddcadee6665d2a5dc834fc45d7b569cf5286a77804481709b7fef6d92e1c8d67 namespace=k8s.io May 8 00:22:46.729736 containerd[1441]: time="2025-05-08T00:22:46.729729630Z" level=warning msg="cleaning up after shim disconnected" id=ddcadee6665d2a5dc834fc45d7b569cf5286a77804481709b7fef6d92e1c8d67 namespace=k8s.io May 8 00:22:46.729736 containerd[1441]: time="2025-05-08T00:22:46.729739809Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:22:47.248562 kubelet[1746]: E0508 00:22:47.248517 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:47.375020 kubelet[1746]: E0508 00:22:47.374933 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:47.504164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddcadee6665d2a5dc834fc45d7b569cf5286a77804481709b7fef6d92e1c8d67-rootfs.mount: Deactivated successfully. May 8 00:22:47.590487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619043407.mount: Deactivated successfully. May 8 00:22:47.794269 containerd[1441]: time="2025-05-08T00:22:47.794148478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:47.794673 containerd[1441]: time="2025-05-08T00:22:47.794628127Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 8 00:22:47.795529 containerd[1441]: time="2025-05-08T00:22:47.795480028Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:47.797281 containerd[1441]: time="2025-05-08T00:22:47.797249923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:47.798161 containerd[1441]: time="2025-05-08T00:22:47.798093825Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.200175663s" May 8 00:22:47.798161 containerd[1441]: time="2025-05-08T00:22:47.798124942Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 8 00:22:47.799187 containerd[1441]: time="2025-05-08T00:22:47.799052726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:22:47.800053 containerd[1441]: time="2025-05-08T00:22:47.800019863Z" level=info msg="CreateContainer within sandbox \"18ce9b19c2740ba2709d8211b83bd81dae038ee8a22a3422b57d772897eda021\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:22:47.810776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932867806.mount: Deactivated successfully. 
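Each "Pulled image ... in <duration>" line pairs the image size with the time containerd spent pulling it, so the effective pull rate can be read straight off the log; for kube-proxy above that is 25,774,724 bytes in 1.200175663 s, roughly 20.5 MiB/s. A short worked example of that arithmetic:

package main

import (
	"fmt"
	"time"
)

// pullRate converts a logged image size and pull duration into bytes per second.
func pullRate(sizeBytes int64, d time.Duration) float64 {
	return float64(sizeBytes) / d.Seconds()
}

func main() {
	// Values taken from the "Pulled image registry.k8s.io/kube-proxy:v1.30.12"
	// line above: size 25774724 bytes, pulled in 1.200175663s.
	d, _ := time.ParseDuration("1.200175663s")
	rate := pullRate(25774724, d)
	fmt.Printf("%.0f bytes/s (%.1f MiB/s)\n", rate, rate/(1<<20))
}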
May 8 00:22:47.813571 containerd[1441]: time="2025-05-08T00:22:47.813527355Z" level=info msg="CreateContainer within sandbox \"18ce9b19c2740ba2709d8211b83bd81dae038ee8a22a3422b57d772897eda021\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"998ae002a7b6f097662f7f362afa08dc1b6a838733d3f8fac557fbb6046fe1b2\"" May 8 00:22:47.814340 containerd[1441]: time="2025-05-08T00:22:47.814231191Z" level=info msg="StartContainer for \"998ae002a7b6f097662f7f362afa08dc1b6a838733d3f8fac557fbb6046fe1b2\"" May 8 00:22:47.846165 systemd[1]: Started cri-containerd-998ae002a7b6f097662f7f362afa08dc1b6a838733d3f8fac557fbb6046fe1b2.scope - libcontainer container 998ae002a7b6f097662f7f362afa08dc1b6a838733d3f8fac557fbb6046fe1b2. May 8 00:22:47.866592 containerd[1441]: time="2025-05-08T00:22:47.866547609Z" level=info msg="StartContainer for \"998ae002a7b6f097662f7f362afa08dc1b6a838733d3f8fac557fbb6046fe1b2\" returns successfully" May 8 00:22:48.249222 kubelet[1746]: E0508 00:22:48.249096 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:48.364862 kubelet[1746]: E0508 00:22:48.364806 1746 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpp8d" podUID="188278af-6012-468a-b809-e6cd106f483f" May 8 00:22:48.377729 kubelet[1746]: E0508 00:22:48.377395 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:49.249300 kubelet[1746]: E0508 00:22:49.249231 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:49.379751 kubelet[1746]: E0508 00:22:49.379676 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:49.885765 containerd[1441]: time="2025-05-08T00:22:49.885720657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:49.886655 containerd[1441]: time="2025-05-08T00:22:49.886511014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 8 00:22:49.887398 containerd[1441]: time="2025-05-08T00:22:49.887340135Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:49.889395 containerd[1441]: time="2025-05-08T00:22:49.889343555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:49.890257 containerd[1441]: time="2025-05-08T00:22:49.890126382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.091042128s" May 8 00:22:49.890257 
containerd[1441]: time="2025-05-08T00:22:49.890157656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 8 00:22:49.892140 containerd[1441]: time="2025-05-08T00:22:49.892111675Z" level=info msg="CreateContainer within sandbox \"a1f5eb305ea07fffe9c31034372cf6b684fa74c6bc857f47e7c4625eb9c62736\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:22:49.902669 containerd[1441]: time="2025-05-08T00:22:49.902626146Z" level=info msg="CreateContainer within sandbox \"a1f5eb305ea07fffe9c31034372cf6b684fa74c6bc857f47e7c4625eb9c62736\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"30d3fde7b6cc3a7a15902c7ce4a30c3ea70e5672be7cf19c5bf8f969b25391cf\"" May 8 00:22:49.903252 containerd[1441]: time="2025-05-08T00:22:49.903207140Z" level=info msg="StartContainer for \"30d3fde7b6cc3a7a15902c7ce4a30c3ea70e5672be7cf19c5bf8f969b25391cf\"" May 8 00:22:49.937128 systemd[1]: Started cri-containerd-30d3fde7b6cc3a7a15902c7ce4a30c3ea70e5672be7cf19c5bf8f969b25391cf.scope - libcontainer container 30d3fde7b6cc3a7a15902c7ce4a30c3ea70e5672be7cf19c5bf8f969b25391cf. May 8 00:22:49.960792 containerd[1441]: time="2025-05-08T00:22:49.960743188Z" level=info msg="StartContainer for \"30d3fde7b6cc3a7a15902c7ce4a30c3ea70e5672be7cf19c5bf8f969b25391cf\" returns successfully" May 8 00:22:50.250149 kubelet[1746]: E0508 00:22:50.250030 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:50.365746 kubelet[1746]: E0508 00:22:50.365617 1746 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpp8d" podUID="188278af-6012-468a-b809-e6cd106f483f" May 8 00:22:50.383942 kubelet[1746]: E0508 00:22:50.383913 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:50.399389 kubelet[1746]: I0508 00:22:50.397162 1746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zbr7g" podStartSLOduration=6.138710009 podStartE2EDuration="8.397146314s" podCreationTimestamp="2025-05-08 00:22:42 +0000 UTC" firstStartedPulling="2025-05-08 00:22:45.540483556 +0000 UTC m=+4.178716107" lastFinishedPulling="2025-05-08 00:22:47.798919861 +0000 UTC m=+6.437152412" observedRunningTime="2025-05-08 00:22:48.384750801 +0000 UTC m=+7.022983352" watchObservedRunningTime="2025-05-08 00:22:50.397146314 +0000 UTC m=+9.035378865" May 8 00:22:50.401330 systemd[1]: cri-containerd-30d3fde7b6cc3a7a15902c7ce4a30c3ea70e5672be7cf19c5bf8f969b25391cf.scope: Deactivated successfully. May 8 00:22:50.416696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30d3fde7b6cc3a7a15902c7ce4a30c3ea70e5672be7cf19c5bf8f969b25391cf-rootfs.mount: Deactivated successfully. 
May 8 00:22:50.430248 kubelet[1746]: I0508 00:22:50.430212 1746 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:22:50.625744 containerd[1441]: time="2025-05-08T00:22:50.625616206Z" level=info msg="shim disconnected" id=30d3fde7b6cc3a7a15902c7ce4a30c3ea70e5672be7cf19c5bf8f969b25391cf namespace=k8s.io May 8 00:22:50.625744 containerd[1441]: time="2025-05-08T00:22:50.625670017Z" level=warning msg="cleaning up after shim disconnected" id=30d3fde7b6cc3a7a15902c7ce4a30c3ea70e5672be7cf19c5bf8f969b25391cf namespace=k8s.io May 8 00:22:50.625744 containerd[1441]: time="2025-05-08T00:22:50.625694133Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:22:51.251175 kubelet[1746]: E0508 00:22:51.251133 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:51.386931 kubelet[1746]: E0508 00:22:51.386901 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:51.387630 containerd[1441]: time="2025-05-08T00:22:51.387602511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:22:52.251476 kubelet[1746]: E0508 00:22:52.251426 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:52.377892 systemd[1]: Created slice kubepods-besteffort-pod188278af_6012_468a_b809_e6cd106f483f.slice - libcontainer container kubepods-besteffort-pod188278af_6012_468a_b809_e6cd106f483f.slice. May 8 00:22:52.382332 containerd[1441]: time="2025-05-08T00:22:52.382171413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpp8d,Uid:188278af-6012-468a-b809-e6cd106f483f,Namespace:calico-system,Attempt:0,}" May 8 00:22:52.521558 containerd[1441]: time="2025-05-08T00:22:52.521415343Z" level=error msg="Failed to destroy network for sandbox \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:52.522941 containerd[1441]: time="2025-05-08T00:22:52.522890153Z" level=error msg="encountered an error cleaning up failed sandbox \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:52.523022 containerd[1441]: time="2025-05-08T00:22:52.522969580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpp8d,Uid:188278af-6012-468a-b809-e6cd106f483f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:52.523442 kubelet[1746]: E0508 00:22:52.523398 1746 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:52.523493 kubelet[1746]: E0508 00:22:52.523470 1746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpp8d" May 8 00:22:52.523519 kubelet[1746]: E0508 00:22:52.523489 1746 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpp8d" May 8 00:22:52.523498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae-shm.mount: Deactivated successfully. May 8 00:22:52.523612 kubelet[1746]: E0508 00:22:52.523554 1746 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpp8d_calico-system(188278af-6012-468a-b809-e6cd106f483f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpp8d_calico-system(188278af-6012-468a-b809-e6cd106f483f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpp8d" podUID="188278af-6012-468a-b809-e6cd106f483f" May 8 00:22:53.251588 kubelet[1746]: E0508 00:22:53.251556 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:53.325600 kubelet[1746]: I0508 00:22:53.325559 1746 topology_manager.go:215] "Topology Admit Handler" podUID="ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6" podNamespace="default" podName="nginx-deployment-85f456d6dd-n6fwg" May 8 00:22:53.332137 kubelet[1746]: I0508 00:22:53.332103 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q8pf\" (UniqueName: \"kubernetes.io/projected/ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6-kube-api-access-4q8pf\") pod \"nginx-deployment-85f456d6dd-n6fwg\" (UID: \"ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6\") " pod="default/nginx-deployment-85f456d6dd-n6fwg" May 8 00:22:53.335734 systemd[1]: Created slice kubepods-besteffort-podff5f541a_02e9_4632_9a4c_7bbd8a4c74f6.slice - libcontainer container kubepods-besteffort-podff5f541a_02e9_4632_9a4c_7bbd8a4c74f6.slice. 
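Every failed sandbox in this stretch fails for the same reason: the Calico CNI plugin stats /var/lib/calico/nodename, a file that only exists once the calico-node container is running and has mounted /var/lib/calico, so pod networking cannot be set up yet and the kubelet keeps retrying. An illustrative sketch of that readiness check; the helper name is an assumption, not the plugin's own code:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// nodenameFile is written by the calico-node container once it is running
// and has mounted /var/lib/calico into place.
const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeReady is a hypothetical helper mirroring the check behind the
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
func calicoNodeReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			return fmt.Errorf("%s missing: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
		}
		return err
	}
	return nil
}

func main() {
	if err := calicoNodeReady(); err != nil {
		fmt.Println("network not ready:", err)
		return
	}
	fmt.Println("calico node is ready")
}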
May 8 00:22:53.390751 kubelet[1746]: I0508 00:22:53.390697 1746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" May 8 00:22:53.391645 containerd[1441]: time="2025-05-08T00:22:53.391564219Z" level=info msg="StopPodSandbox for \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\"" May 8 00:22:53.391809 containerd[1441]: time="2025-05-08T00:22:53.391775763Z" level=info msg="Ensure that sandbox 3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae in task-service has been cleanup successfully" May 8 00:22:53.416112 containerd[1441]: time="2025-05-08T00:22:53.416064665Z" level=error msg="StopPodSandbox for \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\" failed" error="failed to destroy network for sandbox \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:53.416343 kubelet[1746]: E0508 00:22:53.416265 1746 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" May 8 00:22:53.416400 kubelet[1746]: E0508 00:22:53.416320 1746 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae"} May 8 00:22:53.416400 kubelet[1746]: E0508 00:22:53.416377 1746 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"188278af-6012-468a-b809-e6cd106f483f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:22:53.416546 kubelet[1746]: E0508 00:22:53.416398 1746 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"188278af-6012-468a-b809-e6cd106f483f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpp8d" podUID="188278af-6012-468a-b809-e6cd106f483f" May 8 00:22:53.639134 containerd[1441]: time="2025-05-08T00:22:53.639029237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-n6fwg,Uid:ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6,Namespace:default,Attempt:0,}" May 8 00:22:53.801890 containerd[1441]: time="2025-05-08T00:22:53.801266028Z" level=error msg="Failed to destroy network for sandbox \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:53.801890 containerd[1441]: time="2025-05-08T00:22:53.801614611Z" level=error msg="encountered an error cleaning up failed sandbox \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:53.801890 containerd[1441]: time="2025-05-08T00:22:53.801665931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-n6fwg,Uid:ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:53.802098 kubelet[1746]: E0508 00:22:53.801899 1746 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:53.802098 kubelet[1746]: E0508 00:22:53.801971 1746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-n6fwg" May 8 00:22:53.802098 kubelet[1746]: E0508 00:22:53.801992 1746 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-n6fwg" May 8 00:22:53.802186 kubelet[1746]: E0508 00:22:53.802036 1746 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-n6fwg_default(ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-n6fwg_default(ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-n6fwg" podUID="ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6" May 8 00:22:53.803270 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50-shm.mount: Deactivated successfully. May 8 00:22:54.190197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount847321191.mount: Deactivated successfully. May 8 00:22:54.216808 containerd[1441]: time="2025-05-08T00:22:54.216758234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:54.217371 containerd[1441]: time="2025-05-08T00:22:54.217335331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 8 00:22:54.218222 containerd[1441]: time="2025-05-08T00:22:54.218189301Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:54.220084 containerd[1441]: time="2025-05-08T00:22:54.220035598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:54.220880 containerd[1441]: time="2025-05-08T00:22:54.220642834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 2.833003147s" May 8 00:22:54.220880 containerd[1441]: time="2025-05-08T00:22:54.220676644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 8 00:22:54.227036 containerd[1441]: time="2025-05-08T00:22:54.227005875Z" level=info msg="CreateContainer within sandbox \"a1f5eb305ea07fffe9c31034372cf6b684fa74c6bc857f47e7c4625eb9c62736\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:22:54.238253 containerd[1441]: time="2025-05-08T00:22:54.238198859Z" level=info msg="CreateContainer within sandbox \"a1f5eb305ea07fffe9c31034372cf6b684fa74c6bc857f47e7c4625eb9c62736\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1ea5923161036561f846a93e9b0d913034b9d9d8e401ebb6990464a3e1700d2d\"" May 8 00:22:54.240021 containerd[1441]: time="2025-05-08T00:22:54.238807731Z" level=info msg="StartContainer for \"1ea5923161036561f846a93e9b0d913034b9d9d8e401ebb6990464a3e1700d2d\"" May 8 00:22:54.252761 kubelet[1746]: E0508 00:22:54.252703 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:54.267126 systemd[1]: Started cri-containerd-1ea5923161036561f846a93e9b0d913034b9d9d8e401ebb6990464a3e1700d2d.scope - libcontainer container 1ea5923161036561f846a93e9b0d913034b9d9d8e401ebb6990464a3e1700d2d. 
May 8 00:22:54.288720 containerd[1441]: time="2025-05-08T00:22:54.288680375Z" level=info msg="StartContainer for \"1ea5923161036561f846a93e9b0d913034b9d9d8e401ebb6990464a3e1700d2d\" returns successfully" May 8 00:22:54.398560 kubelet[1746]: E0508 00:22:54.398150 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:54.399626 kubelet[1746]: I0508 00:22:54.399301 1746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" May 8 00:22:54.399727 containerd[1441]: time="2025-05-08T00:22:54.399676529Z" level=info msg="StopPodSandbox for \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\"" May 8 00:22:54.399869 containerd[1441]: time="2025-05-08T00:22:54.399837399Z" level=info msg="Ensure that sandbox 1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50 in task-service has been cleanup successfully" May 8 00:22:54.429148 containerd[1441]: time="2025-05-08T00:22:54.429081075Z" level=error msg="StopPodSandbox for \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\" failed" error="failed to destroy network for sandbox \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:54.429388 kubelet[1746]: E0508 00:22:54.429331 1746 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" May 8 00:22:54.429449 kubelet[1746]: E0508 00:22:54.429390 1746 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50"} May 8 00:22:54.429449 kubelet[1746]: E0508 00:22:54.429427 1746 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:22:54.429540 kubelet[1746]: E0508 00:22:54.429450 1746 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-n6fwg" podUID="ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6" May 8 00:22:54.511870 kernel: 
wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:22:54.512013 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 00:22:55.253019 kubelet[1746]: E0508 00:22:55.252964 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:55.400935 kubelet[1746]: I0508 00:22:55.400901 1746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:22:55.401678 kubelet[1746]: E0508 00:22:55.401643 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:55.889999 kernel: bpftool[2578]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:22:56.039740 systemd-networkd[1379]: vxlan.calico: Link UP May 8 00:22:56.039744 systemd-networkd[1379]: vxlan.calico: Gained carrier May 8 00:22:56.254161 kubelet[1746]: E0508 00:22:56.254010 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:57.254720 kubelet[1746]: E0508 00:22:57.254651 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:57.939116 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL May 8 00:22:58.255798 kubelet[1746]: E0508 00:22:58.255678 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:22:59.256799 kubelet[1746]: E0508 00:22:59.256748 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:00.257409 kubelet[1746]: E0508 00:23:00.257365 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:01.258415 kubelet[1746]: E0508 00:23:01.258368 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:02.245565 kubelet[1746]: E0508 00:23:02.245524 1746 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:02.258876 kubelet[1746]: E0508 00:23:02.258848 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:03.259832 kubelet[1746]: E0508 00:23:03.259764 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:04.259907 kubelet[1746]: E0508 00:23:04.259862 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:05.260017 kubelet[1746]: E0508 00:23:05.259972 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:05.365834 containerd[1441]: time="2025-05-08T00:23:05.365781846Z" level=info msg="StopPodSandbox for \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\"" May 8 00:23:05.408799 kubelet[1746]: I0508 00:23:05.408745 1746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-27bqw" podStartSLOduration=14.727588654 podStartE2EDuration="23.408719836s" podCreationTimestamp="2025-05-08 00:22:42 +0000 UTC" firstStartedPulling="2025-05-08 
00:22:45.540253571 +0000 UTC m=+4.178486122" lastFinishedPulling="2025-05-08 00:22:54.221384753 +0000 UTC m=+12.859617304" observedRunningTime="2025-05-08 00:22:54.419384305 +0000 UTC m=+13.057616856" watchObservedRunningTime="2025-05-08 00:23:05.408719836 +0000 UTC m=+24.046952387" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.408 [INFO][2686] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.408 [INFO][2686] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" iface="eth0" netns="/var/run/netns/cni-54b27fcf-68cf-331b-99e2-0c2daa8c8390" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.408 [INFO][2686] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" iface="eth0" netns="/var/run/netns/cni-54b27fcf-68cf-331b-99e2-0c2daa8c8390" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.409 [INFO][2686] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" iface="eth0" netns="/var/run/netns/cni-54b27fcf-68cf-331b-99e2-0c2daa8c8390" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.409 [INFO][2686] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.409 [INFO][2686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.469 [INFO][2694] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" HandleID="k8s-pod-network.3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" Workload="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.469 [INFO][2694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.469 [INFO][2694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.478 [WARNING][2694] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" HandleID="k8s-pod-network.3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" Workload="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.478 [INFO][2694] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" HandleID="k8s-pod-network.3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" Workload="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.479 [INFO][2694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:05.482817 containerd[1441]: 2025-05-08 00:23:05.481 [INFO][2686] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae" May 8 00:23:05.483361 containerd[1441]: time="2025-05-08T00:23:05.482986353Z" level=info msg="TearDown network for sandbox \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\" successfully" May 8 00:23:05.483361 containerd[1441]: time="2025-05-08T00:23:05.483017356Z" level=info msg="StopPodSandbox for \"3407a8fc0f5b596551ca45c39f2b5c14dbe53da8495a10bc51e4e834c7b134ae\" returns successfully" May 8 00:23:05.484062 containerd[1441]: time="2025-05-08T00:23:05.483663313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpp8d,Uid:188278af-6012-468a-b809-e6cd106f483f,Namespace:calico-system,Attempt:1,}" May 8 00:23:05.485047 systemd[1]: run-netns-cni\x2d54b27fcf\x2d68cf\x2d331b\x2d99e2\x2d0c2daa8c8390.mount: Deactivated successfully. May 8 00:23:05.585171 systemd-networkd[1379]: calie818a1b21d3: Link UP May 8 00:23:05.585741 systemd-networkd[1379]: calie818a1b21d3: Gained carrier May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.524 [INFO][2703] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.65-k8s-csi--node--driver--wpp8d-eth0 csi-node-driver- calico-system 188278af-6012-468a-b809-e6cd106f483f 1008 0 2025-05-08 00:22:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.65 csi-node-driver-wpp8d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie818a1b21d3 [] []}} ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Namespace="calico-system" Pod="csi-node-driver-wpp8d" WorkloadEndpoint="10.0.0.65-k8s-csi--node--driver--wpp8d-" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.524 [INFO][2703] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Namespace="calico-system" Pod="csi-node-driver-wpp8d" WorkloadEndpoint="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.548 [INFO][2717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" HandleID="k8s-pod-network.61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Workload="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.559 [INFO][2717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" HandleID="k8s-pod-network.61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Workload="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b700), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.65", "pod":"csi-node-driver-wpp8d", "timestamp":"2025-05-08 00:23:05.548457023 +0000 UTC"}, Hostname:"10.0.0.65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.559 [INFO][2717] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.560 [INFO][2717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.560 [INFO][2717] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.65' May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.561 [INFO][2717] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" host="10.0.0.65" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.565 [INFO][2717] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.65" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.568 [INFO][2717] ipam/ipam.go 489: Trying affinity for 192.168.14.64/26 host="10.0.0.65" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.570 [INFO][2717] ipam/ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="10.0.0.65" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.572 [INFO][2717] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="10.0.0.65" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.572 [INFO][2717] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" host="10.0.0.65" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.573 [INFO][2717] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.577 [INFO][2717] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" host="10.0.0.65" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.581 [INFO][2717] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.14.65/26] block=192.168.14.64/26 handle="k8s-pod-network.61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" host="10.0.0.65" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.581 [INFO][2717] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.65/26] handle="k8s-pod-network.61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" host="10.0.0.65" May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.581 [INFO][2717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
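The IPAM trace above shows host 10.0.0.65 confirming its affinity for the 192.168.14.64/26 block and claiming 192.168.14.65 from it. As a quick illustration of the block arithmetic only (standard library, nothing Calico-specific):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The /26 block the host claims an affinity for in the log above.
	block := netip.MustParsePrefix("192.168.14.64/26")

	// A /26 covers 192.168.14.64 through 192.168.14.127, 64 addresses;
	// the log shows .65 being handed to the first workload on this node.
	addr := block.Addr()
	count := 0
	for block.Contains(addr) {
		if count < 3 {
			fmt.Println(addr) // 192.168.14.64, 192.168.14.65, 192.168.14.66
		}
		count++
		addr = addr.Next()
	}
	fmt.Println("addresses in block:", count) // 64
}
```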
May 8 00:23:05.594451 containerd[1441]: 2025-05-08 00:23:05.581 [INFO][2717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.65/26] IPv6=[] ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" HandleID="k8s-pod-network.61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Workload="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.594991 containerd[1441]: 2025-05-08 00:23:05.583 [INFO][2703] cni-plugin/k8s.go 386: Populated endpoint ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Namespace="calico-system" Pod="csi-node-driver-wpp8d" WorkloadEndpoint="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.65-k8s-csi--node--driver--wpp8d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"188278af-6012-468a-b809-e6cd106f483f", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.65", ContainerID:"", Pod:"csi-node-driver-wpp8d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie818a1b21d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:05.594991 containerd[1441]: 2025-05-08 00:23:05.583 [INFO][2703] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.14.65/32] ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Namespace="calico-system" Pod="csi-node-driver-wpp8d" WorkloadEndpoint="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.594991 containerd[1441]: 2025-05-08 00:23:05.583 [INFO][2703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie818a1b21d3 ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Namespace="calico-system" Pod="csi-node-driver-wpp8d" WorkloadEndpoint="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.594991 containerd[1441]: 2025-05-08 00:23:05.584 [INFO][2703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Namespace="calico-system" Pod="csi-node-driver-wpp8d" WorkloadEndpoint="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.594991 containerd[1441]: 2025-05-08 00:23:05.584 [INFO][2703] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Namespace="calico-system" Pod="csi-node-driver-wpp8d" WorkloadEndpoint="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.65-k8s-csi--node--driver--wpp8d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"188278af-6012-468a-b809-e6cd106f483f", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.65", ContainerID:"61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a", Pod:"csi-node-driver-wpp8d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie818a1b21d3", MAC:"fe:2d:fd:21:8a:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:05.594991 containerd[1441]: 2025-05-08 00:23:05.592 [INFO][2703] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a" Namespace="calico-system" Pod="csi-node-driver-wpp8d" WorkloadEndpoint="10.0.0.65-k8s-csi--node--driver--wpp8d-eth0" May 8 00:23:05.609799 containerd[1441]: time="2025-05-08T00:23:05.609442600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:23:05.609799 containerd[1441]: time="2025-05-08T00:23:05.609494966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:23:05.609799 containerd[1441]: time="2025-05-08T00:23:05.609510248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:23:05.609799 containerd[1441]: time="2025-05-08T00:23:05.609572855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:23:05.629130 systemd[1]: Started cri-containerd-61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a.scope - libcontainer container 61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a. 
May 8 00:23:05.637641 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:23:05.646672 containerd[1441]: time="2025-05-08T00:23:05.646637266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpp8d,Uid:188278af-6012-468a-b809-e6cd106f483f,Namespace:calico-system,Attempt:1,} returns sandbox id \"61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a\"" May 8 00:23:05.648548 containerd[1441]: time="2025-05-08T00:23:05.648521810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:23:06.260592 kubelet[1746]: E0508 00:23:06.260539 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:06.365893 containerd[1441]: time="2025-05-08T00:23:06.365620970Z" level=info msg="StopPodSandbox for \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\"" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.402 [INFO][2801] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.402 [INFO][2801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" iface="eth0" netns="/var/run/netns/cni-4cac7d15-5b0f-8aa5-0390-a64accd367c4" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.402 [INFO][2801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" iface="eth0" netns="/var/run/netns/cni-4cac7d15-5b0f-8aa5-0390-a64accd367c4" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.403 [INFO][2801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" iface="eth0" netns="/var/run/netns/cni-4cac7d15-5b0f-8aa5-0390-a64accd367c4" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.403 [INFO][2801] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.403 [INFO][2801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.419 [INFO][2810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" HandleID="k8s-pod-network.1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" Workload="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.419 [INFO][2810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.419 [INFO][2810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.427 [WARNING][2810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" HandleID="k8s-pod-network.1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" Workload="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.427 [INFO][2810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" HandleID="k8s-pod-network.1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" Workload="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.428 [INFO][2810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:06.437776 containerd[1441]: 2025-05-08 00:23:06.430 [INFO][2801] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50" May 8 00:23:06.440933 containerd[1441]: time="2025-05-08T00:23:06.440762895Z" level=info msg="TearDown network for sandbox \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\" successfully" May 8 00:23:06.440933 containerd[1441]: time="2025-05-08T00:23:06.440793738Z" level=info msg="StopPodSandbox for \"1ee8f9c53d20d2d0fc64ece823b731de33a802b32b7be8c3a9de67cfa734ae50\" returns successfully" May 8 00:23:06.441743 containerd[1441]: time="2025-05-08T00:23:06.441403727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-n6fwg,Uid:ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6,Namespace:default,Attempt:1,}" May 8 00:23:06.487158 systemd[1]: run-netns-cni\x2d4cac7d15\x2d5b0f\x2d8aa5\x2d0390\x2da64accd367c4.mount: Deactivated successfully. May 8 00:23:06.555558 containerd[1441]: time="2025-05-08T00:23:06.555427101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:06.557010 containerd[1441]: time="2025-05-08T00:23:06.556966874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 8 00:23:06.557868 containerd[1441]: time="2025-05-08T00:23:06.557827491Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:06.560839 containerd[1441]: time="2025-05-08T00:23:06.560796665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:06.561426 containerd[1441]: time="2025-05-08T00:23:06.561362608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 912.807154ms" May 8 00:23:06.561426 containerd[1441]: time="2025-05-08T00:23:06.561391171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 8 00:23:06.563833 containerd[1441]: time="2025-05-08T00:23:06.563803043Z" level=info msg="CreateContainer within sandbox 
\"61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:23:06.577435 containerd[1441]: time="2025-05-08T00:23:06.577379848Z" level=info msg="CreateContainer within sandbox \"61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d6b73e702b7c5808e088ff921e3f70ec8f63636149a0836c3cb91b6cb5a80a0c\"" May 8 00:23:06.577889 containerd[1441]: time="2025-05-08T00:23:06.577857982Z" level=info msg="StartContainer for \"d6b73e702b7c5808e088ff921e3f70ec8f63636149a0836c3cb91b6cb5a80a0c\"" May 8 00:23:06.578614 systemd-networkd[1379]: cali85a5da6b756: Link UP May 8 00:23:06.579118 systemd-networkd[1379]: cali85a5da6b756: Gained carrier May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.508 [INFO][2826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0 nginx-deployment-85f456d6dd- default ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6 1016 0 2025-05-08 00:22:53 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.65 nginx-deployment-85f456d6dd-n6fwg eth0 default [] [] [kns.default ksa.default.default] cali85a5da6b756 [] []}} ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Namespace="default" Pod="nginx-deployment-85f456d6dd-n6fwg" WorkloadEndpoint="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.508 [INFO][2826] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Namespace="default" Pod="nginx-deployment-85f456d6dd-n6fwg" WorkloadEndpoint="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.535 [INFO][2841] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" HandleID="k8s-pod-network.b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Workload="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.548 [INFO][2841] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" HandleID="k8s-pod-network.b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Workload="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360a30), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.65", "pod":"nginx-deployment-85f456d6dd-n6fwg", "timestamp":"2025-05-08 00:23:06.535692523 +0000 UTC"}, Hostname:"10.0.0.65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.548 [INFO][2841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.548 [INFO][2841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.548 [INFO][2841] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.65' May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.550 [INFO][2841] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" host="10.0.0.65" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.553 [INFO][2841] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.65" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.558 [INFO][2841] ipam/ipam.go 489: Trying affinity for 192.168.14.64/26 host="10.0.0.65" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.560 [INFO][2841] ipam/ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="10.0.0.65" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.562 [INFO][2841] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="10.0.0.65" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.562 [INFO][2841] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" host="10.0.0.65" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.564 [INFO][2841] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660 May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.568 [INFO][2841] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" host="10.0.0.65" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.573 [INFO][2841] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.14.66/26] block=192.168.14.64/26 handle="k8s-pod-network.b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" host="10.0.0.65" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.573 [INFO][2841] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.66/26] handle="k8s-pod-network.b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" host="10.0.0.65" May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.573 [INFO][2841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:23:06.588324 containerd[1441]: 2025-05-08 00:23:06.573 [INFO][2841] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.66/26] IPv6=[] ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" HandleID="k8s-pod-network.b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Workload="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.591232 containerd[1441]: 2025-05-08 00:23:06.575 [INFO][2826] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Namespace="default" Pod="nginx-deployment-85f456d6dd-n6fwg" WorkloadEndpoint="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.65", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-n6fwg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali85a5da6b756", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:06.591232 containerd[1441]: 2025-05-08 00:23:06.576 [INFO][2826] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.14.66/32] ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Namespace="default" Pod="nginx-deployment-85f456d6dd-n6fwg" WorkloadEndpoint="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.591232 containerd[1441]: 2025-05-08 00:23:06.576 [INFO][2826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85a5da6b756 ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Namespace="default" Pod="nginx-deployment-85f456d6dd-n6fwg" WorkloadEndpoint="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.591232 containerd[1441]: 2025-05-08 00:23:06.579 [INFO][2826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Namespace="default" Pod="nginx-deployment-85f456d6dd-n6fwg" WorkloadEndpoint="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.591232 containerd[1441]: 2025-05-08 00:23:06.579 [INFO][2826] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Namespace="default" Pod="nginx-deployment-85f456d6dd-n6fwg" WorkloadEndpoint="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.65", ContainerID:"b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660", Pod:"nginx-deployment-85f456d6dd-n6fwg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali85a5da6b756", MAC:"c2:85:77:60:03:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:06.591232 containerd[1441]: 2025-05-08 00:23:06.586 [INFO][2826] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660" Namespace="default" Pod="nginx-deployment-85f456d6dd-n6fwg" WorkloadEndpoint="10.0.0.65-k8s-nginx--deployment--85f456d6dd--n6fwg-eth0" May 8 00:23:06.609121 systemd[1]: Started cri-containerd-d6b73e702b7c5808e088ff921e3f70ec8f63636149a0836c3cb91b6cb5a80a0c.scope - libcontainer container d6b73e702b7c5808e088ff921e3f70ec8f63636149a0836c3cb91b6cb5a80a0c. May 8 00:23:06.612482 containerd[1441]: time="2025-05-08T00:23:06.612109791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:23:06.612482 containerd[1441]: time="2025-05-08T00:23:06.612212763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:23:06.612482 containerd[1441]: time="2025-05-08T00:23:06.612240046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:23:06.612482 containerd[1441]: time="2025-05-08T00:23:06.612345338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:23:06.629127 systemd[1]: Started cri-containerd-b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660.scope - libcontainer container b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660. 
May 8 00:23:06.636934 containerd[1441]: time="2025-05-08T00:23:06.636894857Z" level=info msg="StartContainer for \"d6b73e702b7c5808e088ff921e3f70ec8f63636149a0836c3cb91b6cb5a80a0c\" returns successfully" May 8 00:23:06.638179 containerd[1441]: time="2025-05-08T00:23:06.638147918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:23:06.643208 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:23:06.670731 containerd[1441]: time="2025-05-08T00:23:06.670615006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-n6fwg,Uid:ff5f541a-02e9-4632-9a4c-7bbd8a4c74f6,Namespace:default,Attempt:1,} returns sandbox id \"b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660\"" May 8 00:23:07.260715 kubelet[1746]: E0508 00:23:07.260651 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:07.283129 systemd-networkd[1379]: calie818a1b21d3: Gained IPv6LL May 8 00:23:07.602749 containerd[1441]: time="2025-05-08T00:23:07.602692551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:07.603710 containerd[1441]: time="2025-05-08T00:23:07.603642772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 8 00:23:07.604452 containerd[1441]: time="2025-05-08T00:23:07.604425455Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:07.606656 containerd[1441]: time="2025-05-08T00:23:07.606616087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:07.607487 containerd[1441]: time="2025-05-08T00:23:07.607449136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 969.266694ms" May 8 00:23:07.607526 containerd[1441]: time="2025-05-08T00:23:07.607482619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 8 00:23:07.608864 containerd[1441]: time="2025-05-08T00:23:07.608838803Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 00:23:07.609738 containerd[1441]: time="2025-05-08T00:23:07.609701695Z" level=info msg="CreateContainer within sandbox \"61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:23:07.620778 containerd[1441]: time="2025-05-08T00:23:07.620732586Z" level=info msg="CreateContainer within sandbox \"61c0487353c06478aeaca1b1d59702c03a642fb3fa823af1a69098a891adcb0a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"73d189ba105a08fa8098359ef12986b06693addcb3ec6100d611bd3e4754c968\"" May 8 00:23:07.621338 containerd[1441]: time="2025-05-08T00:23:07.621296366Z" level=info msg="StartContainer for \"73d189ba105a08fa8098359ef12986b06693addcb3ec6100d611bd3e4754c968\"" May 8 00:23:07.648131 systemd[1]: Started cri-containerd-73d189ba105a08fa8098359ef12986b06693addcb3ec6100d611bd3e4754c968.scope - libcontainer container 73d189ba105a08fa8098359ef12986b06693addcb3ec6100d611bd3e4754c968. May 8 00:23:07.669160 containerd[1441]: time="2025-05-08T00:23:07.669120324Z" level=info msg="StartContainer for \"73d189ba105a08fa8098359ef12986b06693addcb3ec6100d611bd3e4754c968\" returns successfully" May 8 00:23:08.051215 systemd-networkd[1379]: cali85a5da6b756: Gained IPv6LL May 8 00:23:08.261275 kubelet[1746]: E0508 00:23:08.261233 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:08.386195 kubelet[1746]: I0508 00:23:08.386163 1746 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:23:08.386195 kubelet[1746]: I0508 00:23:08.386197 1746 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:23:08.443446 kubelet[1746]: I0508 00:23:08.443386 1746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wpp8d" podStartSLOduration=24.483389098 podStartE2EDuration="26.443368659s" podCreationTimestamp="2025-05-08 00:22:42 +0000 UTC" firstStartedPulling="2025-05-08 00:23:05.648206773 +0000 UTC m=+24.286439324" lastFinishedPulling="2025-05-08 00:23:07.608186334 +0000 UTC m=+26.246418885" observedRunningTime="2025-05-08 00:23:08.442740356 +0000 UTC m=+27.080972907" watchObservedRunningTime="2025-05-08 00:23:08.443368659 +0000 UTC m=+27.081601210" May 8 00:23:09.261611 kubelet[1746]: E0508 00:23:09.261555 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:09.729817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569535789.mount: Deactivated successfully. 
May 8 00:23:10.261763 kubelet[1746]: E0508 00:23:10.261701 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:10.464695 containerd[1441]: time="2025-05-08T00:23:10.464638296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:10.465418 containerd[1441]: time="2025-05-08T00:23:10.465346480Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 8 00:23:10.466688 containerd[1441]: time="2025-05-08T00:23:10.466650397Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:10.469144 containerd[1441]: time="2025-05-08T00:23:10.469078295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:10.470250 containerd[1441]: time="2025-05-08T00:23:10.470150551Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.861278664s" May 8 00:23:10.470250 containerd[1441]: time="2025-05-08T00:23:10.470183354Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 8 00:23:10.472104 containerd[1441]: time="2025-05-08T00:23:10.472077124Z" level=info msg="CreateContainer within sandbox \"b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 8 00:23:10.483320 containerd[1441]: time="2025-05-08T00:23:10.483268490Z" level=info msg="CreateContainer within sandbox \"b2e0d447a9b74fd922a7343ae16b337e8ce927c648c283669b3141d622e6b660\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"261062d035e0f5d6b7c628538ee4d12def643f7671816ac4bce007afd3832377\"" May 8 00:23:10.483766 containerd[1441]: time="2025-05-08T00:23:10.483735292Z" level=info msg="StartContainer for \"261062d035e0f5d6b7c628538ee4d12def643f7671816ac4bce007afd3832377\"" May 8 00:23:10.557176 systemd[1]: Started cri-containerd-261062d035e0f5d6b7c628538ee4d12def643f7671816ac4bce007afd3832377.scope - libcontainer container 261062d035e0f5d6b7c628538ee4d12def643f7671816ac4bce007afd3832377. 
May 8 00:23:10.617971 containerd[1441]: time="2025-05-08T00:23:10.617907863Z" level=info msg="StartContainer for \"261062d035e0f5d6b7c628538ee4d12def643f7671816ac4bce007afd3832377\" returns successfully" May 8 00:23:11.262682 kubelet[1746]: E0508 00:23:11.262635 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:11.459882 kubelet[1746]: I0508 00:23:11.459820 1746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-n6fwg" podStartSLOduration=14.660458512 podStartE2EDuration="18.459805538s" podCreationTimestamp="2025-05-08 00:22:53 +0000 UTC" firstStartedPulling="2025-05-08 00:23:06.671621439 +0000 UTC m=+25.309853950" lastFinishedPulling="2025-05-08 00:23:10.470968465 +0000 UTC m=+29.109200976" observedRunningTime="2025-05-08 00:23:11.459773895 +0000 UTC m=+30.098006446" watchObservedRunningTime="2025-05-08 00:23:11.459805538 +0000 UTC m=+30.098038089" May 8 00:23:12.263209 kubelet[1746]: E0508 00:23:12.263163 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:12.538344 kubelet[1746]: I0508 00:23:12.537276 1746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:23:12.538344 kubelet[1746]: E0508 00:23:12.538009 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:23:13.264255 kubelet[1746]: E0508 00:23:13.264205 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:13.443219 kubelet[1746]: E0508 00:23:13.443181 1746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:23:14.264755 kubelet[1746]: E0508 00:23:14.264702 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:15.265547 kubelet[1746]: E0508 00:23:15.265476 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:16.266360 kubelet[1746]: E0508 00:23:16.266303 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:16.372539 kubelet[1746]: I0508 00:23:16.372493 1746 topology_manager.go:215] "Topology Admit Handler" podUID="8dcbf900-485e-469a-b9af-a54de0d1dd17" podNamespace="default" podName="nfs-server-provisioner-0" May 8 00:23:16.378487 systemd[1]: Created slice kubepods-besteffort-pod8dcbf900_485e_469a_b9af_a54de0d1dd17.slice - libcontainer container kubepods-besteffort-pod8dcbf900_485e_469a_b9af_a54de0d1dd17.slice. 
May 8 00:23:16.460987 kubelet[1746]: I0508 00:23:16.460933 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8dcbf900-485e-469a-b9af-a54de0d1dd17-data\") pod \"nfs-server-provisioner-0\" (UID: \"8dcbf900-485e-469a-b9af-a54de0d1dd17\") " pod="default/nfs-server-provisioner-0" May 8 00:23:16.460987 kubelet[1746]: I0508 00:23:16.460991 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8t5\" (UniqueName: \"kubernetes.io/projected/8dcbf900-485e-469a-b9af-a54de0d1dd17-kube-api-access-gg8t5\") pod \"nfs-server-provisioner-0\" (UID: \"8dcbf900-485e-469a-b9af-a54de0d1dd17\") " pod="default/nfs-server-provisioner-0" May 8 00:23:16.681084 containerd[1441]: time="2025-05-08T00:23:16.681030691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8dcbf900-485e-469a-b9af-a54de0d1dd17,Namespace:default,Attempt:0,}" May 8 00:23:16.853706 systemd-networkd[1379]: cali60e51b789ff: Link UP May 8 00:23:16.853900 systemd-networkd[1379]: cali60e51b789ff: Gained carrier May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.790 [INFO][3127] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.65-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 8dcbf900-485e-469a-b9af-a54de0d1dd17 1081 0 2025-05-08 00:23:16 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.65 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.65-k8s-nfs--server--provisioner--0-" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.790 [INFO][3127] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.814 [INFO][3139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" HandleID="k8s-pod-network.6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Workload="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.825 [INFO][3139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" HandleID="k8s-pod-network.6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" 
Workload="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002db700), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.65", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-08 00:23:16.814485677 +0000 UTC"}, Hostname:"10.0.0.65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.826 [INFO][3139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.826 [INFO][3139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.826 [INFO][3139] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.65' May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.827 [INFO][3139] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" host="10.0.0.65" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.831 [INFO][3139] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.65" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.835 [INFO][3139] ipam/ipam.go 489: Trying affinity for 192.168.14.64/26 host="10.0.0.65" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.836 [INFO][3139] ipam/ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="10.0.0.65" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.838 [INFO][3139] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="10.0.0.65" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.839 [INFO][3139] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" host="10.0.0.65" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.840 [INFO][3139] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8 May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.843 [INFO][3139] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" host="10.0.0.65" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.849 [INFO][3139] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.14.67/26] block=192.168.14.64/26 handle="k8s-pod-network.6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" host="10.0.0.65" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.849 [INFO][3139] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.67/26] handle="k8s-pod-network.6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" host="10.0.0.65" May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.849 [INFO][3139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:23:16.870037 containerd[1441]: 2025-05-08 00:23:16.849 [INFO][3139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.67/26] IPv6=[] ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" HandleID="k8s-pod-network.6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Workload="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" May 8 00:23:16.870565 containerd[1441]: 2025-05-08 00:23:16.851 [INFO][3127] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.65-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"8dcbf900-485e-469a-b9af-a54de0d1dd17", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.65", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:16.870565 containerd[1441]: 2025-05-08 00:23:16.851 [INFO][3127] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.14.67/32] ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" May 8 00:23:16.870565 containerd[1441]: 2025-05-08 00:23:16.851 [INFO][3127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" May 8 00:23:16.870565 containerd[1441]: 2025-05-08 00:23:16.855 [INFO][3127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" May 8 00:23:16.870707 containerd[1441]: 2025-05-08 00:23:16.856 [INFO][3127] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.65-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"8dcbf900-485e-469a-b9af-a54de0d1dd17", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.65", ContainerID:"6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"a6:dc:73:9b:7a:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:16.870707 containerd[1441]: 2025-05-08 00:23:16.865 [INFO][3127] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.65-k8s-nfs--server--provisioner--0-eth0" May 8 00:23:16.892607 containerd[1441]: time="2025-05-08T00:23:16.892427121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:23:16.892944 containerd[1441]: time="2025-05-08T00:23:16.892808786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:23:16.892944 containerd[1441]: time="2025-05-08T00:23:16.892845188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:23:16.893032 containerd[1441]: time="2025-05-08T00:23:16.892974596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:23:16.913139 systemd[1]: Started cri-containerd-6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8.scope - libcontainer container 6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8. 
May 8 00:23:16.922452 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:23:16.970898 containerd[1441]: time="2025-05-08T00:23:16.970730429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8dcbf900-485e-469a-b9af-a54de0d1dd17,Namespace:default,Attempt:0,} returns sandbox id \"6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8\"" May 8 00:23:16.972531 containerd[1441]: time="2025-05-08T00:23:16.972500144Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 8 00:23:17.267234 kubelet[1746]: E0508 00:23:17.267110 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:18.101009 update_engine[1425]: I20250508 00:23:18.100800 1425 update_attempter.cc:509] Updating boot flags... May 8 00:23:18.168034 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3215) May 8 00:23:18.267270 kubelet[1746]: E0508 00:23:18.267236 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:18.419180 systemd-networkd[1379]: cali60e51b789ff: Gained IPv6LL May 8 00:23:18.839329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169163980.mount: Deactivated successfully. May 8 00:23:19.268673 kubelet[1746]: E0508 00:23:19.268573 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:20.269704 kubelet[1746]: E0508 00:23:20.269671 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:20.311725 containerd[1441]: time="2025-05-08T00:23:20.310689013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:20.311725 containerd[1441]: time="2025-05-08T00:23:20.311065273Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 8 00:23:20.312230 containerd[1441]: time="2025-05-08T00:23:20.312197814Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:20.314710 containerd[1441]: time="2025-05-08T00:23:20.314673146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:20.315856 containerd[1441]: time="2025-05-08T00:23:20.315824727Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.343288261s" May 8 00:23:20.315927 containerd[1441]: time="2025-05-08T00:23:20.315869930Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 8 00:23:20.318378 
containerd[1441]: time="2025-05-08T00:23:20.318166332Z" level=info msg="CreateContainer within sandbox \"6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 8 00:23:20.338031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690162642.mount: Deactivated successfully. May 8 00:23:20.362934 containerd[1441]: time="2025-05-08T00:23:20.362882200Z" level=info msg="CreateContainer within sandbox \"6198484a6362a57ba8a5f88ee2944524732c722d4c1d64c0fbc83b4a7af907e8\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"dd722438056a549080c96bde66e978c2031c2dd13a5ec44adb33529f25fd84e6\"" May 8 00:23:20.363352 containerd[1441]: time="2025-05-08T00:23:20.363333064Z" level=info msg="StartContainer for \"dd722438056a549080c96bde66e978c2031c2dd13a5ec44adb33529f25fd84e6\"" May 8 00:23:20.393180 systemd[1]: Started cri-containerd-dd722438056a549080c96bde66e978c2031c2dd13a5ec44adb33529f25fd84e6.scope - libcontainer container dd722438056a549080c96bde66e978c2031c2dd13a5ec44adb33529f25fd84e6. May 8 00:23:20.414132 containerd[1441]: time="2025-05-08T00:23:20.414037292Z" level=info msg="StartContainer for \"dd722438056a549080c96bde66e978c2031c2dd13a5ec44adb33529f25fd84e6\" returns successfully" May 8 00:23:21.270655 kubelet[1746]: E0508 00:23:21.270611 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:22.245599 kubelet[1746]: E0508 00:23:22.245532 1746 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:22.270972 kubelet[1746]: E0508 00:23:22.270922 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:23.271849 kubelet[1746]: E0508 00:23:23.271805 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:24.272826 kubelet[1746]: E0508 00:23:24.272766 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:25.273621 kubelet[1746]: E0508 00:23:25.273560 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:26.274167 kubelet[1746]: E0508 00:23:26.274124 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:27.275130 kubelet[1746]: E0508 00:23:27.275046 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:28.275202 kubelet[1746]: E0508 00:23:28.275133 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:29.276031 kubelet[1746]: E0508 00:23:29.275989 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:30.276991 kubelet[1746]: E0508 00:23:30.276938 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:30.460768 kubelet[1746]: I0508 00:23:30.460704 1746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.116036997 podStartE2EDuration="14.460688737s" 
podCreationTimestamp="2025-05-08 00:23:16 +0000 UTC" firstStartedPulling="2025-05-08 00:23:16.97197683 +0000 UTC m=+35.610209381" lastFinishedPulling="2025-05-08 00:23:20.31662857 +0000 UTC m=+38.954861121" observedRunningTime="2025-05-08 00:23:20.466164515 +0000 UTC m=+39.104397066" watchObservedRunningTime="2025-05-08 00:23:30.460688737 +0000 UTC m=+49.098921288" May 8 00:23:30.461055 kubelet[1746]: I0508 00:23:30.461035 1746 topology_manager.go:215] "Topology Admit Handler" podUID="df672843-c643-40a6-a3ea-d4c5caf00fc3" podNamespace="default" podName="test-pod-1" May 8 00:23:30.468107 systemd[1]: Created slice kubepods-besteffort-poddf672843_c643_40a6_a3ea_d4c5caf00fc3.slice - libcontainer container kubepods-besteffort-poddf672843_c643_40a6_a3ea_d4c5caf00fc3.slice. May 8 00:23:30.639150 kubelet[1746]: I0508 00:23:30.639107 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-053f5111-fcfc-4510-9134-6eb82c97c656\" (UniqueName: \"kubernetes.io/nfs/df672843-c643-40a6-a3ea-d4c5caf00fc3-pvc-053f5111-fcfc-4510-9134-6eb82c97c656\") pod \"test-pod-1\" (UID: \"df672843-c643-40a6-a3ea-d4c5caf00fc3\") " pod="default/test-pod-1" May 8 00:23:30.639150 kubelet[1746]: I0508 00:23:30.639157 1746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cntbg\" (UniqueName: \"kubernetes.io/projected/df672843-c643-40a6-a3ea-d4c5caf00fc3-kube-api-access-cntbg\") pod \"test-pod-1\" (UID: \"df672843-c643-40a6-a3ea-d4c5caf00fc3\") " pod="default/test-pod-1" May 8 00:23:30.765043 kernel: FS-Cache: Loaded May 8 00:23:30.793938 kernel: RPC: Registered named UNIX socket transport module. May 8 00:23:30.794082 kernel: RPC: Registered udp transport module. May 8 00:23:30.794111 kernel: RPC: Registered tcp transport module. May 8 00:23:30.794129 kernel: RPC: Registered tcp-with-tls transport module. May 8 00:23:30.795845 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 8 00:23:30.972990 kernel: NFS: Registering the id_resolver key type May 8 00:23:30.973123 kernel: Key type id_resolver registered May 8 00:23:30.973157 kernel: Key type id_legacy registered May 8 00:23:30.999614 nfsidmap[3341]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 00:23:31.003233 nfsidmap[3344]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 00:23:31.071638 containerd[1441]: time="2025-05-08T00:23:31.071599674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:df672843-c643-40a6-a3ea-d4c5caf00fc3,Namespace:default,Attempt:0,}" May 8 00:23:31.238758 systemd-networkd[1379]: cali5ec59c6bf6e: Link UP May 8 00:23:31.238992 systemd-networkd[1379]: cali5ec59c6bf6e: Gained carrier May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.161 [INFO][3347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.65-k8s-test--pod--1-eth0 default df672843-c643-40a6-a3ea-d4c5caf00fc3 1144 0 2025-05-08 00:23:16 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.65 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.65-k8s-test--pod--1-" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.162 [INFO][3347] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.65-k8s-test--pod--1-eth0" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.188 [INFO][3361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" HandleID="k8s-pod-network.4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Workload="10.0.0.65-k8s-test--pod--1-eth0" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.200 [INFO][3361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" HandleID="k8s-pod-network.4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Workload="10.0.0.65-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c2510), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.65", "pod":"test-pod-1", "timestamp":"2025-05-08 00:23:31.188879261 +0000 UTC"}, Hostname:"10.0.0.65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.200 [INFO][3361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.200 [INFO][3361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.200 [INFO][3361] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.65' May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.202 [INFO][3361] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" host="10.0.0.65" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.206 [INFO][3361] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.65" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.214 [INFO][3361] ipam/ipam.go 489: Trying affinity for 192.168.14.64/26 host="10.0.0.65" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.216 [INFO][3361] ipam/ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="10.0.0.65" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.218 [INFO][3361] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="10.0.0.65" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.218 [INFO][3361] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" host="10.0.0.65" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.222 [INFO][3361] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036 May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.226 [INFO][3361] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" host="10.0.0.65" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.231 [INFO][3361] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.14.68/26] block=192.168.14.64/26 handle="k8s-pod-network.4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" host="10.0.0.65" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.231 [INFO][3361] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.68/26] handle="k8s-pod-network.4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" host="10.0.0.65" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.231 [INFO][3361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.231 [INFO][3361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.68/26] IPv6=[] ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" HandleID="k8s-pod-network.4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Workload="10.0.0.65-k8s-test--pod--1-eth0" May 8 00:23:31.248907 containerd[1441]: 2025-05-08 00:23:31.236 [INFO][3347] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.65-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.65-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"df672843-c643-40a6-a3ea-d4c5caf00fc3", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.65", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:31.249906 containerd[1441]: 2025-05-08 00:23:31.236 [INFO][3347] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.14.68/32] ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.65-k8s-test--pod--1-eth0" May 8 00:23:31.249906 containerd[1441]: 2025-05-08 00:23:31.236 [INFO][3347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.65-k8s-test--pod--1-eth0" May 8 00:23:31.249906 containerd[1441]: 2025-05-08 00:23:31.239 [INFO][3347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.65-k8s-test--pod--1-eth0" May 8 00:23:31.249906 containerd[1441]: 2025-05-08 00:23:31.239 [INFO][3347] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.65-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.65-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"df672843-c643-40a6-a3ea-d4c5caf00fc3", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 23, 16, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.65", ContainerID:"4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"22:67:6b:b4:99:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:31.249906 containerd[1441]: 2025-05-08 00:23:31.246 [INFO][3347] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.65-k8s-test--pod--1-eth0" May 8 00:23:31.269411 containerd[1441]: time="2025-05-08T00:23:31.269291953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:23:31.269411 containerd[1441]: time="2025-05-08T00:23:31.269372396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:23:31.269411 containerd[1441]: time="2025-05-08T00:23:31.269394876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:23:31.269641 containerd[1441]: time="2025-05-08T00:23:31.269479559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:23:31.277552 kubelet[1746]: E0508 00:23:31.277450 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:31.290145 systemd[1]: Started cri-containerd-4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036.scope - libcontainer container 4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036. 
May 8 00:23:31.300473 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:23:31.316307 containerd[1441]: time="2025-05-08T00:23:31.316259662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:df672843-c643-40a6-a3ea-d4c5caf00fc3,Namespace:default,Attempt:0,} returns sandbox id \"4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036\"" May 8 00:23:31.318192 containerd[1441]: time="2025-05-08T00:23:31.317870395Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 00:23:31.591248 containerd[1441]: time="2025-05-08T00:23:31.591206569Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:31.593551 containerd[1441]: time="2025-05-08T00:23:31.591763227Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 8 00:23:31.595009 containerd[1441]: time="2025-05-08T00:23:31.594973253Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 277.069937ms" May 8 00:23:31.595109 containerd[1441]: time="2025-05-08T00:23:31.595010374Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 8 00:23:31.596788 containerd[1441]: time="2025-05-08T00:23:31.596761352Z" level=info msg="CreateContainer within sandbox \"4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 8 00:23:31.607186 containerd[1441]: time="2025-05-08T00:23:31.607140894Z" level=info msg="CreateContainer within sandbox \"4b6b403d6fdf6ce666d985bef14ad5ee69bf7ef2a9f919682e76303bab6ea036\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ef3061c6e76dc365fae7588851fd7c58a12f8bb8c99f2c543a73442609626049\"" May 8 00:23:31.607621 containerd[1441]: time="2025-05-08T00:23:31.607591869Z" level=info msg="StartContainer for \"ef3061c6e76dc365fae7588851fd7c58a12f8bb8c99f2c543a73442609626049\"" May 8 00:23:31.645137 systemd[1]: Started cri-containerd-ef3061c6e76dc365fae7588851fd7c58a12f8bb8c99f2c543a73442609626049.scope - libcontainer container ef3061c6e76dc365fae7588851fd7c58a12f8bb8c99f2c543a73442609626049. May 8 00:23:31.671378 containerd[1441]: time="2025-05-08T00:23:31.671334331Z" level=info msg="StartContainer for \"ef3061c6e76dc365fae7588851fd7c58a12f8bb8c99f2c543a73442609626049\" returns successfully" May 8 00:23:32.280787 kubelet[1746]: E0508 00:23:32.278230 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:32.947165 systemd-networkd[1379]: cali5ec59c6bf6e: Gained IPv6LL May 8 00:23:33.279380 kubelet[1746]: E0508 00:23:33.279250 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:23:34.279560 kubelet[1746]: E0508 00:23:34.279515 1746 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"