Jul 9 23:34:36.088083 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 9 23:34:36.088105 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jul 9 22:16:44 -00 2025 Jul 9 23:34:36.088115 kernel: KASLR enabled Jul 9 23:34:36.088121 kernel: efi: EFI v2.7 by EDK II Jul 9 23:34:36.088126 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Jul 9 23:34:36.088132 kernel: random: crng init done Jul 9 23:34:36.088139 kernel: secureboot: Secure boot disabled Jul 9 23:34:36.088145 kernel: ACPI: Early table checksum verification disabled Jul 9 23:34:36.088151 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Jul 9 23:34:36.088158 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 9 23:34:36.088190 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:34:36.088196 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:34:36.088202 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:34:36.088208 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:34:36.088215 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:34:36.088224 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:34:36.088231 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:34:36.088237 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:34:36.088243 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:34:36.088249 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 9 23:34:36.088255 kernel: NUMA: Failed to initialise from firmware Jul 9 23:34:36.088262 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 9 23:34:36.088268 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jul 9 23:34:36.088274 kernel: Zone ranges: Jul 9 23:34:36.088280 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 9 23:34:36.088297 kernel: DMA32 empty Jul 9 23:34:36.088303 kernel: Normal empty Jul 9 23:34:36.088309 kernel: Movable zone start for each node Jul 9 23:34:36.088315 kernel: Early memory node ranges Jul 9 23:34:36.088321 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Jul 9 23:34:36.088327 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Jul 9 23:34:36.088333 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Jul 9 23:34:36.088339 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jul 9 23:34:36.088346 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jul 9 23:34:36.088352 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 9 23:34:36.088358 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 9 23:34:36.088364 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 9 23:34:36.088371 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 9 23:34:36.088377 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 9 23:34:36.088384 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 9 23:34:36.088392 kernel: psci: probing for conduit method from ACPI. 
Jul 9 23:34:36.088399 kernel: psci: PSCIv1.1 detected in firmware. Jul 9 23:34:36.088406 kernel: psci: Using standard PSCI v0.2 function IDs Jul 9 23:34:36.088414 kernel: psci: Trusted OS migration not required Jul 9 23:34:36.088421 kernel: psci: SMC Calling Convention v1.1 Jul 9 23:34:36.088427 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 9 23:34:36.088434 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 9 23:34:36.088440 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 9 23:34:36.088447 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 9 23:34:36.088453 kernel: Detected PIPT I-cache on CPU0 Jul 9 23:34:36.088460 kernel: CPU features: detected: GIC system register CPU interface Jul 9 23:34:36.088467 kernel: CPU features: detected: Hardware dirty bit management Jul 9 23:34:36.088473 kernel: CPU features: detected: Spectre-v4 Jul 9 23:34:36.088481 kernel: CPU features: detected: Spectre-BHB Jul 9 23:34:36.088488 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 9 23:34:36.088494 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 9 23:34:36.088501 kernel: CPU features: detected: ARM erratum 1418040 Jul 9 23:34:36.088507 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 9 23:34:36.088514 kernel: alternatives: applying boot alternatives Jul 9 23:34:36.088521 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0015aac1230224a9801f0a27c5572cf2e16bbbc9d558c55da5d05d3e334812cd Jul 9 23:34:36.088528 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 9 23:34:36.088535 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 9 23:34:36.088541 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 9 23:34:36.088549 kernel: Fallback order for Node 0: 0 Jul 9 23:34:36.088557 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 9 23:34:36.088563 kernel: Policy zone: DMA Jul 9 23:34:36.088570 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 9 23:34:36.088577 kernel: software IO TLB: area num 4. Jul 9 23:34:36.088583 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jul 9 23:34:36.088590 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved) Jul 9 23:34:36.088597 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 9 23:34:36.088604 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 9 23:34:36.088612 kernel: rcu: RCU event tracing is enabled. Jul 9 23:34:36.088619 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 9 23:34:36.088625 kernel: Trampoline variant of Tasks RCU enabled. Jul 9 23:34:36.088632 kernel: Tracing variant of Tasks RCU enabled. Jul 9 23:34:36.088640 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
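The "Kernel command line:" entry above is where Flatcar's initrd is told how to assemble the protected /usr: mount.usr=/dev/mapper/usr, verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 and verity.usrhash=0015aac1... name the USR-A partition and the dm-verity root hash it has to match, while root=LABEL=ROOT and flatcar.first_boot=detected steer the rest of the boot seen later in this log. As a rough illustration only (simplified parsing, none of the kernel's quoting rules), a small Python sketch that splits that exact string into key/value pairs so the verity parameters can be inspected:

def parse_cmdline(line: str) -> dict:
    # Split a kernel command line into a {key: value} dict; bare flags map to True.
    # Simplified on purpose: the real kernel parser also handles quoted values.
    params = {}
    for token in line.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

# The string below is copied verbatim from the "Kernel command line:" entry above.
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
    "flatcar.first_boot=detected acpi=force "
    "verity.usrhash=0015aac1230224a9801f0a27c5572cf2e16bbbc9d558c55da5d05d3e334812cd"
)

params = parse_cmdline(cmdline)
print(params["verity.usr"])      # PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132
print(params["verity.usrhash"])  # 0015aac1230224a9801f0a27c5572cf2e16bbbc9d558c55da5d05d3e334812cd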
Jul 9 23:34:36.088646 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 9 23:34:36.088653 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 9 23:34:36.088659 kernel: GICv3: 256 SPIs implemented Jul 9 23:34:36.088665 kernel: GICv3: 0 Extended SPIs implemented Jul 9 23:34:36.088672 kernel: Root IRQ handler: gic_handle_irq Jul 9 23:34:36.088678 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 9 23:34:36.088685 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 9 23:34:36.088691 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 9 23:34:36.088698 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jul 9 23:34:36.088705 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jul 9 23:34:36.088712 kernel: GICv3: using LPI property table @0x00000000400f0000 Jul 9 23:34:36.088719 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jul 9 23:34:36.088725 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 9 23:34:36.088732 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 9 23:34:36.088738 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 9 23:34:36.088745 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 9 23:34:36.088752 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 9 23:34:36.088759 kernel: arm-pv: using stolen time PV Jul 9 23:34:36.088766 kernel: Console: colour dummy device 80x25 Jul 9 23:34:36.088773 kernel: ACPI: Core revision 20230628 Jul 9 23:34:36.088780 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 9 23:34:36.088788 kernel: pid_max: default: 32768 minimum: 301 Jul 9 23:34:36.088794 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 9 23:34:36.088801 kernel: landlock: Up and running. Jul 9 23:34:36.088808 kernel: SELinux: Initializing. Jul 9 23:34:36.088815 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 23:34:36.088822 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 23:34:36.088842 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 9 23:34:36.088849 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 9 23:34:36.088856 kernel: rcu: Hierarchical SRCU implementation. Jul 9 23:34:36.088864 kernel: rcu: Max phase no-delay instances is 400. Jul 9 23:34:36.088871 kernel: Platform MSI: ITS@0x8080000 domain created Jul 9 23:34:36.088878 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 9 23:34:36.088885 kernel: Remapping and enabling EFI services. Jul 9 23:34:36.088891 kernel: smp: Bringing up secondary CPUs ... 
Jul 9 23:34:36.088898 kernel: Detected PIPT I-cache on CPU1 Jul 9 23:34:36.088905 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 9 23:34:36.088911 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jul 9 23:34:36.088918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 9 23:34:36.088926 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 9 23:34:36.088934 kernel: Detected PIPT I-cache on CPU2 Jul 9 23:34:36.088945 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 9 23:34:36.088954 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jul 9 23:34:36.088961 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 9 23:34:36.088968 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 9 23:34:36.092279 kernel: Detected PIPT I-cache on CPU3 Jul 9 23:34:36.092322 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 9 23:34:36.092331 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jul 9 23:34:36.092347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 9 23:34:36.092355 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 9 23:34:36.092362 kernel: smp: Brought up 1 node, 4 CPUs Jul 9 23:34:36.092370 kernel: SMP: Total of 4 processors activated. Jul 9 23:34:36.092377 kernel: CPU features: detected: 32-bit EL0 Support Jul 9 23:34:36.092385 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 9 23:34:36.092392 kernel: CPU features: detected: Common not Private translations Jul 9 23:34:36.092400 kernel: CPU features: detected: CRC32 instructions Jul 9 23:34:36.092408 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 9 23:34:36.092416 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 9 23:34:36.092424 kernel: CPU features: detected: LSE atomic instructions Jul 9 23:34:36.092431 kernel: CPU features: detected: Privileged Access Never Jul 9 23:34:36.092439 kernel: CPU features: detected: RAS Extension Support Jul 9 23:34:36.092446 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 9 23:34:36.092454 kernel: CPU: All CPU(s) started at EL1 Jul 9 23:34:36.092461 kernel: alternatives: applying system-wide alternatives Jul 9 23:34:36.092468 kernel: devtmpfs: initialized Jul 9 23:34:36.092476 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 9 23:34:36.092485 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 9 23:34:36.092493 kernel: pinctrl core: initialized pinctrl subsystem Jul 9 23:34:36.092500 kernel: SMBIOS 3.0.0 present. 
Jul 9 23:34:36.092508 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jul 9 23:34:36.092515 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 9 23:34:36.092523 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 9 23:34:36.092530 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 9 23:34:36.092538 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 9 23:34:36.092547 kernel: audit: initializing netlink subsys (disabled) Jul 9 23:34:36.092554 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jul 9 23:34:36.092562 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 9 23:34:36.092569 kernel: cpuidle: using governor menu Jul 9 23:34:36.092577 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 9 23:34:36.092585 kernel: ASID allocator initialised with 32768 entries Jul 9 23:34:36.092592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 9 23:34:36.092600 kernel: Serial: AMBA PL011 UART driver Jul 9 23:34:36.092607 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 9 23:34:36.092616 kernel: Modules: 0 pages in range for non-PLT usage Jul 9 23:34:36.092624 kernel: Modules: 509264 pages in range for PLT usage Jul 9 23:34:36.092631 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 9 23:34:36.092638 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 9 23:34:36.092646 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 9 23:34:36.092653 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 9 23:34:36.092661 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 9 23:34:36.092668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 9 23:34:36.092675 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 9 23:34:36.092684 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 9 23:34:36.092692 kernel: ACPI: Added _OSI(Module Device) Jul 9 23:34:36.092699 kernel: ACPI: Added _OSI(Processor Device) Jul 9 23:34:36.092707 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 9 23:34:36.092714 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 9 23:34:36.092722 kernel: ACPI: Interpreter enabled Jul 9 23:34:36.092729 kernel: ACPI: Using GIC for interrupt routing Jul 9 23:34:36.092737 kernel: ACPI: MCFG table detected, 1 entries Jul 9 23:34:36.092744 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 9 23:34:36.092752 kernel: printk: console [ttyAMA0] enabled Jul 9 23:34:36.092761 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 9 23:34:36.092950 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 9 23:34:36.093029 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 9 23:34:36.093105 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 9 23:34:36.093189 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 9 23:34:36.093259 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 9 23:34:36.093269 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 9 23:34:36.093279 kernel: PCI host bridge to bus 0000:00 Jul 9 23:34:36.093373 kernel: pci_bus 0000:00: root bus resource [mem 
0x10000000-0x3efeffff window] Jul 9 23:34:36.093438 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 9 23:34:36.093510 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 9 23:34:36.093577 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 9 23:34:36.093682 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 9 23:34:36.093764 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 9 23:34:36.093834 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 9 23:34:36.093901 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 9 23:34:36.093968 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 9 23:34:36.094033 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 9 23:34:36.094100 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 9 23:34:36.094191 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 9 23:34:36.094262 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 9 23:34:36.094334 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 9 23:34:36.094586 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 9 23:34:36.094599 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 9 23:34:36.094607 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 9 23:34:36.094615 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 9 23:34:36.094622 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 9 23:34:36.094629 kernel: iommu: Default domain type: Translated Jul 9 23:34:36.094642 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 9 23:34:36.094649 kernel: efivars: Registered efivars operations Jul 9 23:34:36.094657 kernel: vgaarb: loaded Jul 9 23:34:36.094664 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 9 23:34:36.094671 kernel: VFS: Disk quotas dquot_6.6.0 Jul 9 23:34:36.094679 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 9 23:34:36.094686 kernel: pnp: PnP ACPI init Jul 9 23:34:36.094780 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 9 23:34:36.094800 kernel: pnp: PnP ACPI: found 1 devices Jul 9 23:34:36.094813 kernel: NET: Registered PF_INET protocol family Jul 9 23:34:36.094822 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 9 23:34:36.094832 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 9 23:34:36.094841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 9 23:34:36.094848 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 9 23:34:36.094856 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 9 23:34:36.094863 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 9 23:34:36.094871 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 23:34:36.094880 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 23:34:36.094889 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 9 23:34:36.094897 kernel: PCI: CLS 0 bytes, default 64 Jul 9 23:34:36.094904 kernel: kvm [1]: HYP mode not available Jul 9 23:34:36.094911 kernel: Initialise system trusted keyrings Jul 9 23:34:36.094918 kernel: workingset: timestamp_bits=39 max_order=20 
bucket_order=0 Jul 9 23:34:36.094925 kernel: Key type asymmetric registered Jul 9 23:34:36.094932 kernel: Asymmetric key parser 'x509' registered Jul 9 23:34:36.094939 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 9 23:34:36.094948 kernel: io scheduler mq-deadline registered Jul 9 23:34:36.094955 kernel: io scheduler kyber registered Jul 9 23:34:36.094962 kernel: io scheduler bfq registered Jul 9 23:34:36.094970 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 9 23:34:36.094977 kernel: ACPI: button: Power Button [PWRB] Jul 9 23:34:36.094985 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 9 23:34:36.095064 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 9 23:34:36.095074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 9 23:34:36.095082 kernel: thunder_xcv, ver 1.0 Jul 9 23:34:36.095089 kernel: thunder_bgx, ver 1.0 Jul 9 23:34:36.095099 kernel: nicpf, ver 1.0 Jul 9 23:34:36.095106 kernel: nicvf, ver 1.0 Jul 9 23:34:36.095238 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 9 23:34:36.095320 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T23:34:35 UTC (1752104075) Jul 9 23:34:36.095330 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 9 23:34:36.095338 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 9 23:34:36.095346 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 9 23:34:36.095357 kernel: watchdog: Hard watchdog permanently disabled Jul 9 23:34:36.095364 kernel: NET: Registered PF_INET6 protocol family Jul 9 23:34:36.095372 kernel: Segment Routing with IPv6 Jul 9 23:34:36.095379 kernel: In-situ OAM (IOAM) with IPv6 Jul 9 23:34:36.095386 kernel: NET: Registered PF_PACKET protocol family Jul 9 23:34:36.095394 kernel: Key type dns_resolver registered Jul 9 23:34:36.095401 kernel: registered taskstats version 1 Jul 9 23:34:36.095408 kernel: Loading compiled-in X.509 certificates Jul 9 23:34:36.095416 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 4bfa8b95f62181795e6ed675a7c4e4962d1307b1' Jul 9 23:34:36.095423 kernel: Key type .fscrypt registered Jul 9 23:34:36.095431 kernel: Key type fscrypt-provisioning registered Jul 9 23:34:36.095438 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 9 23:34:36.095446 kernel: ima: Allocated hash algorithm: sha1 Jul 9 23:34:36.095453 kernel: ima: No architecture policies found Jul 9 23:34:36.095460 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 9 23:34:36.095467 kernel: clk: Disabling unused clocks Jul 9 23:34:36.095474 kernel: Freeing unused kernel memory: 38336K Jul 9 23:34:36.095481 kernel: Run /init as init process Jul 9 23:34:36.095490 kernel: with arguments: Jul 9 23:34:36.095498 kernel: /init Jul 9 23:34:36.095505 kernel: with environment: Jul 9 23:34:36.095512 kernel: HOME=/ Jul 9 23:34:36.095519 kernel: TERM=linux Jul 9 23:34:36.095526 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 9 23:34:36.095534 systemd[1]: Successfully made /usr/ read-only. Jul 9 23:34:36.095545 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:34:36.095555 systemd[1]: Detected virtualization kvm. 
Jul 9 23:34:36.095562 systemd[1]: Detected architecture arm64. Jul 9 23:34:36.095570 systemd[1]: Running in initrd. Jul 9 23:34:36.095577 systemd[1]: No hostname configured, using default hostname. Jul 9 23:34:36.095585 systemd[1]: Hostname set to . Jul 9 23:34:36.095593 systemd[1]: Initializing machine ID from VM UUID. Jul 9 23:34:36.095601 systemd[1]: Queued start job for default target initrd.target. Jul 9 23:34:36.095609 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:34:36.095618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:34:36.095626 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 9 23:34:36.095634 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:34:36.095642 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 9 23:34:36.095650 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 9 23:34:36.095659 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 9 23:34:36.095667 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 9 23:34:36.095676 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:34:36.095684 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:34:36.095691 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:34:36.095699 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:34:36.095707 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:34:36.095714 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:34:36.095722 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:34:36.095729 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:34:36.095737 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 9 23:34:36.095746 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 9 23:34:36.095754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:34:36.095762 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:34:36.095770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:34:36.095777 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:34:36.095785 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 9 23:34:36.095793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:34:36.095800 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 9 23:34:36.095810 systemd[1]: Starting systemd-fsck-usr.service... Jul 9 23:34:36.095817 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:34:36.095825 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:34:36.095833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:34:36.095841 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 9 23:34:36.095849 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 9 23:34:36.095859 systemd[1]: Finished systemd-fsck-usr.service. Jul 9 23:34:36.095867 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 23:34:36.095875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:34:36.095883 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:34:36.095915 systemd-journald[237]: Collecting audit messages is disabled. Jul 9 23:34:36.095937 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:34:36.095945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 23:34:36.095953 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 9 23:34:36.095962 systemd-journald[237]: Journal started Jul 9 23:34:36.095982 systemd-journald[237]: Runtime Journal (/run/log/journal/aa4c33655e0a493eb086427be155f914) is 5.9M, max 47.3M, 41.4M free. Jul 9 23:34:36.079046 systemd-modules-load[238]: Inserted module 'overlay' Jul 9 23:34:36.102592 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:34:36.102623 kernel: Bridge firewalling registered Jul 9 23:34:36.098780 systemd-modules-load[238]: Inserted module 'br_netfilter' Jul 9 23:34:36.104185 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:34:36.109435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:34:36.112120 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:34:36.115958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:34:36.119744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:34:36.123104 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 9 23:34:36.125259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:34:36.128389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:34:36.133393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 23:34:36.139781 dracut-cmdline[276]: dracut-dracut-053 Jul 9 23:34:36.146096 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0015aac1230224a9801f0a27c5572cf2e16bbbc9d558c55da5d05d3e334812cd Jul 9 23:34:36.179756 systemd-resolved[282]: Positive Trust Anchors: Jul 9 23:34:36.179775 systemd-resolved[282]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:34:36.179806 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:34:36.184724 systemd-resolved[282]: Defaulting to hostname 'linux'. Jul 9 23:34:36.186299 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:34:36.190439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:34:36.244180 kernel: SCSI subsystem initialized Jul 9 23:34:36.248190 kernel: Loading iSCSI transport class v2.0-870. Jul 9 23:34:36.259209 kernel: iscsi: registered transport (tcp) Jul 9 23:34:36.274200 kernel: iscsi: registered transport (qla4xxx) Jul 9 23:34:36.274259 kernel: QLogic iSCSI HBA Driver Jul 9 23:34:36.340224 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 23:34:36.354416 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 9 23:34:36.377194 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 9 23:34:36.377279 kernel: device-mapper: uevent: version 1.0.3 Jul 9 23:34:36.377302 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 9 23:34:36.433196 kernel: raid6: neonx8 gen() 15610 MB/s Jul 9 23:34:36.448736 kernel: raid6: neonx4 gen() 15643 MB/s Jul 9 23:34:36.465220 kernel: raid6: neonx2 gen() 13013 MB/s Jul 9 23:34:36.482198 kernel: raid6: neonx1 gen() 10412 MB/s Jul 9 23:34:36.499208 kernel: raid6: int64x8 gen() 5654 MB/s Jul 9 23:34:36.516212 kernel: raid6: int64x4 gen() 6688 MB/s Jul 9 23:34:36.535337 kernel: raid6: int64x2 gen() 5406 MB/s Jul 9 23:34:36.551385 kernel: raid6: int64x1 gen() 4923 MB/s Jul 9 23:34:36.551451 kernel: raid6: using algorithm neonx4 gen() 15643 MB/s Jul 9 23:34:36.569389 kernel: raid6: .... xor() 12406 MB/s, rmw enabled Jul 9 23:34:36.569462 kernel: raid6: using neon recovery algorithm Jul 9 23:34:36.575686 kernel: xor: measuring software checksum speed Jul 9 23:34:36.575743 kernel: 8regs : 21042 MB/sec Jul 9 23:34:36.575764 kernel: 32regs : 21647 MB/sec Jul 9 23:34:36.575774 kernel: arm64_neon : 27710 MB/sec Jul 9 23:34:36.576290 kernel: xor: using function: arm64_neon (27710 MB/sec) Jul 9 23:34:36.639263 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 23:34:36.653249 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:34:36.665642 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:34:36.692301 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jul 9 23:34:36.696297 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:34:36.702384 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 23:34:36.720539 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Jul 9 23:34:36.756454 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 9 23:34:36.770365 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:34:36.826861 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:34:36.837459 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 23:34:36.848906 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 23:34:36.851722 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:34:36.853153 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:34:36.855863 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:34:36.864377 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 23:34:36.882072 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:34:36.905434 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 9 23:34:36.906009 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 9 23:34:36.911781 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 9 23:34:36.911861 kernel: GPT:9289727 != 19775487 Jul 9 23:34:36.911872 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 9 23:34:36.913636 kernel: GPT:9289727 != 19775487 Jul 9 23:34:36.913691 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 9 23:34:36.913702 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:34:36.914217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 23:34:36.914362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:34:36.917950 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:34:36.925272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:34:36.925453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:34:36.929023 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:34:36.939501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:34:36.954877 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (517) Jul 9 23:34:36.955024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:34:36.961243 kernel: BTRFS: device fsid f1aa6a9e-9364-412a-9943-6643deee9705 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (515) Jul 9 23:34:36.963625 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 9 23:34:36.975823 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 9 23:34:36.988055 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 23:34:36.994541 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 9 23:34:36.995809 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 9 23:34:37.011378 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 23:34:37.013352 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:34:37.020648 disk-uuid[552]: Primary Header is updated. 
Jul 9 23:34:37.020648 disk-uuid[552]: Secondary Entries is updated. Jul 9 23:34:37.020648 disk-uuid[552]: Secondary Header is updated. Jul 9 23:34:37.028215 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:34:37.043569 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:34:38.056138 disk-uuid[553]: The operation has completed successfully. Jul 9 23:34:38.057462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:34:38.084024 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 23:34:38.084127 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 23:34:38.120381 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 23:34:38.123577 sh[573]: Success Jul 9 23:34:38.144196 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 9 23:34:38.194122 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 23:34:38.196500 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 9 23:34:38.199683 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 23:34:38.218213 kernel: BTRFS info (device dm-0): first mount of filesystem f1aa6a9e-9364-412a-9943-6643deee9705 Jul 9 23:34:38.218270 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:34:38.218287 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 9 23:34:38.220192 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 9 23:34:38.220209 kernel: BTRFS info (device dm-0): using free space tree Jul 9 23:34:38.225582 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 23:34:38.227027 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 23:34:38.235341 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 23:34:38.237036 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 23:34:38.258228 kernel: BTRFS info (device vda6): first mount of filesystem 6910ff8a-fc92-44ca-9e13-30ce358da42d Jul 9 23:34:38.258290 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:34:38.258309 kernel: BTRFS info (device vda6): using free space tree Jul 9 23:34:38.263188 kernel: BTRFS info (device vda6): auto enabling async discard Jul 9 23:34:38.268193 kernel: BTRFS info (device vda6): last unmount of filesystem 6910ff8a-fc92-44ca-9e13-30ce358da42d Jul 9 23:34:38.276236 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 23:34:38.283376 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 23:34:38.334143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:34:38.342414 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:34:38.376625 systemd-networkd[755]: lo: Link UP Jul 9 23:34:38.377590 systemd-networkd[755]: lo: Gained carrier Jul 9 23:34:38.378656 systemd-networkd[755]: Enumeration completed Jul 9 23:34:38.378791 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:34:38.380203 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
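The GPT messages from the virtio disk probe above (GPT:9289727 != 19775487, "GPT: Use GNU Parted to correct GPT errors", followed by disk-uuid updating the primary and secondary headers) are the typical signature of a disk image that was enlarged after its partition table was written: the primary header still records a backup header at the old end of the disk, and the first-boot tooling then rewrites it. A small Python sketch of how that particular mismatch can be read straight off the primary GPT header, assuming 512-byte logical sectors as reported for vda above and read access to the device node; the function name is made up for this illustration:

import struct

def gpt_backup_mismatch(dev: str, sector_size: int = 512) -> tuple[int, int]:
    # Return (backup LBA recorded in the primary GPT header, actual last LBA of the device).
    # The kernel's "Alternate GPT header not at the end of the disk" warning compares these.
    with open(dev, "rb") as f:
        f.seek(sector_size)                  # primary GPT header sits at LBA 1
        hdr = f.read(92)
        if hdr[:8] != b"EFI PART":
            raise ValueError("no GPT signature found")
        backup_lba = struct.unpack_from("<Q", hdr, 32)[0]   # AlternateLBA field
        f.seek(0, 2)                         # device size -> last addressable LBA
        last_lba = f.tell() // sector_size - 1
    return backup_lba, last_lba

# For a disk in the state the kernel complained about, gpt_backup_mismatch("/dev/vda")
# would return (9289727, 19775487); once the headers are rewritten the two values match.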
Jul 9 23:34:38.380207 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:34:38.380970 systemd-networkd[755]: eth0: Link UP Jul 9 23:34:38.380973 systemd-networkd[755]: eth0: Gained carrier Jul 9 23:34:38.380979 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:34:38.381101 systemd[1]: Reached target network.target - Network. Jul 9 23:34:38.398818 ignition[683]: Ignition 2.20.0 Jul 9 23:34:38.398827 ignition[683]: Stage: fetch-offline Jul 9 23:34:38.398864 ignition[683]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:34:38.398873 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:34:38.399022 ignition[683]: parsed url from cmdline: "" Jul 9 23:34:38.399025 ignition[683]: no config URL provided Jul 9 23:34:38.399029 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 23:34:38.399037 ignition[683]: no config at "/usr/lib/ignition/user.ign" Jul 9 23:34:38.399060 ignition[683]: op(1): [started] loading QEMU firmware config module Jul 9 23:34:38.399065 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 9 23:34:38.407288 ignition[683]: op(1): [finished] loading QEMU firmware config module Jul 9 23:34:38.415225 systemd-networkd[755]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 23:34:38.417061 ignition[683]: parsing config with SHA512: 7533298836b6f9244f8fec3f4e0e2d10a5d93796478271b034256a6aaf43433b32e0c18338aadb8310ae18c94af987782278d91fa4d24b85e0e80c788c2fe4aa Jul 9 23:34:38.423419 unknown[683]: fetched base config from "system" Jul 9 23:34:38.423435 unknown[683]: fetched user config from "qemu" Jul 9 23:34:38.423811 ignition[683]: fetch-offline: fetch-offline passed Jul 9 23:34:38.423946 ignition[683]: Ignition finished successfully Jul 9 23:34:38.426382 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:34:38.428105 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 9 23:34:38.438399 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 9 23:34:38.465136 ignition[770]: Ignition 2.20.0 Jul 9 23:34:38.465149 ignition[770]: Stage: kargs Jul 9 23:34:38.465346 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:34:38.465357 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:34:38.468148 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 23:34:38.466033 ignition[770]: kargs: kargs passed Jul 9 23:34:38.466075 ignition[770]: Ignition finished successfully Jul 9 23:34:38.474440 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 9 23:34:38.485002 ignition[778]: Ignition 2.20.0 Jul 9 23:34:38.485012 ignition[778]: Stage: disks Jul 9 23:34:38.485206 ignition[778]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:34:38.485217 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:34:38.487580 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 23:34:38.485882 ignition[778]: disks: disks passed Jul 9 23:34:38.489449 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 23:34:38.485926 ignition[778]: Ignition finished successfully Jul 9 23:34:38.491094 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
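Ignition's fetch-offline stage above identifies the QEMU-provided config it parsed by a SHA512 digest ("parsing config with SHA512: 7533..."). For reference, a minimal sketch of producing the same kind of fingerprint over a local config file; the path is hypothetical, and it is an assumption here that the digest covers the raw bytes of the fetched config:

import hashlib

def config_sha512(path: str) -> str:
    # Hash the raw bytes of a config file and return the hex digest,
    # the same 128-hex-character form seen in the ignition log line above.
    with open(path, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest()

print(config_sha512("/tmp/example.ign"))  # hypothetical path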
Jul 9 23:34:38.492747 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:34:38.494637 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:34:38.496140 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:34:38.506340 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 23:34:38.519970 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 9 23:34:38.630676 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 23:34:38.639333 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 9 23:34:38.705191 kernel: EXT4-fs (vda9): mounted filesystem 38f5ef29-b0bb-4998-a2b1-92f576d3cbb2 r/w with ordered data mode. Quota mode: none. Jul 9 23:34:38.705244 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 23:34:38.706674 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 23:34:38.730298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:34:38.732603 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 23:34:38.733614 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 23:34:38.733674 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 23:34:38.733700 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:34:38.742032 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 23:34:38.744035 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797) Jul 9 23:34:38.744017 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 23:34:38.752105 kernel: BTRFS info (device vda6): first mount of filesystem 6910ff8a-fc92-44ca-9e13-30ce358da42d Jul 9 23:34:38.752146 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:34:38.752156 kernel: BTRFS info (device vda6): using free space tree Jul 9 23:34:38.764231 kernel: BTRFS info (device vda6): auto enabling async discard Jul 9 23:34:38.767306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:34:38.811658 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 23:34:38.816133 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory Jul 9 23:34:38.825250 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 23:34:38.828679 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 23:34:38.937975 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 23:34:38.955341 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 23:34:38.957376 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 23:34:38.965192 kernel: BTRFS info (device vda6): last unmount of filesystem 6910ff8a-fc92-44ca-9e13-30ce358da42d Jul 9 23:34:38.993065 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 9 23:34:39.000145 ignition[912]: INFO : Ignition 2.20.0 Jul 9 23:34:39.000145 ignition[912]: INFO : Stage: mount Jul 9 23:34:39.002538 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:34:39.002538 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:34:39.002538 ignition[912]: INFO : mount: mount passed Jul 9 23:34:39.002538 ignition[912]: INFO : Ignition finished successfully Jul 9 23:34:39.002716 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 23:34:39.013391 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 23:34:39.217135 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 23:34:39.228426 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:34:39.247759 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Jul 9 23:34:39.252708 kernel: BTRFS info (device vda6): first mount of filesystem 6910ff8a-fc92-44ca-9e13-30ce358da42d Jul 9 23:34:39.252774 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:34:39.252785 kernel: BTRFS info (device vda6): using free space tree Jul 9 23:34:39.263202 kernel: BTRFS info (device vda6): auto enabling async discard Jul 9 23:34:39.264527 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:34:39.298379 ignition[944]: INFO : Ignition 2.20.0 Jul 9 23:34:39.298379 ignition[944]: INFO : Stage: files Jul 9 23:34:39.300256 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:34:39.300256 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:34:39.300256 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Jul 9 23:34:39.303742 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 23:34:39.303742 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 23:34:39.307734 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 23:34:39.309153 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 23:34:39.309153 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 23:34:39.308332 unknown[944]: wrote ssh authorized keys file for user: core Jul 9 23:34:39.312847 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 9 23:34:39.312847 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 23:34:39.312847 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:34:39.312847 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:34:39.312847 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 23:34:39.312847 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 23:34:39.312847 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): 
[started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 23:34:39.327605 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 9 23:34:39.910678 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 9 23:34:39.913933 systemd-networkd[755]: eth0: Gained IPv6LL Jul 9 23:34:40.270539 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 23:34:40.270539 ignition[944]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 9 23:34:40.278280 ignition[944]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:34:40.278280 ignition[944]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:34:40.278280 ignition[944]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 9 23:34:40.278280 ignition[944]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 9 23:34:40.341018 ignition[944]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:34:40.344844 ignition[944]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:34:40.346751 ignition[944]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 9 23:34:40.346751 ignition[944]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:34:40.346751 ignition[944]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:34:40.346751 ignition[944]: INFO : files: files passed Jul 9 23:34:40.346751 ignition[944]: INFO : Ignition finished successfully Jul 9 23:34:40.347980 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 23:34:40.359482 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 23:34:40.364964 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 23:34:40.366953 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 23:34:40.367082 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 9 23:34:40.387468 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Jul 9 23:34:40.391953 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:34:40.391953 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:34:40.396373 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:34:40.395752 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:34:40.399732 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 23:34:40.417484 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
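Among the files written by the Ignition files stage above is the link /sysroot/etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw, which is how the downloaded extension image gets picked up as a systemd-sysext in the real root. A small sketch of recreating that layout under a scratch directory instead of /sysroot (the /tmp path is illustrative only):

import os

root = "/tmp/sysroot-demo"   # stand-in for /sysroot, illustrative only
os.makedirs(os.path.join(root, "etc/extensions"), exist_ok=True)
os.makedirs(os.path.join(root, "opt/extensions/kubernetes"), exist_ok=True)

# The symlink target is absolute, exactly as written in the ignition log above;
# under the scratch root it will dangle unless the .raw image is actually placed there.
link = os.path.join(root, "etc/extensions/kubernetes.raw")
if not os.path.lexists(link):
    os.symlink("/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw", link)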
Jul 9 23:34:40.454712 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 23:34:40.454839 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 23:34:40.456804 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 9 23:34:40.460921 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 23:34:40.469596 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 23:34:40.470707 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 23:34:40.497426 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:34:40.511425 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 23:34:40.520899 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:34:40.522255 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:34:40.524395 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 23:34:40.525419 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 23:34:40.525567 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:34:40.528347 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 23:34:40.530313 systemd[1]: Stopped target basic.target - Basic System. Jul 9 23:34:40.532909 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 23:34:40.534757 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:34:40.536817 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 23:34:40.539083 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 23:34:40.541051 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:34:40.543685 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 23:34:40.545991 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 23:34:40.548496 systemd[1]: Stopped target swap.target - Swaps. Jul 9 23:34:40.550413 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 23:34:40.550560 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:34:40.555382 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:34:40.556595 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:34:40.558648 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 9 23:34:40.562312 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:34:40.563731 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 23:34:40.563880 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 23:34:40.567681 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 23:34:40.567830 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:34:40.570158 systemd[1]: Stopped target paths.target - Path Units. Jul 9 23:34:40.572102 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 23:34:40.576297 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:34:40.579139 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 9 23:34:40.580957 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 23:34:40.584594 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 23:34:40.584694 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:34:40.586849 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 23:34:40.586931 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:34:40.589171 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 23:34:40.591448 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:34:40.593843 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 23:34:40.593958 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 23:34:40.604474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 23:34:40.605485 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 23:34:40.605676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:34:40.611490 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 23:34:40.612424 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 23:34:40.612600 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:34:40.616337 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 23:34:40.621054 ignition[999]: INFO : Ignition 2.20.0 Jul 9 23:34:40.621054 ignition[999]: INFO : Stage: umount Jul 9 23:34:40.616458 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:34:40.624220 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:34:40.624220 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:34:40.624220 ignition[999]: INFO : umount: umount passed Jul 9 23:34:40.624220 ignition[999]: INFO : Ignition finished successfully Jul 9 23:34:40.622035 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 23:34:40.623232 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 23:34:40.628059 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 23:34:40.628150 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 23:34:40.630668 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 23:34:40.631909 systemd[1]: Stopped target network.target - Network. Jul 9 23:34:40.633757 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 23:34:40.633850 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 23:34:40.635875 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 23:34:40.635934 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 23:34:40.637950 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 23:34:40.638008 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 23:34:40.639728 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 23:34:40.639784 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 23:34:40.641971 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 23:34:40.644286 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 23:34:40.662605 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 23:34:40.662741 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jul 9 23:34:40.672778 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 23:34:40.673063 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 23:34:40.673209 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 23:34:40.677870 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 23:34:40.678691 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 23:34:40.678756 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:34:40.691384 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 23:34:40.692368 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 23:34:40.692462 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:34:40.694629 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:34:40.694688 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:34:40.698323 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 23:34:40.698390 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 23:34:40.700380 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 23:34:40.700446 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:34:40.704107 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:34:40.707014 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 23:34:40.707086 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:34:40.716952 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 23:34:40.717104 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 23:34:40.731025 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 23:34:40.731241 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:34:40.734340 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 23:34:40.734421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 23:34:40.735740 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 23:34:40.735789 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:34:40.737825 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 23:34:40.737903 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:34:40.740899 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 23:34:40.740977 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 23:34:40.743986 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 23:34:40.744063 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:34:40.763510 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 23:34:40.764660 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 23:34:40.764751 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:34:40.768359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 9 23:34:40.768422 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:34:40.773831 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 9 23:34:40.773910 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:34:40.774452 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 23:34:40.774688 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 23:34:40.977367 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 23:34:40.977501 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 23:34:40.979544 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 23:34:40.981038 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 23:34:40.981114 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 23:34:40.991439 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 23:34:40.999048 systemd[1]: Switching root. Jul 9 23:34:41.030598 systemd-journald[237]: Journal stopped Jul 9 23:34:42.127420 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jul 9 23:34:42.127476 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 23:34:42.127488 kernel: SELinux: policy capability open_perms=1 Jul 9 23:34:42.127498 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 23:34:42.127507 kernel: SELinux: policy capability always_check_network=0 Jul 9 23:34:42.127520 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 23:34:42.127530 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 23:34:42.127540 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 23:34:42.127549 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 23:34:42.127563 kernel: audit: type=1403 audit(1752104081.346:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 23:34:42.127574 systemd[1]: Successfully loaded SELinux policy in 38.980ms. Jul 9 23:34:42.127591 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.637ms. Jul 9 23:34:42.127603 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:34:42.127615 systemd[1]: Detected virtualization kvm. Jul 9 23:34:42.127629 systemd[1]: Detected architecture arm64. Jul 9 23:34:42.127639 systemd[1]: Detected first boot. Jul 9 23:34:42.127649 systemd[1]: Initializing machine ID from VM UUID. Jul 9 23:34:42.127660 zram_generator::config[1046]: No configuration found. Jul 9 23:34:42.127672 kernel: NET: Registered PF_VSOCK protocol family Jul 9 23:34:42.127682 systemd[1]: Populated /etc with preset unit settings. Jul 9 23:34:42.127693 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 9 23:34:42.127704 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 23:34:42.127715 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 23:34:42.127725 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 23:34:42.127735 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
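Around the root switch, PID 1 loads the SELinux policy, detects KVM virtualization and the arm64 architecture, and initializes the machine ID from the VM UUID on first boot. A minimal sketch, assuming a running Linux system with the systemd tools in PATH, that re-checks those detections from userspace:

```python
import platform
import subprocess

# Re-check, on a running system, the facts PID 1 logged during the root
# switch: virtualization type, CPU architecture and the generated machine ID.
virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
print("virtualization:", virt.stdout.strip() or "none")  # journal reported: kvm

print("architecture:", platform.machine())  # journal reported: arm64 (aarch64)

with open("/etc/machine-id") as f:
    print("machine-id:", f.read().strip())
```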
Jul 9 23:34:42.127746 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 23:34:42.127756 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 23:34:42.127768 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 9 23:34:42.127779 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 23:34:42.127789 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 23:34:42.127800 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 23:34:42.127810 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 23:34:42.127820 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:34:42.127831 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:34:42.127845 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 23:34:42.127857 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 23:34:42.127869 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 23:34:42.127880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:34:42.127890 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 9 23:34:42.127900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:34:42.127911 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 23:34:42.127921 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 23:34:42.127931 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 23:34:42.127944 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 23:34:42.127954 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:34:42.127965 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:34:42.127975 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:34:42.127986 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:34:42.127996 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 23:34:42.128007 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 23:34:42.128017 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 23:34:42.128027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:34:42.128040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:34:42.128050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:34:42.128061 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 23:34:42.128071 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 23:34:42.128083 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 23:34:42.128093 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 23:34:42.128104 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 9 23:34:42.128115 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jul 9 23:34:42.128125 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 23:34:42.128137 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 23:34:42.128148 systemd[1]: Reached target machines.target - Containers. Jul 9 23:34:42.128158 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 23:34:42.128177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:34:42.128189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:34:42.128200 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 23:34:42.128210 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:34:42.128221 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 23:34:42.128232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:34:42.128244 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 23:34:42.128254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:34:42.128270 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 23:34:42.128283 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 23:34:42.128293 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 23:34:42.128303 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 23:34:42.128314 systemd[1]: Stopped systemd-fsck-usr.service. Jul 9 23:34:42.128326 kernel: loop: module loaded Jul 9 23:34:42.128338 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:34:42.128349 kernel: fuse: init (API version 7.39) Jul 9 23:34:42.128359 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:34:42.128369 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:34:42.128380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 23:34:42.128390 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 23:34:42.128400 kernel: ACPI: bus type drm_connector registered Jul 9 23:34:42.128410 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 23:34:42.128421 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:34:42.128433 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 23:34:42.128444 systemd[1]: Stopped verity-setup.service. Jul 9 23:34:42.128454 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 23:34:42.128489 systemd-journald[1111]: Collecting audit messages is disabled. Jul 9 23:34:42.128512 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 23:34:42.128523 systemd[1]: Mounted media.mount - External Media Directory. 
Jul 9 23:34:42.128534 systemd-journald[1111]: Journal started Jul 9 23:34:42.128556 systemd-journald[1111]: Runtime Journal (/run/log/journal/aa4c33655e0a493eb086427be155f914) is 5.9M, max 47.3M, 41.4M free. Jul 9 23:34:41.870837 systemd[1]: Queued start job for default target multi-user.target. Jul 9 23:34:41.885311 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 9 23:34:41.885736 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 23:34:42.131193 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:34:42.131900 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 23:34:42.133276 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 23:34:42.134714 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 23:34:42.136053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:34:42.137746 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 23:34:42.137946 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 23:34:42.139613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:34:42.139800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:34:42.141459 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 23:34:42.142926 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:34:42.143116 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 23:34:42.144649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:34:42.144829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:34:42.146359 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 23:34:42.147511 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 23:34:42.148938 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:34:42.149138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:34:42.150778 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:34:42.153688 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:34:42.155460 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 23:34:42.157110 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 23:34:42.175053 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:34:42.191360 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 23:34:42.197526 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 23:34:42.198756 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 23:34:42.198819 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:34:42.205553 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 23:34:42.208245 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 23:34:42.210765 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jul 9 23:34:42.212052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:34:42.213556 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 23:34:42.217223 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 23:34:42.218458 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 23:34:42.219731 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 23:34:42.222765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 23:34:42.224211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:34:42.228343 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 23:34:42.231544 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 23:34:42.232576 systemd-journald[1111]: Time spent on flushing to /var/log/journal/aa4c33655e0a493eb086427be155f914 is 20.495ms for 851 entries. Jul 9 23:34:42.232576 systemd-journald[1111]: System Journal (/var/log/journal/aa4c33655e0a493eb086427be155f914) is 8M, max 195.6M, 187.6M free. Jul 9 23:34:42.324187 systemd-journald[1111]: Received client request to flush runtime journal. Jul 9 23:34:42.324276 kernel: loop0: detected capacity change from 0 to 123192 Jul 9 23:34:42.324317 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 23:34:42.236203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:34:42.237884 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 9 23:34:42.239327 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 23:34:42.240891 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 9 23:34:42.261508 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 9 23:34:42.269726 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 23:34:42.280056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:34:42.284100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 23:34:42.299512 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 23:34:42.301422 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 9 23:34:42.328228 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 23:34:42.335940 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 23:34:42.346203 kernel: loop1: detected capacity change from 0 to 113512 Jul 9 23:34:42.349484 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 23:34:42.391612 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Jul 9 23:34:42.391630 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Jul 9 23:34:42.396561 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
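The journal-flush lines above move the runtime journal from /run/log/journal into the persistent journal under /var/log/journal (capped here at 195.6M), taking roughly 20 ms for 851 entries. A quick way to see the resulting on-disk footprint, assuming journalctl is available:

```python
import subprocess

# journalctl --disk-usage reports the combined size of active and archived
# journal files under /run/log/journal and /var/log/journal.
usage = subprocess.run(["journalctl", "--disk-usage"],
                       capture_output=True, text=True, check=True)
print(usage.stdout.strip())
```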
Jul 9 23:34:42.405320 kernel: loop2: detected capacity change from 0 to 207008 Jul 9 23:34:42.436220 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 23:34:42.458390 kernel: loop3: detected capacity change from 0 to 123192 Jul 9 23:34:42.471236 kernel: loop4: detected capacity change from 0 to 113512 Jul 9 23:34:42.484594 kernel: loop5: detected capacity change from 0 to 207008 Jul 9 23:34:42.492029 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 9 23:34:42.492536 (sd-merge)[1191]: Merged extensions into '/usr'. Jul 9 23:34:42.499836 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 23:34:42.499865 systemd[1]: Reloading... Jul 9 23:34:42.574534 zram_generator::config[1220]: No configuration found. Jul 9 23:34:42.679845 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 23:34:42.686871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:34:42.748909 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 23:34:42.749400 systemd[1]: Reloading finished in 249 ms. Jul 9 23:34:42.775251 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 23:34:42.778201 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 23:34:42.790678 systemd[1]: Starting ensure-sysext.service... Jul 9 23:34:42.792832 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:34:42.807799 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Jul 9 23:34:42.807954 systemd[1]: Reloading... Jul 9 23:34:42.810750 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 23:34:42.810957 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 23:34:42.813093 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 23:34:42.813372 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jul 9 23:34:42.813424 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jul 9 23:34:42.816292 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 23:34:42.816301 systemd-tmpfiles[1255]: Skipping /boot Jul 9 23:34:42.826510 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 23:34:42.826527 systemd-tmpfiles[1255]: Skipping /boot Jul 9 23:34:42.861193 zram_generator::config[1284]: No configuration found. Jul 9 23:34:42.954785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:34:43.016461 systemd[1]: Reloading finished in 208 ms. Jul 9 23:34:43.030226 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 23:34:43.047023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:34:43.058206 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
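The (sd-merge) lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, followed by a service-manager reload to pick up the merged unit files. A small sketch, assuming the systemd-sysext tool shipped on this image, for inspecting the merged hierarchies at runtime:

```python
import subprocess

# "systemd-sysext status" prints one row per hierarchy with the extension
# images currently merged into it; after the merge logged above it should
# list containerd-flatcar, docker-flatcar and kubernetes for /usr.
status = subprocess.run(["systemd-sysext", "status"],
                        capture_output=True, text=True, check=True)
print(status.stdout)
```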
Jul 9 23:34:43.061069 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 23:34:43.063793 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 23:34:43.068339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 23:34:43.085517 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:34:43.090692 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 9 23:34:43.095200 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 23:34:43.100794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:34:43.109665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:34:43.114507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:34:43.121559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:34:43.122924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:34:43.123083 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:34:43.124852 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 23:34:43.130541 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 23:34:43.131822 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Jul 9 23:34:43.136362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:34:43.138290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:34:43.138440 augenrules[1351]: No rules Jul 9 23:34:43.140331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:34:43.140509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:34:43.142673 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:34:43.142875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:34:43.146941 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:34:43.147150 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:34:43.152215 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 9 23:34:43.154276 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 23:34:43.159816 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:34:43.170206 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 23:34:43.178283 systemd[1]: Finished ensure-sysext.service. Jul 9 23:34:43.197538 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:34:43.198802 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:34:43.200219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:34:43.202753 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 9 23:34:43.206892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:34:43.210474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:34:43.212418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:34:43.212471 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:34:43.217004 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:34:43.220268 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 9 23:34:43.221434 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 23:34:43.221814 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 23:34:43.223786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:34:43.225899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:34:43.227738 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:34:43.227918 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 23:34:43.231811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:34:43.232000 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:34:43.233971 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:34:43.234154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:34:43.240449 augenrules[1387]: /sbin/augenrules: No change Jul 9 23:34:43.243889 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 9 23:34:43.244072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 23:34:43.244131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 23:34:43.248214 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381) Jul 9 23:34:43.259513 augenrules[1419]: No rules Jul 9 23:34:43.262851 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:34:43.263126 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:34:43.329448 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 9 23:34:43.331614 systemd[1]: Reached target time-set.target - System Time Set. Jul 9 23:34:43.349959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 23:34:43.353904 systemd-resolved[1323]: Positive Trust Anchors: Jul 9 23:34:43.360509 systemd-resolved[1323]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:34:43.360593 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:34:43.367440 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 23:34:43.378516 systemd-resolved[1323]: Defaulting to hostname 'linux'. Jul 9 23:34:43.382349 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:34:43.382469 systemd-networkd[1398]: lo: Link UP Jul 9 23:34:43.382479 systemd-networkd[1398]: lo: Gained carrier Jul 9 23:34:43.383622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:34:43.384020 systemd-networkd[1398]: Enumeration completed Jul 9 23:34:43.385650 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:34:43.387012 systemd[1]: Reached target network.target - Network. Jul 9 23:34:43.394139 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:34:43.394150 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:34:43.397410 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 23:34:43.400238 systemd-networkd[1398]: eth0: Link UP Jul 9 23:34:43.400246 systemd-networkd[1398]: eth0: Gained carrier Jul 9 23:34:43.400275 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:34:43.400638 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 23:34:43.402712 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 23:34:43.413283 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 23:34:43.415783 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Jul 9 23:34:43.420364 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 9 23:34:43.420441 systemd-timesyncd[1399]: Initial clock synchronization to Wed 2025-07-09 23:34:43.532107 UTC. Jul 9 23:34:43.427405 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 23:34:43.451669 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:34:43.462988 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 9 23:34:43.466222 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 9 23:34:43.488157 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 9 23:34:43.501849 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
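At this point eth0 is up with 10.0.0.19/16 leased from 10.0.0.1 via DHCPv4, and systemd-timesyncd has synchronized against the same host. A short sketch, assuming networkctl and timedatectl are available, that checks the equivalent state on a running machine:

```python
import subprocess

def run(cmd):
    """Run a command and return its stdout as text."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Link state, addresses and DHCP lease details for eth0 under systemd-networkd.
print(run(["networkctl", "status", "eth0"]))

# Whether systemd-timesyncd currently reports the clock as synchronized.
print("NTP synchronized:",
      run(["timedatectl", "show", "-p", "NTPSynchronized", "--value"]).strip())
```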
Jul 9 23:34:43.519728 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 9 23:34:43.521371 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:34:43.522560 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:34:43.523732 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 23:34:43.525055 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 23:34:43.526544 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 23:34:43.527736 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 23:34:43.529200 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 23:34:43.530443 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 23:34:43.530478 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:34:43.531384 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:34:43.533466 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 23:34:43.536001 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 23:34:43.539512 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 23:34:43.541155 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 23:34:43.542439 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 23:34:43.547035 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 23:34:43.548873 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 23:34:43.551407 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 9 23:34:43.553184 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 23:34:43.554379 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:34:43.555350 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:34:43.556314 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 9 23:34:43.556348 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 23:34:43.557447 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 23:34:43.559241 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 9 23:34:43.560405 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 23:34:43.563224 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 23:34:43.568772 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 23:34:43.569873 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 23:34:43.571399 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 23:34:43.574386 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 9 23:34:43.574673 jq[1453]: false Jul 9 23:34:43.577445 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 23:34:43.583509 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 23:34:43.585647 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 23:34:43.586274 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 23:34:43.587281 extend-filesystems[1454]: Found loop3 Jul 9 23:34:43.588236 extend-filesystems[1454]: Found loop4 Jul 9 23:34:43.588236 extend-filesystems[1454]: Found loop5 Jul 9 23:34:43.588236 extend-filesystems[1454]: Found vda Jul 9 23:34:43.588236 extend-filesystems[1454]: Found vda1 Jul 9 23:34:43.588236 extend-filesystems[1454]: Found vda2 Jul 9 23:34:43.588236 extend-filesystems[1454]: Found vda3 Jul 9 23:34:43.588236 extend-filesystems[1454]: Found usr Jul 9 23:34:43.588236 extend-filesystems[1454]: Found vda4 Jul 9 23:34:43.588236 extend-filesystems[1454]: Found vda6 Jul 9 23:34:43.588236 extend-filesystems[1454]: Found vda7 Jul 9 23:34:43.588236 extend-filesystems[1454]: Found vda9 Jul 9 23:34:43.588236 extend-filesystems[1454]: Checking size of /dev/vda9 Jul 9 23:34:43.588052 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 23:34:43.593134 dbus-daemon[1452]: [system] SELinux support is enabled Jul 9 23:34:43.590591 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 23:34:43.595828 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 9 23:34:43.597547 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 23:34:43.610508 jq[1462]: true Jul 9 23:34:43.602799 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 23:34:43.605257 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 23:34:43.605647 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 23:34:43.605827 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 23:34:43.609921 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 23:34:43.610180 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 23:34:43.616901 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 23:34:43.616951 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 23:34:43.618255 jq[1470]: true Jul 9 23:34:43.619436 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 23:34:43.619467 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 9 23:34:43.633060 update_engine[1460]: I20250709 23:34:43.632877 1460 main.cc:92] Flatcar Update Engine starting Jul 9 23:34:43.634915 update_engine[1460]: I20250709 23:34:43.634858 1460 update_check_scheduler.cc:74] Next update check in 7m20s Jul 9 23:34:43.635146 systemd[1]: Started update-engine.service - Update Engine. 
Jul 9 23:34:43.639757 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 9 23:34:43.643800 extend-filesystems[1454]: Resized partition /dev/vda9 Jul 9 23:34:43.643994 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 23:34:43.657363 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024) Jul 9 23:34:43.666230 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376) Jul 9 23:34:43.667794 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button) Jul 9 23:34:43.668367 systemd-logind[1459]: New seat seat0. Jul 9 23:34:43.671267 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 23:34:43.688231 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 9 23:34:43.713135 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 23:34:43.844195 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 9 23:34:44.054176 containerd[1484]: time="2025-07-09T23:34:44.054069028Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 9 23:34:44.066775 extend-filesystems[1491]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 23:34:44.066775 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 23:34:44.066775 extend-filesystems[1491]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 9 23:34:44.071546 extend-filesystems[1454]: Resized filesystem in /dev/vda9 Jul 9 23:34:44.068319 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 23:34:44.068968 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 23:34:44.082654 containerd[1484]: time="2025-07-09T23:34:44.082607706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084107117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084141999Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084158272Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084316491Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084335382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084388068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084406073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084601026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084614158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084626443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085279 containerd[1484]: time="2025-07-09T23:34:44.084635103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085521 containerd[1484]: time="2025-07-09T23:34:44.084701444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085521 containerd[1484]: time="2025-07-09T23:34:44.084893658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085521 containerd[1484]: time="2025-07-09T23:34:44.085011758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:34:44.085521 containerd[1484]: time="2025-07-09T23:34:44.085025615Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 9 23:34:44.085521 containerd[1484]: time="2025-07-09T23:34:44.085094896Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 9 23:34:44.085521 containerd[1484]: time="2025-07-09T23:34:44.085143070Z" level=info msg="metadata content store policy set" policy=shared Jul 9 23:34:44.289429 bash[1502]: Updated "/home/core/.ssh/authorized_keys" Jul 9 23:34:44.290964 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 23:34:44.292991 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.409534716Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.409626393Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.409652454Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.409679602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.409704938Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.409903839Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.410251775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.410353441Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.410369513Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.410383651Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.410399118Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.410412008Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.410426750Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 9 23:34:44.412228 containerd[1484]: time="2025-07-09T23:34:44.410441613Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410456517Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410470575Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410483665Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410495628Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410515325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410529584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410541870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410553953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410570589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410584324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410595844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410608855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410621825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412571 containerd[1484]: time="2025-07-09T23:34:44.410635882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410648369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410660252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410672738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410689575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410713985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410727680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410739401Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410905273Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410922754Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410933227Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410945069Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410954253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410967988Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 9 23:34:44.412789 containerd[1484]: time="2025-07-09T23:34:44.410978985Z" level=info msg="NRI interface is disabled by configuration." Jul 9 23:34:44.413008 containerd[1484]: time="2025-07-09T23:34:44.410989055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 9 23:34:44.413029 containerd[1484]: time="2025-07-09T23:34:44.411341018Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 9 23:34:44.413029 containerd[1484]: time="2025-07-09T23:34:44.411389354Z" level=info msg="Connect containerd service" Jul 9 23:34:44.413029 containerd[1484]: time="2025-07-09T23:34:44.411428184Z" level=info msg="using legacy CRI server" Jul 9 23:34:44.413029 containerd[1484]: time="2025-07-09T23:34:44.411435555Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 23:34:44.413029 containerd[1484]: time="2025-07-09T23:34:44.411667485Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 9 23:34:44.413467 containerd[1484]: time="2025-07-09T23:34:44.413442933Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:34:44.413761 
containerd[1484]: time="2025-07-09T23:34:44.413650172Z" level=info msg="Start subscribing containerd event" Jul 9 23:34:44.413761 containerd[1484]: time="2025-07-09T23:34:44.413729483Z" level=info msg="Start recovering state" Jul 9 23:34:44.413889 containerd[1484]: time="2025-07-09T23:34:44.413819186Z" level=info msg="Start event monitor" Jul 9 23:34:44.413889 containerd[1484]: time="2025-07-09T23:34:44.413837191Z" level=info msg="Start snapshots syncer" Jul 9 23:34:44.413889 containerd[1484]: time="2025-07-09T23:34:44.413872878Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:34:44.413889 containerd[1484]: time="2025-07-09T23:34:44.413882626Z" level=info msg="Start streaming server" Jul 9 23:34:44.414227 containerd[1484]: time="2025-07-09T23:34:44.414205427Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 23:34:44.414334 containerd[1484]: time="2025-07-09T23:34:44.414320748Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 23:34:44.415016 containerd[1484]: time="2025-07-09T23:34:44.414982543Z" level=info msg="containerd successfully booted in 0.424205s" Jul 9 23:34:44.415075 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 23:34:45.161762 systemd-networkd[1398]: eth0: Gained IPv6LL Jul 9 23:34:45.167892 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 23:34:45.169935 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 23:34:45.181558 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 23:34:45.184719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:34:45.187245 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 23:34:45.212181 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 23:34:45.214058 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 23:34:45.216206 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 23:34:45.219342 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 23:34:45.454742 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 23:34:45.475571 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 23:34:45.490095 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 23:34:45.499030 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 23:34:45.501220 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 23:34:45.514514 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 23:34:45.528734 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 23:34:45.538583 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 23:34:45.541336 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 9 23:34:45.542859 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 23:34:45.831448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:34:45.833401 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:34:45.838361 systemd[1]: Startup finished in 697ms (kernel) + 5.589s (initrd) + 4.532s (userspace) = 10.820s. 
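The "no network config found in /etc/cni/net.d" error a little earlier is expected at this point in boot: the CRI plugin comes up before any CNI provider (Calico, later in this log) has dropped a config file into that directory. As a rough, stdlib-only Go sketch of the kind of directory check behind that message (the helper name and the main function are illustrative, not containerd code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network config.
// The directory path and the extensions mirror the values in the CRI config
// dump above; the helper itself is a hypothetical illustration.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err // directory may not even exist yet on first boot
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/cni/net.d")
	fmt.Printf("cni config present: %v (err: %v)\n", ok, err)
}
```

The "Start cni network conf syncer for default" entry above is the component that re-runs this kind of check once a CNI provider eventually writes its config, so the error is transient rather than fatal.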
Jul 9 23:34:45.838969 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:34:46.405623 kubelet[1559]: E0709 23:34:46.405504 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:34:46.407994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:34:46.408147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:34:46.411354 systemd[1]: kubelet.service: Consumed 806ms CPU time, 259.7M memory peak. Jul 9 23:34:49.167181 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:34:49.168461 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:52982.service - OpenSSH per-connection server daemon (10.0.0.1:52982). Jul 9 23:34:49.248854 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 52982 ssh2: RSA SHA256:HyhAdeOAFUY/kH++6PU3gv2Y9w2RKf2hhTRFj2wHrDE Jul 9 23:34:49.251483 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:34:49.269296 systemd-logind[1459]: New session 1 of user core. Jul 9 23:34:49.270588 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:34:49.292536 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 23:34:49.304755 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:34:49.322619 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:34:49.327313 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:34:49.332310 systemd-logind[1459]: New session c1 of user core. Jul 9 23:34:49.472334 systemd[1577]: Queued start job for default target default.target. Jul 9 23:34:49.479267 systemd[1577]: Created slice app.slice - User Application Slice. Jul 9 23:34:49.479299 systemd[1577]: Reached target paths.target - Paths. Jul 9 23:34:49.479350 systemd[1577]: Reached target timers.target - Timers. Jul 9 23:34:49.480772 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:34:49.494542 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:34:49.494668 systemd[1577]: Reached target sockets.target - Sockets. Jul 9 23:34:49.494714 systemd[1577]: Reached target basic.target - Basic System. Jul 9 23:34:49.494745 systemd[1577]: Reached target default.target - Main User Target. Jul 9 23:34:49.494770 systemd[1577]: Startup finished in 153ms. Jul 9 23:34:49.494911 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:34:49.496440 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 23:34:49.582655 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:52988.service - OpenSSH per-connection server daemon (10.0.0.1:52988). Jul 9 23:34:49.653072 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 52988 ssh2: RSA SHA256:HyhAdeOAFUY/kH++6PU3gv2Y9w2RKf2hhTRFj2wHrDE Jul 9 23:34:49.654025 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:34:49.659919 systemd-logind[1459]: New session 2 of user core. 
Jul 9 23:34:49.671511 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 9 23:34:49.725702 sshd[1590]: Connection closed by 10.0.0.1 port 52988 Jul 9 23:34:49.726088 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Jul 9 23:34:49.746893 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:52988.service: Deactivated successfully. Jul 9 23:34:49.749815 systemd[1]: session-2.scope: Deactivated successfully. Jul 9 23:34:49.750609 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Jul 9 23:34:49.762773 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:53004.service - OpenSSH per-connection server daemon (10.0.0.1:53004). Jul 9 23:34:49.763953 systemd-logind[1459]: Removed session 2. Jul 9 23:34:49.803619 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 53004 ssh2: RSA SHA256:HyhAdeOAFUY/kH++6PU3gv2Y9w2RKf2hhTRFj2wHrDE Jul 9 23:34:49.805875 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:34:49.814354 systemd-logind[1459]: New session 3 of user core. Jul 9 23:34:49.824426 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 23:34:49.872252 sshd[1598]: Connection closed by 10.0.0.1 port 53004 Jul 9 23:34:49.872738 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Jul 9 23:34:49.884475 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:53004.service: Deactivated successfully. Jul 9 23:34:49.886379 systemd[1]: session-3.scope: Deactivated successfully. Jul 9 23:34:49.888026 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Jul 9 23:34:49.899577 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:53008.service - OpenSSH per-connection server daemon (10.0.0.1:53008). Jul 9 23:34:49.900716 systemd-logind[1459]: Removed session 3. Jul 9 23:34:49.939918 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 53008 ssh2: RSA SHA256:HyhAdeOAFUY/kH++6PU3gv2Y9w2RKf2hhTRFj2wHrDE Jul 9 23:34:49.941539 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:34:49.946700 systemd-logind[1459]: New session 4 of user core. Jul 9 23:34:49.958391 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 23:34:50.021958 sshd[1606]: Connection closed by 10.0.0.1 port 53008 Jul 9 23:34:50.023030 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Jul 9 23:34:50.039749 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:53008.service: Deactivated successfully. Jul 9 23:34:50.041785 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 23:34:50.042601 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Jul 9 23:34:50.054697 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:53024.service - OpenSSH per-connection server daemon (10.0.0.1:53024). Jul 9 23:34:50.056425 systemd-logind[1459]: Removed session 4. Jul 9 23:34:50.099250 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 53024 ssh2: RSA SHA256:HyhAdeOAFUY/kH++6PU3gv2Y9w2RKf2hhTRFj2wHrDE Jul 9 23:34:50.100713 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:34:50.105252 systemd-logind[1459]: New session 5 of user core. Jul 9 23:34:50.112444 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 9 23:34:50.186311 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 23:34:50.186605 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:34:50.208383 sudo[1615]: pam_unix(sudo:session): session closed for user root Jul 9 23:34:50.211722 sshd[1614]: Connection closed by 10.0.0.1 port 53024 Jul 9 23:34:50.212326 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Jul 9 23:34:50.221991 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:53024.service: Deactivated successfully. Jul 9 23:34:50.224160 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 23:34:50.225627 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Jul 9 23:34:50.237563 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:53028.service - OpenSSH per-connection server daemon (10.0.0.1:53028). Jul 9 23:34:50.238527 systemd-logind[1459]: Removed session 5. Jul 9 23:34:50.301482 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 53028 ssh2: RSA SHA256:HyhAdeOAFUY/kH++6PU3gv2Y9w2RKf2hhTRFj2wHrDE Jul 9 23:34:50.302703 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:34:50.312652 systemd-logind[1459]: New session 6 of user core. Jul 9 23:34:50.333525 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 23:34:50.391415 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 23:34:50.392026 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:34:50.395769 sudo[1625]: pam_unix(sudo:session): session closed for user root Jul 9 23:34:50.401358 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 23:34:50.401646 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:34:50.420605 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:34:50.449002 augenrules[1647]: No rules Jul 9 23:34:50.450556 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:34:50.450792 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:34:50.452249 sudo[1624]: pam_unix(sudo:session): session closed for user root Jul 9 23:34:50.455296 sshd[1623]: Connection closed by 10.0.0.1 port 53028 Jul 9 23:34:50.454318 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Jul 9 23:34:50.469495 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:53028.service: Deactivated successfully. Jul 9 23:34:50.471672 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 23:34:50.473111 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Jul 9 23:34:50.484555 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:53044.service - OpenSSH per-connection server daemon (10.0.0.1:53044). Jul 9 23:34:50.485149 systemd-logind[1459]: Removed session 6. Jul 9 23:34:50.526750 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 53044 ssh2: RSA SHA256:HyhAdeOAFUY/kH++6PU3gv2Y9w2RKf2hhTRFj2wHrDE Jul 9 23:34:50.528145 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:34:50.533619 systemd-logind[1459]: New session 7 of user core. Jul 9 23:34:50.546736 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 9 23:34:50.599907 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 23:34:50.601332 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:34:50.630955 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 23:34:50.648413 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 23:34:50.649275 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 23:34:51.205088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:34:51.205265 systemd[1]: kubelet.service: Consumed 806ms CPU time, 259.7M memory peak. Jul 9 23:34:51.219488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:34:51.247417 systemd[1]: Reload requested from client PID 1701 ('systemctl') (unit session-7.scope)... Jul 9 23:34:51.248213 systemd[1]: Reloading... Jul 9 23:34:51.324345 zram_generator::config[1744]: No configuration found. Jul 9 23:34:51.775104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:34:51.863679 systemd[1]: Reloading finished in 614 ms. Jul 9 23:34:51.911044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:34:51.914210 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:34:51.914425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:34:51.914477 systemd[1]: kubelet.service: Consumed 102ms CPU time, 95.1M memory peak. Jul 9 23:34:51.918074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:34:52.038090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:34:52.053586 (kubelet)[1791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:34:52.113207 kubelet[1791]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:34:52.113207 kubelet[1791]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 23:34:52.113207 kubelet[1791]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 23:34:52.113579 kubelet[1791]: I0709 23:34:52.113265 1791 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:34:53.261737 kubelet[1791]: I0709 23:34:53.261682 1791 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 23:34:53.261737 kubelet[1791]: I0709 23:34:53.261722 1791 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:34:53.262221 kubelet[1791]: I0709 23:34:53.262004 1791 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 23:34:53.326292 kubelet[1791]: I0709 23:34:53.326250 1791 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:34:53.346811 kubelet[1791]: E0709 23:34:53.346751 1791 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 9 23:34:53.346811 kubelet[1791]: I0709 23:34:53.346807 1791 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 9 23:34:53.350857 kubelet[1791]: I0709 23:34:53.350832 1791 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 23:34:53.351148 kubelet[1791]: I0709 23:34:53.351097 1791 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:34:53.351346 kubelet[1791]: I0709 23:34:53.351138 1791 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:34:53.351504 kubelet[1791]: I0709 23:34:53.351492 1791 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:34:53.351504 kubelet[1791]: I0709 23:34:53.351504 1791 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 
23:34:53.351855 kubelet[1791]: I0709 23:34:53.351829 1791 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:34:53.354955 kubelet[1791]: I0709 23:34:53.354795 1791 kubelet.go:446] "Attempting to sync node with API server" Jul 9 23:34:53.354955 kubelet[1791]: I0709 23:34:53.354831 1791 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:34:53.354955 kubelet[1791]: I0709 23:34:53.354855 1791 kubelet.go:352] "Adding apiserver pod source" Jul 9 23:34:53.354955 kubelet[1791]: I0709 23:34:53.354867 1791 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:34:53.355124 kubelet[1791]: E0709 23:34:53.354991 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:34:53.355124 kubelet[1791]: E0709 23:34:53.355075 1791 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:34:53.359192 kubelet[1791]: I0709 23:34:53.359156 1791 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 9 23:34:53.360076 kubelet[1791]: I0709 23:34:53.360060 1791 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:34:53.363800 kubelet[1791]: W0709 23:34:53.363771 1791 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 23:34:53.365272 kubelet[1791]: I0709 23:34:53.365241 1791 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 23:34:53.365339 kubelet[1791]: I0709 23:34:53.365289 1791 server.go:1287] "Started kubelet" Jul 9 23:34:53.368207 kubelet[1791]: I0709 23:34:53.365415 1791 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:34:53.368207 kubelet[1791]: W0709 23:34:53.365938 1791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 9 23:34:53.368207 kubelet[1791]: E0709 23:34:53.365974 1791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.19\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 9 23:34:53.368207 kubelet[1791]: I0709 23:34:53.366388 1791 server.go:479] "Adding debug handlers to kubelet server" Jul 9 23:34:53.368207 kubelet[1791]: I0709 23:34:53.367116 1791 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:34:53.368207 kubelet[1791]: I0709 23:34:53.367512 1791 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:34:53.368476 kubelet[1791]: W0709 23:34:53.368443 1791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 9 23:34:53.368513 kubelet[1791]: E0709 23:34:53.368483 1791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" 
logger="UnhandledError" Jul 9 23:34:53.369139 kubelet[1791]: I0709 23:34:53.369114 1791 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:34:53.369310 kubelet[1791]: I0709 23:34:53.369281 1791 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:34:53.369680 kubelet[1791]: I0709 23:34:53.369632 1791 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 23:34:53.370906 kubelet[1791]: I0709 23:34:53.370863 1791 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 23:34:53.370982 kubelet[1791]: I0709 23:34:53.370935 1791 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:34:53.375673 kubelet[1791]: E0709 23:34:53.375623 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:53.383393 kubelet[1791]: I0709 23:34:53.378247 1791 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:34:53.383393 kubelet[1791]: I0709 23:34:53.378342 1791 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:34:53.383393 kubelet[1791]: E0709 23:34:53.379422 1791 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:34:53.383393 kubelet[1791]: E0709 23:34:53.380127 1791 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.19\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jul 9 23:34:53.383393 kubelet[1791]: I0709 23:34:53.380769 1791 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:34:53.383393 kubelet[1791]: E0709 23:34:53.380190 1791 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.19.1850b963ce8b58b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.19,UID:10.0.0.19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.19,},FirstTimestamp:2025-07-09 23:34:53.365262514 +0000 UTC m=+1.307454640,LastTimestamp:2025-07-09 23:34:53.365262514 +0000 UTC m=+1.307454640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.19,}" Jul 9 23:34:53.383393 kubelet[1791]: W0709 23:34:53.381485 1791 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 9 23:34:53.383699 kubelet[1791]: E0709 23:34:53.381535 1791 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" 
logger="UnhandledError" Jul 9 23:34:53.392487 kubelet[1791]: E0709 23:34:53.391384 1791 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.19.1850b963cf62d411 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.19,UID:10.0.0.19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.19,},FirstTimestamp:2025-07-09 23:34:53.379384337 +0000 UTC m=+1.321576463,LastTimestamp:2025-07-09 23:34:53.379384337 +0000 UTC m=+1.321576463,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.19,}" Jul 9 23:34:53.392487 kubelet[1791]: I0709 23:34:53.392314 1791 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 23:34:53.392487 kubelet[1791]: I0709 23:34:53.392329 1791 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 23:34:53.392487 kubelet[1791]: I0709 23:34:53.392349 1791 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:34:53.396861 kubelet[1791]: E0709 23:34:53.396740 1791 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.19.1850b963d00537ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.19,UID:10.0.0.19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.19 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.19,},FirstTimestamp:2025-07-09 23:34:53.390026682 +0000 UTC m=+1.332218808,LastTimestamp:2025-07-09 23:34:53.390026682 +0000 UTC m=+1.332218808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.19,}" Jul 9 23:34:53.403875 kubelet[1791]: E0709 23:34:53.403760 1791 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.19.1850b963d009dab2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.19,UID:10.0.0.19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.19 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.19,},FirstTimestamp:2025-07-09 23:34:53.390330546 +0000 UTC m=+1.332522672,LastTimestamp:2025-07-09 23:34:53.390330546 +0000 UTC m=+1.332522672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.19,}" Jul 9 23:34:53.411503 kubelet[1791]: E0709 23:34:53.411409 1791 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.19.1850b963d009f37b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.19,UID:10.0.0.19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.19 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.19,},FirstTimestamp:2025-07-09 23:34:53.390336891 +0000 UTC m=+1.332528977,LastTimestamp:2025-07-09 23:34:53.390336891 +0000 UTC m=+1.332528977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.19,}" Jul 9 23:34:53.475819 kubelet[1791]: E0709 23:34:53.475743 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:53.576468 kubelet[1791]: E0709 23:34:53.576344 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:53.587056 kubelet[1791]: I0709 23:34:53.587012 1791 policy_none.go:49] "None policy: Start" Jul 9 23:34:53.587056 kubelet[1791]: I0709 23:34:53.587047 1791 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 23:34:53.587056 kubelet[1791]: I0709 23:34:53.587062 1791 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:34:53.595480 kubelet[1791]: E0709 23:34:53.595192 1791 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.19\" not found" node="10.0.0.19" Jul 9 23:34:53.596270 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 23:34:53.613281 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 23:34:53.617388 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 23:34:53.634219 kubelet[1791]: I0709 23:34:53.633406 1791 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:34:53.634219 kubelet[1791]: I0709 23:34:53.633656 1791 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:34:53.634219 kubelet[1791]: I0709 23:34:53.633872 1791 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:34:53.634219 kubelet[1791]: I0709 23:34:53.633885 1791 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:34:53.634576 kubelet[1791]: I0709 23:34:53.634544 1791 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:34:53.634576 kubelet[1791]: I0709 23:34:53.634570 1791 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 23:34:53.634640 kubelet[1791]: I0709 23:34:53.634589 1791 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 9 23:34:53.634640 kubelet[1791]: I0709 23:34:53.634600 1791 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 23:34:53.634685 kubelet[1791]: E0709 23:34:53.634645 1791 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 9 23:34:53.637142 kubelet[1791]: I0709 23:34:53.637117 1791 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:34:53.638979 kubelet[1791]: E0709 23:34:53.638762 1791 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 9 23:34:53.638979 kubelet[1791]: E0709 23:34:53.638809 1791 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.19\" not found" Jul 9 23:34:53.735633 kubelet[1791]: I0709 23:34:53.735595 1791 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.19" Jul 9 23:34:53.743505 kubelet[1791]: I0709 23:34:53.743420 1791 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.19" Jul 9 23:34:53.743505 kubelet[1791]: E0709 23:34:53.743458 1791 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.19\": node \"10.0.0.19\" not found" Jul 9 23:34:53.771639 kubelet[1791]: E0709 23:34:53.771605 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:53.872467 kubelet[1791]: E0709 23:34:53.872338 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:53.973039 kubelet[1791]: E0709 23:34:53.973001 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:54.073901 kubelet[1791]: E0709 23:34:54.073825 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:54.114482 sudo[1659]: pam_unix(sudo:session): session closed for user root Jul 9 23:34:54.115910 sshd[1658]: Connection closed by 10.0.0.1 port 53044 Jul 9 23:34:54.116373 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Jul 9 23:34:54.120900 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:34:54.121217 systemd[1]: session-7.scope: Consumed 479ms CPU time, 75.6M memory peak. Jul 9 23:34:54.122373 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:53044.service: Deactivated successfully. Jul 9 23:34:54.124679 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:34:54.126051 systemd-logind[1459]: Removed session 7. 
Jul 9 23:34:54.174587 kubelet[1791]: E0709 23:34:54.174544 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:54.264333 kubelet[1791]: I0709 23:34:54.264301 1791 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 9 23:34:54.264680 kubelet[1791]: W0709 23:34:54.264485 1791 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 9 23:34:54.275666 kubelet[1791]: E0709 23:34:54.275630 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:54.355445 kubelet[1791]: E0709 23:34:54.355379 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:34:54.376195 kubelet[1791]: E0709 23:34:54.376025 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:54.476234 kubelet[1791]: E0709 23:34:54.476188 1791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" Jul 9 23:34:54.577328 kubelet[1791]: I0709 23:34:54.577286 1791 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 9 23:34:54.579422 containerd[1484]: time="2025-07-09T23:34:54.579371997Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 23:34:54.579741 kubelet[1791]: I0709 23:34:54.579612 1791 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 9 23:34:55.356278 kubelet[1791]: I0709 23:34:55.356242 1791 apiserver.go:52] "Watching apiserver" Jul 9 23:34:55.356636 kubelet[1791]: E0709 23:34:55.356308 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:34:55.380257 kubelet[1791]: E0709 23:34:55.380203 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j68cs" podUID="9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72" Jul 9 23:34:55.388536 systemd[1]: Created slice kubepods-besteffort-podffdcc519_6a3b_43d8_adc3_0ad77694a378.slice - libcontainer container kubepods-besteffort-podffdcc519_6a3b_43d8_adc3_0ad77694a378.slice. Jul 9 23:34:55.412338 systemd[1]: Created slice kubepods-besteffort-pod9962dd95_7b27_4985_a7a8_eb234850fec6.slice - libcontainer container kubepods-besteffort-pod9962dd95_7b27_4985_a7a8_eb234850fec6.slice. 
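The runtime-config update just above hands the CRI plugin a pod CIDR of 192.168.1.0/24 for this node (originalPodCIDR was empty). A small stdlib sketch of what that range amounts to; the parsing call is standard library, the printed arithmetic is just for illustration:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Pod CIDR as logged for node 10.0.0.19.
	_, ipnet, err := net.ParseCIDR("192.168.1.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	total := 1 << (bits - ones) // 256 addresses in a /24
	fmt.Printf("pod CIDR %s holds %d addresses (before any IPAM reservations)\n", ipnet, total)
}
```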
Jul 9 23:34:55.471866 kubelet[1791]: I0709 23:34:55.471796 1791 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 23:34:55.481052 kubelet[1791]: I0709 23:34:55.480985 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffdcc519-6a3b-43d8-adc3-0ad77694a378-xtables-lock\") pod \"kube-proxy-fg9dt\" (UID: \"ffdcc519-6a3b-43d8-adc3-0ad77694a378\") " pod="kube-system/kube-proxy-fg9dt" Jul 9 23:34:55.481052 kubelet[1791]: I0709 23:34:55.481030 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9962dd95-7b27-4985-a7a8-eb234850fec6-cni-bin-dir\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481052 kubelet[1791]: I0709 23:34:55.481052 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9962dd95-7b27-4985-a7a8-eb234850fec6-tigera-ca-bundle\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481052 kubelet[1791]: I0709 23:34:55.481068 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbw97\" (UniqueName: \"kubernetes.io/projected/9962dd95-7b27-4985-a7a8-eb234850fec6-kube-api-access-dbw97\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481270 kubelet[1791]: I0709 23:34:55.481091 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72-registration-dir\") pod \"csi-node-driver-j68cs\" (UID: \"9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72\") " pod="calico-system/csi-node-driver-j68cs" Jul 9 23:34:55.481270 kubelet[1791]: I0709 23:34:55.481106 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72-varrun\") pod \"csi-node-driver-j68cs\" (UID: \"9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72\") " pod="calico-system/csi-node-driver-j68cs" Jul 9 23:34:55.481270 kubelet[1791]: I0709 23:34:55.481122 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5gbg\" (UniqueName: \"kubernetes.io/projected/9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72-kube-api-access-h5gbg\") pod \"csi-node-driver-j68cs\" (UID: \"9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72\") " pod="calico-system/csi-node-driver-j68cs" Jul 9 23:34:55.481270 kubelet[1791]: I0709 23:34:55.481137 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ffdcc519-6a3b-43d8-adc3-0ad77694a378-kube-proxy\") pod \"kube-proxy-fg9dt\" (UID: \"ffdcc519-6a3b-43d8-adc3-0ad77694a378\") " pod="kube-system/kube-proxy-fg9dt" Jul 9 23:34:55.481270 kubelet[1791]: I0709 23:34:55.481156 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9962dd95-7b27-4985-a7a8-eb234850fec6-flexvol-driver-host\") pod 
\"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481372 kubelet[1791]: I0709 23:34:55.481219 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9962dd95-7b27-4985-a7a8-eb234850fec6-var-lib-calico\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481372 kubelet[1791]: I0709 23:34:55.481277 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72-kubelet-dir\") pod \"csi-node-driver-j68cs\" (UID: \"9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72\") " pod="calico-system/csi-node-driver-j68cs" Jul 9 23:34:55.481372 kubelet[1791]: I0709 23:34:55.481296 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqhn\" (UniqueName: \"kubernetes.io/projected/ffdcc519-6a3b-43d8-adc3-0ad77694a378-kube-api-access-frqhn\") pod \"kube-proxy-fg9dt\" (UID: \"ffdcc519-6a3b-43d8-adc3-0ad77694a378\") " pod="kube-system/kube-proxy-fg9dt" Jul 9 23:34:55.481372 kubelet[1791]: I0709 23:34:55.481331 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9962dd95-7b27-4985-a7a8-eb234850fec6-node-certs\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481451 kubelet[1791]: I0709 23:34:55.481378 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9962dd95-7b27-4985-a7a8-eb234850fec6-var-run-calico\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481451 kubelet[1791]: I0709 23:34:55.481407 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9962dd95-7b27-4985-a7a8-eb234850fec6-xtables-lock\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481451 kubelet[1791]: I0709 23:34:55.481425 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffdcc519-6a3b-43d8-adc3-0ad77694a378-lib-modules\") pod \"kube-proxy-fg9dt\" (UID: \"ffdcc519-6a3b-43d8-adc3-0ad77694a378\") " pod="kube-system/kube-proxy-fg9dt" Jul 9 23:34:55.481524 kubelet[1791]: I0709 23:34:55.481470 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9962dd95-7b27-4985-a7a8-eb234850fec6-cni-log-dir\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481524 kubelet[1791]: I0709 23:34:55.481488 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9962dd95-7b27-4985-a7a8-eb234850fec6-cni-net-dir\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " 
pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481524 kubelet[1791]: I0709 23:34:55.481502 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9962dd95-7b27-4985-a7a8-eb234850fec6-lib-modules\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481660 kubelet[1791]: I0709 23:34:55.481535 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9962dd95-7b27-4985-a7a8-eb234850fec6-policysync\") pod \"calico-node-m5mc2\" (UID: \"9962dd95-7b27-4985-a7a8-eb234850fec6\") " pod="calico-system/calico-node-m5mc2" Jul 9 23:34:55.481660 kubelet[1791]: I0709 23:34:55.481551 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72-socket-dir\") pod \"csi-node-driver-j68cs\" (UID: \"9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72\") " pod="calico-system/csi-node-driver-j68cs" Jul 9 23:34:55.584300 kubelet[1791]: E0709 23:34:55.584033 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:55.584300 kubelet[1791]: W0709 23:34:55.584059 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:55.584300 kubelet[1791]: E0709 23:34:55.584086 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:55.584300 kubelet[1791]: E0709 23:34:55.584273 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:55.584300 kubelet[1791]: W0709 23:34:55.584282 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:55.584300 kubelet[1791]: E0709 23:34:55.584297 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:55.584658 kubelet[1791]: E0709 23:34:55.584484 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:55.584658 kubelet[1791]: W0709 23:34:55.584503 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:55.584658 kubelet[1791]: E0709 23:34:55.584523 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:34:55.584752 kubelet[1791]: E0709 23:34:55.584745 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:55.584788 kubelet[1791]: W0709 23:34:55.584754 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:55.584788 kubelet[1791]: E0709 23:34:55.584769 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:55.588611 kubelet[1791]: E0709 23:34:55.588313 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:55.588611 kubelet[1791]: W0709 23:34:55.588336 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:55.588611 kubelet[1791]: E0709 23:34:55.588356 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:55.588813 kubelet[1791]: E0709 23:34:55.588792 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:55.588813 kubelet[1791]: W0709 23:34:55.588810 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:55.588909 kubelet[1791]: E0709 23:34:55.588832 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:55.618262 kubelet[1791]: E0709 23:34:55.618135 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:55.618262 kubelet[1791]: W0709 23:34:55.618178 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:55.618262 kubelet[1791]: E0709 23:34:55.618204 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:55.620650 kubelet[1791]: E0709 23:34:55.620471 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:55.620650 kubelet[1791]: W0709 23:34:55.620494 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:55.620650 kubelet[1791]: E0709 23:34:55.620515 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:34:55.620877 kubelet[1791]: E0709 23:34:55.620865 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:55.620980 kubelet[1791]: W0709 23:34:55.620927 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:55.620980 kubelet[1791]: E0709 23:34:55.620954 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:55.713683 containerd[1484]: time="2025-07-09T23:34:55.713639579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fg9dt,Uid:ffdcc519-6a3b-43d8-adc3-0ad77694a378,Namespace:kube-system,Attempt:0,}" Jul 9 23:34:55.716202 containerd[1484]: time="2025-07-09T23:34:55.716129510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m5mc2,Uid:9962dd95-7b27-4985-a7a8-eb234850fec6,Namespace:calico-system,Attempt:0,}" Jul 9 23:34:56.356540 kubelet[1791]: E0709 23:34:56.356477 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:34:56.774592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991625657.mount: Deactivated successfully. Jul 9 23:34:56.803612 containerd[1484]: time="2025-07-09T23:34:56.803551060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:34:56.812350 containerd[1484]: time="2025-07-09T23:34:56.812240336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 9 23:34:56.828915 containerd[1484]: time="2025-07-09T23:34:56.828824093Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:34:56.834554 containerd[1484]: time="2025-07-09T23:34:56.834470874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:34:56.838619 containerd[1484]: time="2025-07-09T23:34:56.838486094Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 9 23:34:56.843007 containerd[1484]: time="2025-07-09T23:34:56.842683458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:34:56.843712 containerd[1484]: time="2025-07-09T23:34:56.843666376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.129932757s" Jul 9 23:34:56.854884 containerd[1484]: time="2025-07-09T23:34:56.854825386Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.138587386s" Jul 9 23:34:56.999604 containerd[1484]: time="2025-07-09T23:34:56.999454386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:34:56.999604 containerd[1484]: time="2025-07-09T23:34:56.999564981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:34:56.999604 containerd[1484]: time="2025-07-09T23:34:56.999583521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:34:56.999789 containerd[1484]: time="2025-07-09T23:34:56.999682559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:34:57.013551 containerd[1484]: time="2025-07-09T23:34:57.013271993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:34:57.013551 containerd[1484]: time="2025-07-09T23:34:57.013333779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:34:57.013551 containerd[1484]: time="2025-07-09T23:34:57.013345173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:34:57.013551 containerd[1484]: time="2025-07-09T23:34:57.013427260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:34:57.103494 systemd[1]: Started cri-containerd-1610ff3647dca4d7df344f71d184b8b2f0353bb974d14c1d904377092c638adb.scope - libcontainer container 1610ff3647dca4d7df344f71d184b8b2f0353bb974d14c1d904377092c638adb. Jul 9 23:34:57.107497 systemd[1]: Started cri-containerd-b5e3276d394b0a7ae0b8ef986ad08b18b65a22bfc5fec333159ad88c37772409.scope - libcontainer container b5e3276d394b0a7ae0b8ef986ad08b18b65a22bfc5fec333159ad88c37772409. 
Jul 9 23:34:57.137281 containerd[1484]: time="2025-07-09T23:34:57.135545896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fg9dt,Uid:ffdcc519-6a3b-43d8-adc3-0ad77694a378,Namespace:kube-system,Attempt:0,} returns sandbox id \"1610ff3647dca4d7df344f71d184b8b2f0353bb974d14c1d904377092c638adb\"" Jul 9 23:34:57.138547 containerd[1484]: time="2025-07-09T23:34:57.138480537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 9 23:34:57.141379 containerd[1484]: time="2025-07-09T23:34:57.141304885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m5mc2,Uid:9962dd95-7b27-4985-a7a8-eb234850fec6,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5e3276d394b0a7ae0b8ef986ad08b18b65a22bfc5fec333159ad88c37772409\"" Jul 9 23:34:57.357540 kubelet[1791]: E0709 23:34:57.357404 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:34:57.636828 kubelet[1791]: E0709 23:34:57.636488 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j68cs" podUID="9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72" Jul 9 23:34:58.259307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514518162.mount: Deactivated successfully. Jul 9 23:34:58.357974 kubelet[1791]: E0709 23:34:58.357845 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:34:58.514704 containerd[1484]: time="2025-07-09T23:34:58.513797282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:34:58.515033 containerd[1484]: time="2025-07-09T23:34:58.514829558Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 9 23:34:58.515892 containerd[1484]: time="2025-07-09T23:34:58.515841496Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:34:58.520174 containerd[1484]: time="2025-07-09T23:34:58.520037950Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.381509111s" Jul 9 23:34:58.520174 containerd[1484]: time="2025-07-09T23:34:58.520099323Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 9 23:34:58.520661 containerd[1484]: time="2025-07-09T23:34:58.520341928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:34:58.522054 containerd[1484]: time="2025-07-09T23:34:58.522014092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 9 23:34:58.522811 containerd[1484]: time="2025-07-09T23:34:58.522770629Z" level=info msg="CreateContainer within sandbox 
\"1610ff3647dca4d7df344f71d184b8b2f0353bb974d14c1d904377092c638adb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:34:58.541680 containerd[1484]: time="2025-07-09T23:34:58.541613173Z" level=info msg="CreateContainer within sandbox \"1610ff3647dca4d7df344f71d184b8b2f0353bb974d14c1d904377092c638adb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"75e2147a064744023c7fcb5d0fa3ef80037073e67c62bea2093a5964fee5a19a\"" Jul 9 23:34:58.542558 containerd[1484]: time="2025-07-09T23:34:58.542523465Z" level=info msg="StartContainer for \"75e2147a064744023c7fcb5d0fa3ef80037073e67c62bea2093a5964fee5a19a\"" Jul 9 23:34:58.571408 systemd[1]: Started cri-containerd-75e2147a064744023c7fcb5d0fa3ef80037073e67c62bea2093a5964fee5a19a.scope - libcontainer container 75e2147a064744023c7fcb5d0fa3ef80037073e67c62bea2093a5964fee5a19a. Jul 9 23:34:58.603299 containerd[1484]: time="2025-07-09T23:34:58.603088302Z" level=info msg="StartContainer for \"75e2147a064744023c7fcb5d0fa3ef80037073e67c62bea2093a5964fee5a19a\" returns successfully" Jul 9 23:34:58.676556 kubelet[1791]: I0709 23:34:58.676478 1791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fg9dt" podStartSLOduration=4.292927287 podStartE2EDuration="5.67646152s" podCreationTimestamp="2025-07-09 23:34:53 +0000 UTC" firstStartedPulling="2025-07-09 23:34:57.137649634 +0000 UTC m=+5.079841760" lastFinishedPulling="2025-07-09 23:34:58.521183907 +0000 UTC m=+6.463375993" observedRunningTime="2025-07-09 23:34:58.676282815 +0000 UTC m=+6.618474941" watchObservedRunningTime="2025-07-09 23:34:58.67646152 +0000 UTC m=+6.618653646" Jul 9 23:34:58.702904 kubelet[1791]: E0709 23:34:58.702756 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.702904 kubelet[1791]: W0709 23:34:58.702785 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.702904 kubelet[1791]: E0709 23:34:58.702817 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.703450 kubelet[1791]: E0709 23:34:58.703273 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.703450 kubelet[1791]: W0709 23:34:58.703290 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.703450 kubelet[1791]: E0709 23:34:58.703339 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:34:58.703838 kubelet[1791]: E0709 23:34:58.703594 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.703838 kubelet[1791]: W0709 23:34:58.703607 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.703838 kubelet[1791]: E0709 23:34:58.703618 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.703998 kubelet[1791]: E0709 23:34:58.703986 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.704183 kubelet[1791]: W0709 23:34:58.704099 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.704183 kubelet[1791]: E0709 23:34:58.704114 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.704452 kubelet[1791]: E0709 23:34:58.704429 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.704528 kubelet[1791]: W0709 23:34:58.704515 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.704618 kubelet[1791]: E0709 23:34:58.704570 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.704962 kubelet[1791]: E0709 23:34:58.704898 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.704962 kubelet[1791]: W0709 23:34:58.704912 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.704962 kubelet[1791]: E0709 23:34:58.704922 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.705332 kubelet[1791]: E0709 23:34:58.705217 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.705332 kubelet[1791]: W0709 23:34:58.705230 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.705332 kubelet[1791]: E0709 23:34:58.705247 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:34:58.705503 kubelet[1791]: E0709 23:34:58.705491 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.705569 kubelet[1791]: W0709 23:34:58.705558 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.705711 kubelet[1791]: E0709 23:34:58.705636 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.705971 kubelet[1791]: E0709 23:34:58.705958 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.706116 kubelet[1791]: W0709 23:34:58.706044 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.706116 kubelet[1791]: E0709 23:34:58.706061 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.706349 kubelet[1791]: E0709 23:34:58.706331 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.706510 kubelet[1791]: W0709 23:34:58.706401 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.706510 kubelet[1791]: E0709 23:34:58.706415 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.706701 kubelet[1791]: E0709 23:34:58.706689 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.706754 kubelet[1791]: W0709 23:34:58.706745 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.706818 kubelet[1791]: E0709 23:34:58.706807 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.707072 kubelet[1791]: E0709 23:34:58.707033 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.707072 kubelet[1791]: W0709 23:34:58.707045 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.707072 kubelet[1791]: E0709 23:34:58.707054 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:34:58.707526 kubelet[1791]: E0709 23:34:58.707422 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.707526 kubelet[1791]: W0709 23:34:58.707434 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.707526 kubelet[1791]: E0709 23:34:58.707444 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.707717 kubelet[1791]: E0709 23:34:58.707705 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.707892 kubelet[1791]: W0709 23:34:58.707795 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.707892 kubelet[1791]: E0709 23:34:58.707810 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.708066 kubelet[1791]: E0709 23:34:58.708054 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.708131 kubelet[1791]: W0709 23:34:58.708119 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.708230 kubelet[1791]: E0709 23:34:58.708217 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.708501 kubelet[1791]: E0709 23:34:58.708449 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.708501 kubelet[1791]: W0709 23:34:58.708471 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.708501 kubelet[1791]: E0709 23:34:58.708482 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.708886 kubelet[1791]: E0709 23:34:58.708794 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.708886 kubelet[1791]: W0709 23:34:58.708805 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.708886 kubelet[1791]: E0709 23:34:58.708815 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:34:58.709558 kubelet[1791]: E0709 23:34:58.709186 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.709558 kubelet[1791]: W0709 23:34:58.709209 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.709558 kubelet[1791]: E0709 23:34:58.709221 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.710253 kubelet[1791]: E0709 23:34:58.710148 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.710253 kubelet[1791]: W0709 23:34:58.710186 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.710253 kubelet[1791]: E0709 23:34:58.710201 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.710701 kubelet[1791]: E0709 23:34:58.710686 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.710910 kubelet[1791]: W0709 23:34:58.710787 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.710910 kubelet[1791]: E0709 23:34:58.710807 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.711324 kubelet[1791]: E0709 23:34:58.711308 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.711430 kubelet[1791]: W0709 23:34:58.711415 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.711610 kubelet[1791]: E0709 23:34:58.711505 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.711893 kubelet[1791]: E0709 23:34:58.711880 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.711956 kubelet[1791]: W0709 23:34:58.711944 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.712128 kubelet[1791]: E0709 23:34:58.712013 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:34:58.712266 kubelet[1791]: E0709 23:34:58.712253 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.712366 kubelet[1791]: W0709 23:34:58.712353 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.712431 kubelet[1791]: E0709 23:34:58.712420 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.712726 kubelet[1791]: E0709 23:34:58.712706 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.712781 kubelet[1791]: W0709 23:34:58.712727 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.712781 kubelet[1791]: E0709 23:34:58.712749 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.712925 kubelet[1791]: E0709 23:34:58.712914 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.712925 kubelet[1791]: W0709 23:34:58.712925 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.713003 kubelet[1791]: E0709 23:34:58.712939 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.713099 kubelet[1791]: E0709 23:34:58.713089 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.713099 kubelet[1791]: W0709 23:34:58.713099 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.713244 kubelet[1791]: E0709 23:34:58.713112 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.713328 kubelet[1791]: E0709 23:34:58.713317 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.713328 kubelet[1791]: W0709 23:34:58.713328 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.713395 kubelet[1791]: E0709 23:34:58.713341 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:34:58.713761 kubelet[1791]: E0709 23:34:58.713651 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.713761 kubelet[1791]: W0709 23:34:58.713666 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.713761 kubelet[1791]: E0709 23:34:58.713686 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.714114 kubelet[1791]: E0709 23:34:58.714010 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.714114 kubelet[1791]: W0709 23:34:58.714022 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.714114 kubelet[1791]: E0709 23:34:58.714043 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.714641 kubelet[1791]: E0709 23:34:58.714498 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.714641 kubelet[1791]: W0709 23:34:58.714515 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.714641 kubelet[1791]: E0709 23:34:58.714535 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.714949 kubelet[1791]: E0709 23:34:58.714777 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.714949 kubelet[1791]: W0709 23:34:58.714794 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.714949 kubelet[1791]: E0709 23:34:58.714808 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:34:58.715120 kubelet[1791]: E0709 23:34:58.715106 1791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:34:58.715193 kubelet[1791]: W0709 23:34:58.715162 1791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:34:58.715289 kubelet[1791]: E0709 23:34:58.715277 1791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:34:59.358997 kubelet[1791]: E0709 23:34:59.358954 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:34:59.391023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176644918.mount: Deactivated successfully. Jul 9 23:34:59.456682 containerd[1484]: time="2025-07-09T23:34:59.456607582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:34:59.457589 containerd[1484]: time="2025-07-09T23:34:59.457411712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Jul 9 23:34:59.458528 containerd[1484]: time="2025-07-09T23:34:59.458484232Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:34:59.461725 containerd[1484]: time="2025-07-09T23:34:59.461200306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:34:59.463488 containerd[1484]: time="2025-07-09T23:34:59.463442244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 941.381702ms" Jul 9 23:34:59.463695 containerd[1484]: time="2025-07-09T23:34:59.463676224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 9 23:34:59.466523 containerd[1484]: time="2025-07-09T23:34:59.466343007Z" level=info msg="CreateContainer within sandbox \"b5e3276d394b0a7ae0b8ef986ad08b18b65a22bfc5fec333159ad88c37772409\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 9 23:34:59.485626 containerd[1484]: time="2025-07-09T23:34:59.485576389Z" level=info msg="CreateContainer within sandbox \"b5e3276d394b0a7ae0b8ef986ad08b18b65a22bfc5fec333159ad88c37772409\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3311c8f766f6db8c74f63341a5145e9173af79df6a802dece57d64b5a871967c\"" Jul 9 23:34:59.487897 containerd[1484]: time="2025-07-09T23:34:59.486214519Z" level=info msg="StartContainer for \"3311c8f766f6db8c74f63341a5145e9173af79df6a802dece57d64b5a871967c\"" Jul 9 23:34:59.511403 systemd[1]: Started cri-containerd-3311c8f766f6db8c74f63341a5145e9173af79df6a802dece57d64b5a871967c.scope - libcontainer container 3311c8f766f6db8c74f63341a5145e9173af79df6a802dece57d64b5a871967c. Jul 9 23:34:59.545290 containerd[1484]: time="2025-07-09T23:34:59.545134336Z" level=info msg="StartContainer for \"3311c8f766f6db8c74f63341a5145e9173af79df6a802dece57d64b5a871967c\" returns successfully" Jul 9 23:34:59.581178 systemd[1]: cri-containerd-3311c8f766f6db8c74f63341a5145e9173af79df6a802dece57d64b5a871967c.scope: Deactivated successfully. 
Jul 9 23:34:59.636599 kubelet[1791]: E0709 23:34:59.635243 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j68cs" podUID="9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72" Jul 9 23:34:59.787409 containerd[1484]: time="2025-07-09T23:34:59.787127244Z" level=info msg="shim disconnected" id=3311c8f766f6db8c74f63341a5145e9173af79df6a802dece57d64b5a871967c namespace=k8s.io Jul 9 23:34:59.787409 containerd[1484]: time="2025-07-09T23:34:59.787207898Z" level=warning msg="cleaning up after shim disconnected" id=3311c8f766f6db8c74f63341a5145e9173af79df6a802dece57d64b5a871967c namespace=k8s.io Jul 9 23:34:59.787409 containerd[1484]: time="2025-07-09T23:34:59.787218205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:35:00.360138 kubelet[1791]: E0709 23:35:00.360094 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:00.662949 containerd[1484]: time="2025-07-09T23:35:00.662715196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 9 23:35:01.361082 kubelet[1791]: E0709 23:35:01.361034 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:01.636108 kubelet[1791]: E0709 23:35:01.635673 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j68cs" podUID="9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72" Jul 9 23:35:02.361362 kubelet[1791]: E0709 23:35:02.361298 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:02.932138 containerd[1484]: time="2025-07-09T23:35:02.932057263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:02.933637 containerd[1484]: time="2025-07-09T23:35:02.933591974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 9 23:35:02.935792 containerd[1484]: time="2025-07-09T23:35:02.935749524Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:02.941454 containerd[1484]: time="2025-07-09T23:35:02.941376811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:02.942284 containerd[1484]: time="2025-07-09T23:35:02.942150621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.279392321s" Jul 9 23:35:02.942284 containerd[1484]: time="2025-07-09T23:35:02.942200850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference 
\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 9 23:35:02.944429 containerd[1484]: time="2025-07-09T23:35:02.944387144Z" level=info msg="CreateContainer within sandbox \"b5e3276d394b0a7ae0b8ef986ad08b18b65a22bfc5fec333159ad88c37772409\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 9 23:35:02.973226 containerd[1484]: time="2025-07-09T23:35:02.973144894Z" level=info msg="CreateContainer within sandbox \"b5e3276d394b0a7ae0b8ef986ad08b18b65a22bfc5fec333159ad88c37772409\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a\"" Jul 9 23:35:02.974220 containerd[1484]: time="2025-07-09T23:35:02.974078372Z" level=info msg="StartContainer for \"029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a\"" Jul 9 23:35:03.002383 systemd[1]: Started cri-containerd-029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a.scope - libcontainer container 029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a. Jul 9 23:35:03.039835 containerd[1484]: time="2025-07-09T23:35:03.039777082Z" level=info msg="StartContainer for \"029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a\" returns successfully" Jul 9 23:35:03.362284 kubelet[1791]: E0709 23:35:03.362229 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:03.595443 containerd[1484]: time="2025-07-09T23:35:03.595382937Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:35:03.597558 systemd[1]: cri-containerd-029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a.scope: Deactivated successfully. Jul 9 23:35:03.598296 systemd[1]: cri-containerd-029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a.scope: Consumed 501ms CPU time, 190M memory peak, 165.8M written to disk. Jul 9 23:35:03.618554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a-rootfs.mount: Deactivated successfully. 
Jul 9 23:35:03.635783 kubelet[1791]: E0709 23:35:03.635728 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j68cs" podUID="9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72" Jul 9 23:35:03.669796 kubelet[1791]: I0709 23:35:03.669762 1791 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 23:35:03.819766 containerd[1484]: time="2025-07-09T23:35:03.819691108Z" level=info msg="shim disconnected" id=029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a namespace=k8s.io Jul 9 23:35:03.819766 containerd[1484]: time="2025-07-09T23:35:03.819757645Z" level=warning msg="cleaning up after shim disconnected" id=029bd9ee0fc6cec1ea91e069b5791a713fb727ba410a4a2d39c48ecd28c0dc8a namespace=k8s.io Jul 9 23:35:03.819766 containerd[1484]: time="2025-07-09T23:35:03.819767745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:35:04.254714 systemd[1]: Created slice kubepods-besteffort-pod2f9b95c7_7584_46e8_96cc_f70c580e69a9.slice - libcontainer container kubepods-besteffort-pod2f9b95c7_7584_46e8_96cc_f70c580e69a9.slice. Jul 9 23:35:04.362910 kubelet[1791]: E0709 23:35:04.362826 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:04.370144 kubelet[1791]: I0709 23:35:04.370089 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f9b95c7-7584-46e8-96cc-f70c580e69a9-whisker-ca-bundle\") pod \"whisker-5888cd88b7-6dcnq\" (UID: \"2f9b95c7-7584-46e8-96cc-f70c580e69a9\") " pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:04.370144 kubelet[1791]: I0709 23:35:04.370131 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ctmp\" (UniqueName: \"kubernetes.io/projected/2f9b95c7-7584-46e8-96cc-f70c580e69a9-kube-api-access-8ctmp\") pod \"whisker-5888cd88b7-6dcnq\" (UID: \"2f9b95c7-7584-46e8-96cc-f70c580e69a9\") " pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:04.370144 kubelet[1791]: I0709 23:35:04.370155 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f9b95c7-7584-46e8-96cc-f70c580e69a9-whisker-backend-key-pair\") pod \"whisker-5888cd88b7-6dcnq\" (UID: \"2f9b95c7-7584-46e8-96cc-f70c580e69a9\") " pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:04.558755 containerd[1484]: time="2025-07-09T23:35:04.558575859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:0,}" Jul 9 23:35:04.701394 containerd[1484]: time="2025-07-09T23:35:04.701229527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 9 23:35:05.364560 kubelet[1791]: E0709 23:35:05.364350 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:05.645828 systemd[1]: Created slice kubepods-besteffort-pod9a1d5f75_fc9c_42c6_bd3d_df9580bb8b72.slice - libcontainer container kubepods-besteffort-pod9a1d5f75_fc9c_42c6_bd3d_df9580bb8b72.slice. 
Jul 9 23:35:05.655754 containerd[1484]: time="2025-07-09T23:35:05.655713099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:0,}" Jul 9 23:35:05.819655 containerd[1484]: time="2025-07-09T23:35:05.818816529Z" level=error msg="Failed to destroy network for sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:05.819655 containerd[1484]: time="2025-07-09T23:35:05.819193848Z" level=error msg="encountered an error cleaning up failed sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:05.819655 containerd[1484]: time="2025-07-09T23:35:05.819416889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:05.820326 kubelet[1791]: E0709 23:35:05.819619 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:05.820326 kubelet[1791]: E0709 23:35:05.819686 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:05.820326 kubelet[1791]: E0709 23:35:05.819707 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:05.820720 kubelet[1791]: E0709 23:35:05.819745 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5888cd88b7-6dcnq_calico-system(2f9b95c7-7584-46e8-96cc-f70c580e69a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5888cd88b7-6dcnq_calico-system(2f9b95c7-7584-46e8-96cc-f70c580e69a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5888cd88b7-6dcnq" podUID="2f9b95c7-7584-46e8-96cc-f70c580e69a9" Jul 9 23:35:05.822402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c-shm.mount: Deactivated successfully. Jul 9 23:35:05.989902 containerd[1484]: time="2025-07-09T23:35:05.989843418Z" level=error msg="Failed to destroy network for sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:05.991803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87-shm.mount: Deactivated successfully. Jul 9 23:35:05.994276 containerd[1484]: time="2025-07-09T23:35:05.993251953Z" level=error msg="encountered an error cleaning up failed sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:05.994276 containerd[1484]: time="2025-07-09T23:35:05.993336585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:05.994496 kubelet[1791]: E0709 23:35:05.993581 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:05.994496 kubelet[1791]: E0709 23:35:05.993639 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j68cs" Jul 9 23:35:05.994496 kubelet[1791]: E0709 23:35:05.993673 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j68cs" Jul 9 23:35:05.994699 kubelet[1791]: 
E0709 23:35:05.993718 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j68cs_calico-system(9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j68cs_calico-system(9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j68cs" podUID="9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72" Jul 9 23:35:06.365556 kubelet[1791]: E0709 23:35:06.365433 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:06.708849 kubelet[1791]: I0709 23:35:06.708471 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87" Jul 9 23:35:06.709877 containerd[1484]: time="2025-07-09T23:35:06.709525951Z" level=info msg="StopPodSandbox for \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\"" Jul 9 23:35:06.709877 containerd[1484]: time="2025-07-09T23:35:06.709687504Z" level=info msg="Ensure that sandbox 68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87 in task-service has been cleanup successfully" Jul 9 23:35:06.709877 containerd[1484]: time="2025-07-09T23:35:06.709877064Z" level=info msg="TearDown network for sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\" successfully" Jul 9 23:35:06.710788 containerd[1484]: time="2025-07-09T23:35:06.709893131Z" level=info msg="StopPodSandbox for \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\" returns successfully" Jul 9 23:35:06.710926 kubelet[1791]: I0709 23:35:06.709907 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c" Jul 9 23:35:06.712083 containerd[1484]: time="2025-07-09T23:35:06.711258275Z" level=info msg="StopPodSandbox for \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\"" Jul 9 23:35:06.712083 containerd[1484]: time="2025-07-09T23:35:06.711421270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:1,}" Jul 9 23:35:06.712083 containerd[1484]: time="2025-07-09T23:35:06.711947518Z" level=info msg="Ensure that sandbox 5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c in task-service has been cleanup successfully" Jul 9 23:35:06.713530 containerd[1484]: time="2025-07-09T23:35:06.712899765Z" level=info msg="TearDown network for sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\" successfully" Jul 9 23:35:06.713530 containerd[1484]: time="2025-07-09T23:35:06.712923925Z" level=info msg="StopPodSandbox for \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\" returns successfully" Jul 9 23:35:06.715342 containerd[1484]: time="2025-07-09T23:35:06.714835992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:1,}" Jul 9 23:35:06.713919 systemd[1]: 
run-netns-cni\x2ddebdb28e\x2ddc04\x2dcd56\x2dc444\x2db208836c1ec0.mount: Deactivated successfully. Jul 9 23:35:06.714019 systemd[1]: run-netns-cni\x2def97727f\x2d48c3\x2d86cb\x2d677a\x2dd1ff56e4913e.mount: Deactivated successfully. Jul 9 23:35:07.003777 containerd[1484]: time="2025-07-09T23:35:07.003720612Z" level=error msg="Failed to destroy network for sandbox \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.005851 containerd[1484]: time="2025-07-09T23:35:07.005799341Z" level=error msg="encountered an error cleaning up failed sandbox \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.006011 containerd[1484]: time="2025-07-09T23:35:07.005989361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.006811 kubelet[1791]: E0709 23:35:07.006459 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.006811 kubelet[1791]: E0709 23:35:07.006524 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j68cs" Jul 9 23:35:07.006811 kubelet[1791]: E0709 23:35:07.006544 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j68cs" Jul 9 23:35:07.007046 kubelet[1791]: E0709 23:35:07.006583 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j68cs_calico-system(9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j68cs_calico-system(9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j68cs" podUID="9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72" Jul 9 23:35:07.010378 containerd[1484]: time="2025-07-09T23:35:07.010325862Z" level=error msg="Failed to destroy network for sandbox \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.011106 containerd[1484]: time="2025-07-09T23:35:07.010959265Z" level=error msg="encountered an error cleaning up failed sandbox \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.011106 containerd[1484]: time="2025-07-09T23:35:07.011017677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.011743 kubelet[1791]: E0709 23:35:07.011572 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.011743 kubelet[1791]: E0709 23:35:07.011637 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:07.011743 kubelet[1791]: E0709 23:35:07.011655 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:07.012367 kubelet[1791]: E0709 23:35:07.011690 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5888cd88b7-6dcnq_calico-system(2f9b95c7-7584-46e8-96cc-f70c580e69a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5888cd88b7-6dcnq_calico-system(2f9b95c7-7584-46e8-96cc-f70c580e69a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5888cd88b7-6dcnq" podUID="2f9b95c7-7584-46e8-96cc-f70c580e69a9" Jul 9 23:35:07.254952 systemd[1]: Created slice kubepods-besteffort-podd457ae52_d230_4815_86e7_7ea55c129cf4.slice - libcontainer container kubepods-besteffort-podd457ae52_d230_4815_86e7_7ea55c129cf4.slice. Jul 9 23:35:07.317270 kubelet[1791]: I0709 23:35:07.316973 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44flc\" (UniqueName: \"kubernetes.io/projected/d457ae52-d230-4815-86e7-7ea55c129cf4-kube-api-access-44flc\") pod \"nginx-deployment-7fcdb87857-gqh5f\" (UID: \"d457ae52-d230-4815-86e7-7ea55c129cf4\") " pod="default/nginx-deployment-7fcdb87857-gqh5f" Jul 9 23:35:07.365775 kubelet[1791]: E0709 23:35:07.365733 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:07.561837 containerd[1484]: time="2025-07-09T23:35:07.561401681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gqh5f,Uid:d457ae52-d230-4815-86e7-7ea55c129cf4,Namespace:default,Attempt:0,}" Jul 9 23:35:07.701655 containerd[1484]: time="2025-07-09T23:35:07.701599259Z" level=error msg="Failed to destroy network for sandbox \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.702251 containerd[1484]: time="2025-07-09T23:35:07.702216395Z" level=error msg="encountered an error cleaning up failed sandbox \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.702718 containerd[1484]: time="2025-07-09T23:35:07.702659937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gqh5f,Uid:d457ae52-d230-4815-86e7-7ea55c129cf4,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.703403 kubelet[1791]: E0709 23:35:07.702983 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.703403 kubelet[1791]: E0709 23:35:07.703042 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gqh5f" Jul 9 23:35:07.703403 kubelet[1791]: E0709 23:35:07.703062 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gqh5f" Jul 9 23:35:07.703560 kubelet[1791]: E0709 23:35:07.703135 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-gqh5f_default(d457ae52-d230-4815-86e7-7ea55c129cf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-gqh5f_default(d457ae52-d230-4815-86e7-7ea55c129cf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-gqh5f" podUID="d457ae52-d230-4815-86e7-7ea55c129cf4" Jul 9 23:35:07.716318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772-shm.mount: Deactivated successfully. Jul 9 23:35:07.716442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03-shm.mount: Deactivated successfully. 
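Every RunPodSandbox failure above has the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that, per the hint in the error text itself, is only present once the calico/node container is running and has mounted /var/lib/calico/ from the host. Until that happens, every pod-network add and delete is rejected with the identical error no matter which pod triggered it. The sketch below shows what such a readiness check looks like; it is hypothetical code (readNodename and nodenameFile are invented names, and this is not Calico's actual implementation), with only the path and the wording of the hint taken from the log.

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path the plugin is shown stat()ing in the errors above.
// The calico/node container is expected to write it after it has started and
// mounted /var/lib/calico/ from the host.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the failure mode in the log: until the file exists,
// os.Stat returns ENOENT and the CNI ADD/DEL request fails with the same
// error every time it is retried.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Println("CNI not ready:", err)
		return
	}
	fmt.Println("node name:", name)
}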
Jul 9 23:35:07.719777 kubelet[1791]: I0709 23:35:07.719011 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03" Jul 9 23:35:07.719877 containerd[1484]: time="2025-07-09T23:35:07.719836353Z" level=info msg="StopPodSandbox for \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\"" Jul 9 23:35:07.720221 containerd[1484]: time="2025-07-09T23:35:07.720032864Z" level=info msg="Ensure that sandbox 5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03 in task-service has been cleanup successfully" Jul 9 23:35:07.720750 containerd[1484]: time="2025-07-09T23:35:07.720368034Z" level=info msg="TearDown network for sandbox \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\" successfully" Jul 9 23:35:07.720750 containerd[1484]: time="2025-07-09T23:35:07.720391591Z" level=info msg="StopPodSandbox for \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\" returns successfully" Jul 9 23:35:07.720934 kubelet[1791]: I0709 23:35:07.720887 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2" Jul 9 23:35:07.724013 containerd[1484]: time="2025-07-09T23:35:07.721488327Z" level=info msg="StopPodSandbox for \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\"" Jul 9 23:35:07.724013 containerd[1484]: time="2025-07-09T23:35:07.721546298Z" level=info msg="StopPodSandbox for \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\"" Jul 9 23:35:07.724013 containerd[1484]: time="2025-07-09T23:35:07.721631633Z" level=info msg="TearDown network for sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\" successfully" Jul 9 23:35:07.724013 containerd[1484]: time="2025-07-09T23:35:07.721642450Z" level=info msg="StopPodSandbox for \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\" returns successfully" Jul 9 23:35:07.724013 containerd[1484]: time="2025-07-09T23:35:07.721673500Z" level=info msg="Ensure that sandbox 82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2 in task-service has been cleanup successfully" Jul 9 23:35:07.721676 systemd[1]: run-netns-cni\x2de8426c76\x2d660b\x2d3ec0\x2d0450\x2d491bf20dde3f.mount: Deactivated successfully. Jul 9 23:35:07.723747 systemd[1]: run-netns-cni\x2ddd3a0645\x2d5c46\x2d4269\x2dc4b7\x2d6825ec048c84.mount: Deactivated successfully. 
Jul 9 23:35:07.725780 containerd[1484]: time="2025-07-09T23:35:07.725108334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:2,}" Jul 9 23:35:07.725969 containerd[1484]: time="2025-07-09T23:35:07.725940651Z" level=info msg="TearDown network for sandbox \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\" successfully" Jul 9 23:35:07.726043 containerd[1484]: time="2025-07-09T23:35:07.726027909Z" level=info msg="StopPodSandbox for \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\" returns successfully" Jul 9 23:35:07.727338 containerd[1484]: time="2025-07-09T23:35:07.727299761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gqh5f,Uid:d457ae52-d230-4815-86e7-7ea55c129cf4,Namespace:default,Attempt:1,}" Jul 9 23:35:07.729514 kubelet[1791]: I0709 23:35:07.729014 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772" Jul 9 23:35:07.729772 containerd[1484]: time="2025-07-09T23:35:07.729730728Z" level=info msg="StopPodSandbox for \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\"" Jul 9 23:35:07.729927 containerd[1484]: time="2025-07-09T23:35:07.729902880Z" level=info msg="Ensure that sandbox 62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772 in task-service has been cleanup successfully" Jul 9 23:35:07.731592 systemd[1]: run-netns-cni\x2d633c74a8\x2d9a79\x2d438c\x2dab15\x2d5fd45229610d.mount: Deactivated successfully. Jul 9 23:35:07.732369 containerd[1484]: time="2025-07-09T23:35:07.732326955Z" level=info msg="TearDown network for sandbox \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\" successfully" Jul 9 23:35:07.732369 containerd[1484]: time="2025-07-09T23:35:07.732364094Z" level=info msg="StopPodSandbox for \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\" returns successfully" Jul 9 23:35:07.733670 containerd[1484]: time="2025-07-09T23:35:07.732760081Z" level=info msg="StopPodSandbox for \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\"" Jul 9 23:35:07.733670 containerd[1484]: time="2025-07-09T23:35:07.732845135Z" level=info msg="TearDown network for sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\" successfully" Jul 9 23:35:07.733670 containerd[1484]: time="2025-07-09T23:35:07.732856433Z" level=info msg="StopPodSandbox for \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\" returns successfully" Jul 9 23:35:07.734437 containerd[1484]: time="2025-07-09T23:35:07.734333610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:2,}" Jul 9 23:35:07.850798 containerd[1484]: time="2025-07-09T23:35:07.850633737Z" level=error msg="Failed to destroy network for sandbox \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.852420 containerd[1484]: time="2025-07-09T23:35:07.851871776Z" level=error msg="encountered an error cleaning up failed sandbox \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.852420 containerd[1484]: time="2025-07-09T23:35:07.851960076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.852730 kubelet[1791]: E0709 23:35:07.852196 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.852730 kubelet[1791]: E0709 23:35:07.852253 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:07.852730 kubelet[1791]: E0709 23:35:07.852276 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:07.852847 kubelet[1791]: E0709 23:35:07.852314 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5888cd88b7-6dcnq_calico-system(2f9b95c7-7584-46e8-96cc-f70c580e69a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5888cd88b7-6dcnq_calico-system(2f9b95c7-7584-46e8-96cc-f70c580e69a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5888cd88b7-6dcnq" podUID="2f9b95c7-7584-46e8-96cc-f70c580e69a9" Jul 9 23:35:07.856613 containerd[1484]: time="2025-07-09T23:35:07.856400542Z" level=error msg="Failed to destroy network for sandbox \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.857219 containerd[1484]: time="2025-07-09T23:35:07.857184141Z" level=error msg="encountered an error cleaning up failed sandbox \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.857290 containerd[1484]: time="2025-07-09T23:35:07.857261223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.857536 kubelet[1791]: E0709 23:35:07.857489 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.857587 kubelet[1791]: E0709 23:35:07.857563 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j68cs" Jul 9 23:35:07.857610 kubelet[1791]: E0709 23:35:07.857584 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j68cs" Jul 9 23:35:07.857658 kubelet[1791]: E0709 23:35:07.857634 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j68cs_calico-system(9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j68cs_calico-system(9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j68cs" podUID="9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72" Jul 9 23:35:07.867622 containerd[1484]: time="2025-07-09T23:35:07.867543812Z" level=error msg="Failed to destroy network for sandbox \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.868261 containerd[1484]: time="2025-07-09T23:35:07.868213392Z" level=error msg="encountered an error cleaning up failed sandbox 
\"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.868361 containerd[1484]: time="2025-07-09T23:35:07.868289632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gqh5f,Uid:d457ae52-d230-4815-86e7-7ea55c129cf4,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.868609 kubelet[1791]: E0709 23:35:07.868566 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:07.868660 kubelet[1791]: E0709 23:35:07.868629 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gqh5f" Jul 9 23:35:07.868660 kubelet[1791]: E0709 23:35:07.868648 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gqh5f" Jul 9 23:35:07.868778 kubelet[1791]: E0709 23:35:07.868694 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-gqh5f_default(d457ae52-d230-4815-86e7-7ea55c129cf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-gqh5f_default(d457ae52-d230-4815-86e7-7ea55c129cf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-gqh5f" podUID="d457ae52-d230-4815-86e7-7ea55c129cf4" Jul 9 23:35:08.366631 kubelet[1791]: E0709 23:35:08.366596 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:08.714523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e-shm.mount: Deactivated successfully. 
Jul 9 23:35:08.733935 kubelet[1791]: I0709 23:35:08.733861 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686" Jul 9 23:35:08.735397 containerd[1484]: time="2025-07-09T23:35:08.735105258Z" level=info msg="StopPodSandbox for \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\"" Jul 9 23:35:08.735397 containerd[1484]: time="2025-07-09T23:35:08.735356791Z" level=info msg="Ensure that sandbox 5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686 in task-service has been cleanup successfully" Jul 9 23:35:08.735804 containerd[1484]: time="2025-07-09T23:35:08.735577439Z" level=info msg="TearDown network for sandbox \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\" successfully" Jul 9 23:35:08.735804 containerd[1484]: time="2025-07-09T23:35:08.735595626Z" level=info msg="StopPodSandbox for \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\" returns successfully" Jul 9 23:35:08.736526 containerd[1484]: time="2025-07-09T23:35:08.736496802Z" level=info msg="StopPodSandbox for \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\"" Jul 9 23:35:08.736624 containerd[1484]: time="2025-07-09T23:35:08.736592665Z" level=info msg="TearDown network for sandbox \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\" successfully" Jul 9 23:35:08.736624 containerd[1484]: time="2025-07-09T23:35:08.736604202Z" level=info msg="StopPodSandbox for \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\" returns successfully" Jul 9 23:35:08.737359 containerd[1484]: time="2025-07-09T23:35:08.736950836Z" level=info msg="StopPodSandbox for \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\"" Jul 9 23:35:08.737359 containerd[1484]: time="2025-07-09T23:35:08.737048180Z" level=info msg="TearDown network for sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\" successfully" Jul 9 23:35:08.737359 containerd[1484]: time="2025-07-09T23:35:08.737059317Z" level=info msg="StopPodSandbox for \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\" returns successfully" Jul 9 23:35:08.737507 containerd[1484]: time="2025-07-09T23:35:08.737490877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:3,}" Jul 9 23:35:08.737628 systemd[1]: run-netns-cni\x2d31797342\x2d8a25\x2d4d4f\x2d451a\x2d6205f3cd8190.mount: Deactivated successfully. 
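Each failed attempt is handled the same way in the entries above: the kubelet notices the orphaned sandbox, containerd tears its network down, the per-sandbox netns mount is released, and RunPodSandbox is reissued with the Attempt counter incremented (0, 1, 2, 3 ...). The retries are spaced out rather than issued in a tight loop. The sketch below only illustrates that general backoff pattern under assumed delays; retrySandbox and its timings are invented and this is not the kubelet's actual retry code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retrySandbox keeps recreating a pod sandbox until the network plugin
// becomes ready, doubling the delay between attempts up to a cap. The
// attempt number plays the same role as the Attempt field in the
// RunPodSandbox metadata seen in the log.
func retrySandbox(run func(attempt int) error) int {
	delay := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		err := run(attempt)
		if err == nil {
			return attempt
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		if delay < 10*time.Second {
			delay *= 2
		}
	}
}

func main() {
	// Simulate the CNI plugin becoming ready on the fourth attempt.
	readyAt := 3
	final := retrySandbox(func(attempt int) error {
		if attempt < readyAt {
			return errors.New("failed to setup network for sandbox")
		}
		return nil
	})
	fmt.Println("sandbox created on attempt", final)
}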
Jul 9 23:35:08.738198 kubelet[1791]: I0709 23:35:08.738145 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e" Jul 9 23:35:08.739064 containerd[1484]: time="2025-07-09T23:35:08.739023350Z" level=info msg="StopPodSandbox for \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\"" Jul 9 23:35:08.739765 containerd[1484]: time="2025-07-09T23:35:08.739538635Z" level=info msg="Ensure that sandbox 1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e in task-service has been cleanup successfully" Jul 9 23:35:08.739976 containerd[1484]: time="2025-07-09T23:35:08.739947802Z" level=info msg="TearDown network for sandbox \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\" successfully" Jul 9 23:35:08.740193 containerd[1484]: time="2025-07-09T23:35:08.740058886Z" level=info msg="StopPodSandbox for \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\" returns successfully" Jul 9 23:35:08.741541 containerd[1484]: time="2025-07-09T23:35:08.741372195Z" level=info msg="StopPodSandbox for \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\"" Jul 9 23:35:08.741541 containerd[1484]: time="2025-07-09T23:35:08.741473505Z" level=info msg="TearDown network for sandbox \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\" successfully" Jul 9 23:35:08.741541 containerd[1484]: time="2025-07-09T23:35:08.741483480Z" level=info msg="StopPodSandbox for \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\" returns successfully" Jul 9 23:35:08.741705 systemd[1]: run-netns-cni\x2d97600760\x2d1d90\x2d429f\x2d77e1\x2d252f931a7ec9.mount: Deactivated successfully. Jul 9 23:35:08.742067 kubelet[1791]: I0709 23:35:08.741897 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75" Jul 9 23:35:08.742133 containerd[1484]: time="2025-07-09T23:35:08.742008298Z" level=info msg="StopPodSandbox for \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\"" Jul 9 23:35:08.742133 containerd[1484]: time="2025-07-09T23:35:08.742097511Z" level=info msg="TearDown network for sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\" successfully" Jul 9 23:35:08.742133 containerd[1484]: time="2025-07-09T23:35:08.742108247Z" level=info msg="StopPodSandbox for \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\" returns successfully" Jul 9 23:35:08.742744 containerd[1484]: time="2025-07-09T23:35:08.742571293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:3,}" Jul 9 23:35:08.742744 containerd[1484]: time="2025-07-09T23:35:08.742710260Z" level=info msg="StopPodSandbox for \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\"" Jul 9 23:35:08.742876 containerd[1484]: time="2025-07-09T23:35:08.742853512Z" level=info msg="Ensure that sandbox 3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75 in task-service has been cleanup successfully" Jul 9 23:35:08.743256 containerd[1484]: time="2025-07-09T23:35:08.743220256Z" level=info msg="TearDown network for sandbox \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\" successfully" Jul 9 23:35:08.743256 containerd[1484]: time="2025-07-09T23:35:08.743247977Z" level=info msg="StopPodSandbox for 
\"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\" returns successfully" Jul 9 23:35:08.743855 containerd[1484]: time="2025-07-09T23:35:08.743705176Z" level=info msg="StopPodSandbox for \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\"" Jul 9 23:35:08.743855 containerd[1484]: time="2025-07-09T23:35:08.743788139Z" level=info msg="TearDown network for sandbox \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\" successfully" Jul 9 23:35:08.743855 containerd[1484]: time="2025-07-09T23:35:08.743797833Z" level=info msg="StopPodSandbox for \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\" returns successfully" Jul 9 23:35:08.744532 systemd[1]: run-netns-cni\x2dc18ba347\x2d7e2e\x2ddde2\x2d408b\x2dc449aadbc83e.mount: Deactivated successfully. Jul 9 23:35:08.744910 containerd[1484]: time="2025-07-09T23:35:08.744573704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gqh5f,Uid:d457ae52-d230-4815-86e7-7ea55c129cf4,Namespace:default,Attempt:2,}" Jul 9 23:35:08.951308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851238134.mount: Deactivated successfully. Jul 9 23:35:09.058206 containerd[1484]: time="2025-07-09T23:35:09.058135819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:09.062515 containerd[1484]: time="2025-07-09T23:35:09.062417614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 9 23:35:09.067381 containerd[1484]: time="2025-07-09T23:35:09.067328925Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:09.073654 containerd[1484]: time="2025-07-09T23:35:09.073330392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:09.073993 containerd[1484]: time="2025-07-09T23:35:09.073961390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.372682855s" Jul 9 23:35:09.074037 containerd[1484]: time="2025-07-09T23:35:09.073994115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 9 23:35:09.089402 containerd[1484]: time="2025-07-09T23:35:09.089341622Z" level=info msg="CreateContainer within sandbox \"b5e3276d394b0a7ae0b8ef986ad08b18b65a22bfc5fec333159ad88c37772409\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 9 23:35:09.123122 containerd[1484]: time="2025-07-09T23:35:09.123064004Z" level=error msg="Failed to destroy network for sandbox \"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.123466 containerd[1484]: time="2025-07-09T23:35:09.123423625Z" level=error 
msg="encountered an error cleaning up failed sandbox \"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.123535 containerd[1484]: time="2025-07-09T23:35:09.123492120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.123761 kubelet[1791]: E0709 23:35:09.123717 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.123829 kubelet[1791]: E0709 23:35:09.123789 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:09.123829 kubelet[1791]: E0709 23:35:09.123812 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5888cd88b7-6dcnq" Jul 9 23:35:09.123883 kubelet[1791]: E0709 23:35:09.123851 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5888cd88b7-6dcnq_calico-system(2f9b95c7-7584-46e8-96cc-f70c580e69a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5888cd88b7-6dcnq_calico-system(2f9b95c7-7584-46e8-96cc-f70c580e69a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5888cd88b7-6dcnq" podUID="2f9b95c7-7584-46e8-96cc-f70c580e69a9" Jul 9 23:35:09.131062 containerd[1484]: time="2025-07-09T23:35:09.130988867Z" level=error msg="Failed to destroy network for sandbox \"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.131387 containerd[1484]: 
time="2025-07-09T23:35:09.131286561Z" level=error msg="Failed to destroy network for sandbox \"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.131387 containerd[1484]: time="2025-07-09T23:35:09.131349168Z" level=error msg="encountered an error cleaning up failed sandbox \"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.131499 containerd[1484]: time="2025-07-09T23:35:09.131426996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.131705 kubelet[1791]: E0709 23:35:09.131658 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.131768 kubelet[1791]: E0709 23:35:09.131724 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j68cs" Jul 9 23:35:09.131768 kubelet[1791]: E0709 23:35:09.131748 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j68cs" Jul 9 23:35:09.131822 kubelet[1791]: E0709 23:35:09.131788 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j68cs_calico-system(9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j68cs_calico-system(9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j68cs" podUID="9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72" Jul 9 
23:35:09.132147 containerd[1484]: time="2025-07-09T23:35:09.131971674Z" level=error msg="encountered an error cleaning up failed sandbox \"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.132147 containerd[1484]: time="2025-07-09T23:35:09.132061278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gqh5f,Uid:d457ae52-d230-4815-86e7-7ea55c129cf4,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.132380 kubelet[1791]: E0709 23:35:09.132230 1791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:35:09.132380 kubelet[1791]: E0709 23:35:09.132256 1791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gqh5f" Jul 9 23:35:09.132380 kubelet[1791]: E0709 23:35:09.132300 1791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gqh5f" Jul 9 23:35:09.132530 kubelet[1791]: E0709 23:35:09.132330 1791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-gqh5f_default(d457ae52-d230-4815-86e7-7ea55c129cf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-gqh5f_default(d457ae52-d230-4815-86e7-7ea55c129cf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-gqh5f" podUID="d457ae52-d230-4815-86e7-7ea55c129cf4" Jul 9 23:35:09.136065 containerd[1484]: time="2025-07-09T23:35:09.136012454Z" level=info msg="CreateContainer within sandbox \"b5e3276d394b0a7ae0b8ef986ad08b18b65a22bfc5fec333159ad88c37772409\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"5236147eacdfb98a3678d7828da3f8dfa8af6eb9a2ceaa0913b800ed10c0c6fd\"" Jul 9 23:35:09.138127 containerd[1484]: time="2025-07-09T23:35:09.136564542Z" level=info msg="StartContainer for \"5236147eacdfb98a3678d7828da3f8dfa8af6eb9a2ceaa0913b800ed10c0c6fd\"" Jul 9 23:35:09.166411 systemd[1]: Started cri-containerd-5236147eacdfb98a3678d7828da3f8dfa8af6eb9a2ceaa0913b800ed10c0c6fd.scope - libcontainer container 5236147eacdfb98a3678d7828da3f8dfa8af6eb9a2ceaa0913b800ed10c0c6fd. Jul 9 23:35:09.197850 containerd[1484]: time="2025-07-09T23:35:09.197800151Z" level=info msg="StartContainer for \"5236147eacdfb98a3678d7828da3f8dfa8af6eb9a2ceaa0913b800ed10c0c6fd\" returns successfully" Jul 9 23:35:09.368064 kubelet[1791]: E0709 23:35:09.367326 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:09.395939 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 9 23:35:09.396070 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 9 23:35:09.748255 kubelet[1791]: I0709 23:35:09.748226 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641" Jul 9 23:35:09.748878 containerd[1484]: time="2025-07-09T23:35:09.748791698Z" level=info msg="StopPodSandbox for \"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\"" Jul 9 23:35:09.749581 containerd[1484]: time="2025-07-09T23:35:09.748969626Z" level=info msg="Ensure that sandbox 28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641 in task-service has been cleanup successfully" Jul 9 23:35:09.750599 systemd[1]: run-netns-cni\x2d071e937b\x2d358d\x2d39cd\x2db5c9\x2d313a9fbb953a.mount: Deactivated successfully. 
Jul 9 23:35:09.751290 kubelet[1791]: I0709 23:35:09.750618 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b" Jul 9 23:35:09.751333 containerd[1484]: time="2025-07-09T23:35:09.750645116Z" level=info msg="TearDown network for sandbox \"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\" successfully" Jul 9 23:35:09.751333 containerd[1484]: time="2025-07-09T23:35:09.750678803Z" level=info msg="StopPodSandbox for \"28d409c1eac49df324d79b9bb3961b292840d9dea7e53d8d481e89e5febf1641\" returns successfully" Jul 9 23:35:09.751968 containerd[1484]: time="2025-07-09T23:35:09.751548012Z" level=info msg="StopPodSandbox for \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\"" Jul 9 23:35:09.751968 containerd[1484]: time="2025-07-09T23:35:09.751690810Z" level=info msg="StopPodSandbox for \"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\"" Jul 9 23:35:09.751968 containerd[1484]: time="2025-07-09T23:35:09.751820190Z" level=info msg="TearDown network for sandbox \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\" successfully" Jul 9 23:35:09.751968 containerd[1484]: time="2025-07-09T23:35:09.751834050Z" level=info msg="Ensure that sandbox ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b in task-service has been cleanup successfully" Jul 9 23:35:09.752371 containerd[1484]: time="2025-07-09T23:35:09.751834931Z" level=info msg="StopPodSandbox for \"5eb418cde0b13b5a39bf6a8b81d2633339801e568233a7170c5920502136b686\" returns successfully" Jul 9 23:35:09.752371 containerd[1484]: time="2025-07-09T23:35:09.752342437Z" level=info msg="TearDown network for sandbox \"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\" successfully" Jul 9 23:35:09.752371 containerd[1484]: time="2025-07-09T23:35:09.752361063Z" level=info msg="StopPodSandbox for \"ed7345634733a9fa221fa8c2650ac42e3dcd71366afa421ac282dae6d8f2c62b\" returns successfully" Jul 9 23:35:09.752995 containerd[1484]: time="2025-07-09T23:35:09.752835523Z" level=info msg="StopPodSandbox for \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\"" Jul 9 23:35:09.752995 containerd[1484]: time="2025-07-09T23:35:09.752921562Z" level=info msg="TearDown network for sandbox \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\" successfully" Jul 9 23:35:09.752995 containerd[1484]: time="2025-07-09T23:35:09.752932538Z" level=info msg="StopPodSandbox for \"1b95d2dce72e1c98e61c19cf6616f88a042bbe424c377c23034247a67eb4cc9e\" returns successfully" Jul 9 23:35:09.753478 containerd[1484]: time="2025-07-09T23:35:09.752940188Z" level=info msg="StopPodSandbox for \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\"" Jul 9 23:35:09.753478 containerd[1484]: time="2025-07-09T23:35:09.753119598Z" level=info msg="TearDown network for sandbox \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\" successfully" Jul 9 23:35:09.753478 containerd[1484]: time="2025-07-09T23:35:09.753129612Z" level=info msg="StopPodSandbox for \"62baa500ae1d2cdf4d9c055d2bc7d9062b0b56bfdb4087333f2f131120a2e772\" returns successfully" Jul 9 23:35:09.753400 systemd[1]: run-netns-cni\x2dca1c075f\x2da3e5\x2deae8\x2d5383\x2da5ee57bed6ec.mount: Deactivated successfully. 
Jul 9 23:35:09.753643 containerd[1484]: time="2025-07-09T23:35:09.753538500Z" level=info msg="StopPodSandbox for \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\"" Jul 9 23:35:09.753643 containerd[1484]: time="2025-07-09T23:35:09.753541905Z" level=info msg="StopPodSandbox for \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\"" Jul 9 23:35:09.753643 containerd[1484]: time="2025-07-09T23:35:09.753619293Z" level=info msg="TearDown network for sandbox \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\" successfully" Jul 9 23:35:09.753643 containerd[1484]: time="2025-07-09T23:35:09.753629867Z" level=info msg="StopPodSandbox for \"5fc3d56edd1a465c9dac20bba1da1aa0db9ed8055fd0c93e3592c6eb7a4eef03\" returns successfully" Jul 9 23:35:09.753870 containerd[1484]: time="2025-07-09T23:35:09.753701807Z" level=info msg="TearDown network for sandbox \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\" successfully" Jul 9 23:35:09.753870 containerd[1484]: time="2025-07-09T23:35:09.753712743Z" level=info msg="StopPodSandbox for \"68df3ccb6a3bdcce7fb3bcd166ffba6123eebdc493b120f50cb5967766578c87\" returns successfully" Jul 9 23:35:09.754398 containerd[1484]: time="2025-07-09T23:35:09.754372981Z" level=info msg="StopPodSandbox for \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\"" Jul 9 23:35:09.754491 containerd[1484]: time="2025-07-09T23:35:09.754474803Z" level=info msg="TearDown network for sandbox \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\" successfully" Jul 9 23:35:09.754491 containerd[1484]: time="2025-07-09T23:35:09.754488662Z" level=info msg="StopPodSandbox for \"5ff0e6dffda6b01809b6f494975f93bcfb388674161a5b0166c8dd31df8b943c\" returns successfully" Jul 9 23:35:09.754643 containerd[1484]: time="2025-07-09T23:35:09.754624210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:4,}" Jul 9 23:35:09.755730 kubelet[1791]: I0709 23:35:09.755707 1791 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff" Jul 9 23:35:09.755906 containerd[1484]: time="2025-07-09T23:35:09.755877073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:4,}" Jul 9 23:35:09.756244 containerd[1484]: time="2025-07-09T23:35:09.756146448Z" level=info msg="StopPodSandbox for \"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\"" Jul 9 23:35:09.756336 containerd[1484]: time="2025-07-09T23:35:09.756317485Z" level=info msg="Ensure that sandbox 1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff in task-service has been cleanup successfully" Jul 9 23:35:09.756487 containerd[1484]: time="2025-07-09T23:35:09.756470418Z" level=info msg="TearDown network for sandbox \"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\" successfully" Jul 9 23:35:09.756574 containerd[1484]: time="2025-07-09T23:35:09.756486240Z" level=info msg="StopPodSandbox for \"1113a54c5f963434fe2cb38ac95cfa11277b618aa156a1c9e33be781fd5c5aff\" returns successfully" Jul 9 23:35:09.756882 containerd[1484]: time="2025-07-09T23:35:09.756861082Z" level=info msg="StopPodSandbox for \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\"" Jul 9 23:35:09.756955 containerd[1484]: time="2025-07-09T23:35:09.756939791Z" 
level=info msg="TearDown network for sandbox \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\" successfully" Jul 9 23:35:09.756988 containerd[1484]: time="2025-07-09T23:35:09.756953891Z" level=info msg="StopPodSandbox for \"3d4845b076ea396fdf8031df2f96c5a83735c7b43fa00d24a53cb7b4101dfc75\" returns successfully" Jul 9 23:35:09.757749 systemd[1]: run-netns-cni\x2dd0d23da1\x2d007e\x2d7c09\x2d06c1\x2d03a0bc00bd96.mount: Deactivated successfully. Jul 9 23:35:09.758401 containerd[1484]: time="2025-07-09T23:35:09.758369500Z" level=info msg="StopPodSandbox for \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\"" Jul 9 23:35:09.758488 containerd[1484]: time="2025-07-09T23:35:09.758474486Z" level=info msg="TearDown network for sandbox \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\" successfully" Jul 9 23:35:09.758518 containerd[1484]: time="2025-07-09T23:35:09.758487984Z" level=info msg="StopPodSandbox for \"82888d0e912853d2f1fe797a6fce680953fb5936cf723d427cc7e3894c683cd2\" returns successfully" Jul 9 23:35:09.758939 containerd[1484]: time="2025-07-09T23:35:09.758905845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gqh5f,Uid:d457ae52-d230-4815-86e7-7ea55c129cf4,Namespace:default,Attempt:3,}" Jul 9 23:35:09.807208 kubelet[1791]: I0709 23:35:09.807000 1791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m5mc2" podStartSLOduration=4.871393434 podStartE2EDuration="16.806981752s" podCreationTimestamp="2025-07-09 23:34:53 +0000 UTC" firstStartedPulling="2025-07-09 23:34:57.144625328 +0000 UTC m=+5.086817454" lastFinishedPulling="2025-07-09 23:35:09.080213686 +0000 UTC m=+17.022405772" observedRunningTime="2025-07-09 23:35:09.806510897 +0000 UTC m=+17.748703023" watchObservedRunningTime="2025-07-09 23:35:09.806981752 +0000 UTC m=+17.749173878" Jul 9 23:35:10.077768 systemd-networkd[1398]: cali92aed0a5738: Link UP Jul 9 23:35:10.080330 systemd-networkd[1398]: cali92aed0a5738: Gained carrier Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:09.863 [INFO][2734] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:09.883 [INFO][2734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0 nginx-deployment-7fcdb87857- default d457ae52-d230-4815-86e7-7ea55c129cf4 1247 0 2025-07-09 23:35:07 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.19 nginx-deployment-7fcdb87857-gqh5f eth0 default [] [] [kns.default ksa.default.default] cali92aed0a5738 [] [] }} ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Namespace="default" Pod="nginx-deployment-7fcdb87857-gqh5f" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:09.883 [INFO][2734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Namespace="default" Pod="nginx-deployment-7fcdb87857-gqh5f" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2766] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" HandleID="k8s-pod-network.813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Workload="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2766] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" HandleID="k8s-pod-network.813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Workload="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004261b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.19", "pod":"nginx-deployment-7fcdb87857-gqh5f", "timestamp":"2025-07-09 23:35:10.015224969 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2766] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.026 [INFO][2766] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" host="10.0.0.19" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.035 [INFO][2766] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.19" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.042 [INFO][2766] ipam/ipam.go 511: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.045 [INFO][2766] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.048 [INFO][2766] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.048 [INFO][2766] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" host="10.0.0.19" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.051 [INFO][2766] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45 Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.060 [INFO][2766] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" host="10.0.0.19" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.069 [INFO][2766] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.1/26] block=192.168.37.0/26 handle="k8s-pod-network.813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" host="10.0.0.19" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.069 [INFO][2766] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.1/26] 
handle="k8s-pod-network.813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" host="10.0.0.19" Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.069 [INFO][2766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:35:10.090720 containerd[1484]: 2025-07-09 23:35:10.069 [INFO][2766] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.1/26] IPv6=[] ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" HandleID="k8s-pod-network.813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Workload="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" Jul 9 23:35:10.091305 containerd[1484]: 2025-07-09 23:35:10.072 [INFO][2734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Namespace="default" Pod="nginx-deployment-7fcdb87857-gqh5f" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"d457ae52-d230-4815-86e7-7ea55c129cf4", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 35, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-gqh5f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali92aed0a5738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:10.091305 containerd[1484]: 2025-07-09 23:35:10.072 [INFO][2734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.1/32] ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Namespace="default" Pod="nginx-deployment-7fcdb87857-gqh5f" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" Jul 9 23:35:10.091305 containerd[1484]: 2025-07-09 23:35:10.072 [INFO][2734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92aed0a5738 ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Namespace="default" Pod="nginx-deployment-7fcdb87857-gqh5f" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" Jul 9 23:35:10.091305 containerd[1484]: 2025-07-09 23:35:10.077 [INFO][2734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Namespace="default" Pod="nginx-deployment-7fcdb87857-gqh5f" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" Jul 9 23:35:10.091305 containerd[1484]: 2025-07-09 23:35:10.077 [INFO][2734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Namespace="default" Pod="nginx-deployment-7fcdb87857-gqh5f" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"d457ae52-d230-4815-86e7-7ea55c129cf4", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 35, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45", Pod:"nginx-deployment-7fcdb87857-gqh5f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali92aed0a5738", MAC:"6e:ed:34:7e:da:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:10.091305 containerd[1484]: 2025-07-09 23:35:10.089 [INFO][2734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45" Namespace="default" Pod="nginx-deployment-7fcdb87857-gqh5f" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--7fcdb87857--gqh5f-eth0" Jul 9 23:35:10.106091 containerd[1484]: time="2025-07-09T23:35:10.105956126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:35:10.106091 containerd[1484]: time="2025-07-09T23:35:10.106031184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:35:10.106091 containerd[1484]: time="2025-07-09T23:35:10.106043200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:10.106346 containerd[1484]: time="2025-07-09T23:35:10.106145653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:10.125373 systemd[1]: Started cri-containerd-813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45.scope - libcontainer container 813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45. 
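The IPAM trace above shows the order Calico follows for the nginx pod: take the host-wide IPAM lock, confirm the host's affinity for block 192.168.37.0/26, claim the first free address (192.168.37.1), write the block to record the handle, and release the lock. The Go sketch below is a simplified, hypothetical illustration of that order only; it is not Calico's ipam package, and the block/handle types are invented for the example.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block is a toy stand-in for a Calico IPAM affinity block such as
// 192.168.37.0/26: a CIDR owned by one host plus a used-address map.
type block struct {
	cidr *net.IPNet
	used map[string]string // IP -> handle (e.g. k8s-pod-network.<containerID>)
}

var hostLock sync.Mutex // stands in for the "host-wide IPAM lock" in the trace

// assign mirrors the logged sequence: acquire the host-wide lock, walk the
// affine block, hand out the first free address, and record the handle.
func assign(b *block, handle string) (net.IP, error) {
	hostLock.Lock()
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	base := b.cidr.IP.Mask(b.cidr.Mask)
	for i := 1; i < 64; i++ { // a /26 holds 64 addresses; skip the network address
		cand := make(net.IP, len(base))
		copy(cand, base)
		cand[len(cand)-1] += byte(i)
		if !b.cidr.Contains(cand) {
			break
		}
		if _, taken := b.used[cand.String()]; !taken {
			b.used[cand.String()] = handle // "Writing block in order to claim IPs"
			return cand, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.37.0/26")
	b := &block{cidr: cidr, used: map[string]string{}}
	ip, _ := assign(b, "k8s-pod-network.813cc146…") // nginx pod  -> 192.168.37.1
	fmt.Println(ip)
	ip, _ = assign(b, "k8s-pod-network.97aa205f…") // whisker pod -> 192.168.37.2
	fmt.Println(ip)
}
```

Because every CNI ADD serializes on the same host-wide lock, the three concurrent requests in the trace ([2766], [2767], [2769]) acquire it one after another, which is why their address claims (.1, .2, .3) land in sequence.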
Jul 9 23:35:10.137358 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:35:10.157433 containerd[1484]: time="2025-07-09T23:35:10.157394323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gqh5f,Uid:d457ae52-d230-4815-86e7-7ea55c129cf4,Namespace:default,Attempt:3,} returns sandbox id \"813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45\"" Jul 9 23:35:10.158991 containerd[1484]: time="2025-07-09T23:35:10.158765271Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 9 23:35:10.249434 systemd-networkd[1398]: cali368d7169f83: Link UP Jul 9 23:35:10.249616 systemd-networkd[1398]: cali368d7169f83: Gained carrier Jul 9 23:35:10.368718 kubelet[1791]: E0709 23:35:10.368481 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:09.866 [INFO][2748] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:09.884 [INFO][2748] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0 whisker-5888cd88b7- calico-system 2f9b95c7-7584-46e8-96cc-f70c580e69a9 1218 0 2025-07-09 23:35:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5888cd88b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 10.0.0.19 whisker-5888cd88b7-6dcnq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali368d7169f83 [] [] }} ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Namespace="calico-system" Pod="whisker-5888cd88b7-6dcnq" WorkloadEndpoint="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:09.884 [INFO][2748] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Namespace="calico-system" Pod="whisker-5888cd88b7-6dcnq" WorkloadEndpoint="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2767] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" HandleID="k8s-pod-network.97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Workload="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2767] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" HandleID="k8s-pod-network.97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Workload="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c41a0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.19", "pod":"whisker-5888cd88b7-6dcnq", "timestamp":"2025-07-09 23:35:10.015178549 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.015 
[INFO][2767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.069 [INFO][2767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.069 [INFO][2767] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.127 [INFO][2767] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" host="10.0.0.19" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.135 [INFO][2767] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.19" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.142 [INFO][2767] ipam/ipam.go 511: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.145 [INFO][2767] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.150 [INFO][2767] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.150 [INFO][2767] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" host="10.0.0.19" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.155 [INFO][2767] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3 Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.169 [INFO][2767] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" host="10.0.0.19" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.246 [INFO][2767] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.2/26] block=192.168.37.0/26 handle="k8s-pod-network.97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" host="10.0.0.19" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.246 [INFO][2767] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.2/26] handle="k8s-pod-network.97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" host="10.0.0.19" Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.246 [INFO][2767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 23:35:10.405111 containerd[1484]: 2025-07-09 23:35:10.246 [INFO][2767] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.2/26] IPv6=[] ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" HandleID="k8s-pod-network.97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Workload="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" Jul 9 23:35:10.405678 containerd[1484]: 2025-07-09 23:35:10.247 [INFO][2748] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Namespace="calico-system" Pod="whisker-5888cd88b7-6dcnq" WorkloadEndpoint="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0", GenerateName:"whisker-5888cd88b7-", Namespace:"calico-system", SelfLink:"", UID:"2f9b95c7-7584-46e8-96cc-f70c580e69a9", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5888cd88b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"whisker-5888cd88b7-6dcnq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali368d7169f83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:10.405678 containerd[1484]: 2025-07-09 23:35:10.247 [INFO][2748] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.2/32] ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Namespace="calico-system" Pod="whisker-5888cd88b7-6dcnq" WorkloadEndpoint="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" Jul 9 23:35:10.405678 containerd[1484]: 2025-07-09 23:35:10.248 [INFO][2748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali368d7169f83 ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Namespace="calico-system" Pod="whisker-5888cd88b7-6dcnq" WorkloadEndpoint="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" Jul 9 23:35:10.405678 containerd[1484]: 2025-07-09 23:35:10.249 [INFO][2748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Namespace="calico-system" Pod="whisker-5888cd88b7-6dcnq" WorkloadEndpoint="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" Jul 9 23:35:10.405678 containerd[1484]: 2025-07-09 23:35:10.250 [INFO][2748] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Namespace="calico-system" Pod="whisker-5888cd88b7-6dcnq" WorkloadEndpoint="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0", GenerateName:"whisker-5888cd88b7-", Namespace:"calico-system", SelfLink:"", UID:"2f9b95c7-7584-46e8-96cc-f70c580e69a9", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5888cd88b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3", Pod:"whisker-5888cd88b7-6dcnq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali368d7169f83", MAC:"42:92:15:f4:a4:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:10.405678 containerd[1484]: 2025-07-09 23:35:10.403 [INFO][2748] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3" Namespace="calico-system" Pod="whisker-5888cd88b7-6dcnq" WorkloadEndpoint="10.0.0.19-k8s-whisker--5888cd88b7--6dcnq-eth0" Jul 9 23:35:10.573774 containerd[1484]: time="2025-07-09T23:35:10.573681697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:35:10.573774 containerd[1484]: time="2025-07-09T23:35:10.573743137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:35:10.573774 containerd[1484]: time="2025-07-09T23:35:10.573758557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:10.573924 containerd[1484]: time="2025-07-09T23:35:10.573836058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:10.594412 systemd[1]: Started cri-containerd-97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3.scope - libcontainer container 97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3. 
Jul 9 23:35:10.604884 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:35:10.613236 systemd-networkd[1398]: calib4ee17e4aa4: Link UP Jul 9 23:35:10.614249 systemd-networkd[1398]: calib4ee17e4aa4: Gained carrier Jul 9 23:35:10.627305 containerd[1484]: time="2025-07-09T23:35:10.627117379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5888cd88b7-6dcnq,Uid:2f9b95c7-7584-46e8-96cc-f70c580e69a9,Namespace:calico-system,Attempt:4,} returns sandbox id \"97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3\"" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:09.855 [INFO][2723] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:09.885 [INFO][2723] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-csi--node--driver--j68cs-eth0 csi-node-driver- calico-system 9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72 1098 0 2025-07-09 23:34:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.19 csi-node-driver-j68cs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib4ee17e4aa4 [] [] }} ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Namespace="calico-system" Pod="csi-node-driver-j68cs" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--j68cs-" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:09.885 [INFO][2723] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Namespace="calico-system" Pod="csi-node-driver-j68cs" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2769] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" HandleID="k8s-pod-network.767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Workload="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2769] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" HandleID="k8s-pod-network.767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Workload="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.19", "pod":"csi-node-driver-j68cs", "timestamp":"2025-07-09 23:35:10.015180832 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.015 [INFO][2769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.246 [INFO][2769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.246 [INFO][2769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.405 [INFO][2769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" host="10.0.0.19" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.412 [INFO][2769] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.19" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.417 [INFO][2769] ipam/ipam.go 511: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.420 [INFO][2769] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.423 [INFO][2769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.423 [INFO][2769] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" host="10.0.0.19" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.425 [INFO][2769] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.462 [INFO][2769] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" host="10.0.0.19" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.608 [INFO][2769] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.3/26] block=192.168.37.0/26 handle="k8s-pod-network.767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" host="10.0.0.19" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.608 [INFO][2769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.3/26] handle="k8s-pod-network.767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" host="10.0.0.19" Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.608 [INFO][2769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 23:35:10.640081 containerd[1484]: 2025-07-09 23:35:10.608 [INFO][2769] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.3/26] IPv6=[] ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" HandleID="k8s-pod-network.767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Workload="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" Jul 9 23:35:10.640826 containerd[1484]: 2025-07-09 23:35:10.610 [INFO][2723] cni-plugin/k8s.go 418: Populated endpoint ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Namespace="calico-system" Pod="csi-node-driver-j68cs" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-csi--node--driver--j68cs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 34, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"csi-node-driver-j68cs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib4ee17e4aa4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:10.640826 containerd[1484]: 2025-07-09 23:35:10.610 [INFO][2723] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.3/32] ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Namespace="calico-system" Pod="csi-node-driver-j68cs" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" Jul 9 23:35:10.640826 containerd[1484]: 2025-07-09 23:35:10.610 [INFO][2723] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4ee17e4aa4 ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Namespace="calico-system" Pod="csi-node-driver-j68cs" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" Jul 9 23:35:10.640826 containerd[1484]: 2025-07-09 23:35:10.614 [INFO][2723] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Namespace="calico-system" Pod="csi-node-driver-j68cs" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" Jul 9 23:35:10.640826 containerd[1484]: 2025-07-09 23:35:10.614 [INFO][2723] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Namespace="calico-system" Pod="csi-node-driver-j68cs" 
WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-csi--node--driver--j68cs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 34, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b", Pod:"csi-node-driver-j68cs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib4ee17e4aa4", MAC:"2e:b6:01:1f:d4:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:10.640826 containerd[1484]: 2025-07-09 23:35:10.637 [INFO][2723] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b" Namespace="calico-system" Pod="csi-node-driver-j68cs" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--j68cs-eth0" Jul 9 23:35:10.658235 containerd[1484]: time="2025-07-09T23:35:10.657870282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:35:10.658235 containerd[1484]: time="2025-07-09T23:35:10.657944619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:35:10.658235 containerd[1484]: time="2025-07-09T23:35:10.657978543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:10.658545 containerd[1484]: time="2025-07-09T23:35:10.658149646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:10.697970 systemd[1]: Started cri-containerd-767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b.scope - libcontainer container 767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b. 
Jul 9 23:35:10.730427 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:35:10.755040 containerd[1484]: time="2025-07-09T23:35:10.754996058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j68cs,Uid:9a1d5f75-fc9c-42c6-bd3d-df9580bb8b72,Namespace:calico-system,Attempt:4,} returns sandbox id \"767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b\"" Jul 9 23:35:10.776640 kubelet[1791]: I0709 23:35:10.776598 1791 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:35:10.895198 kernel: bpftool[3060]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 9 23:35:11.059289 systemd-networkd[1398]: vxlan.calico: Link UP Jul 9 23:35:11.059301 systemd-networkd[1398]: vxlan.calico: Gained carrier Jul 9 23:35:11.369796 kubelet[1791]: E0709 23:35:11.369662 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:11.724223 systemd-networkd[1398]: cali368d7169f83: Gained IPv6LL Jul 9 23:35:11.978463 systemd-networkd[1398]: cali92aed0a5738: Gained IPv6LL Jul 9 23:35:12.298849 systemd-networkd[1398]: calib4ee17e4aa4: Gained IPv6LL Jul 9 23:35:12.361318 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Jul 9 23:35:12.369876 kubelet[1791]: E0709 23:35:12.369827 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:12.467834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149464103.mount: Deactivated successfully. Jul 9 23:35:13.355274 kubelet[1791]: E0709 23:35:13.355226 1791 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:13.370613 kubelet[1791]: E0709 23:35:13.370565 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:13.440006 containerd[1484]: time="2025-07-09T23:35:13.439946430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:13.440421 containerd[1484]: time="2025-07-09T23:35:13.440392149Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69964585" Jul 9 23:35:13.441294 containerd[1484]: time="2025-07-09T23:35:13.441258200Z" level=info msg="ImageCreate event name:\"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:13.445344 containerd[1484]: time="2025-07-09T23:35:13.445279922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:13.446431 containerd[1484]: time="2025-07-09T23:35:13.445853899Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd\", size \"69964463\" in 3.287052262s" Jul 9 23:35:13.446431 containerd[1484]: time="2025-07-09T23:35:13.445890898Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 9 23:35:13.447430 containerd[1484]: time="2025-07-09T23:35:13.447397598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 9 23:35:13.448469 containerd[1484]: time="2025-07-09T23:35:13.448335686Z" level=info msg="CreateContainer within sandbox \"813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 9 23:35:13.460724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899782297.mount: Deactivated successfully. Jul 9 23:35:13.461225 containerd[1484]: time="2025-07-09T23:35:13.460982198Z" level=info msg="CreateContainer within sandbox \"813cc146b844b5a215b61b506360c53c0d740ba103253d7f928a6ec54dd60c45\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9956dac56c004dd3e964d050b1a3387a47515c6fefdd0b29ccf4ee4c2077a80a\"" Jul 9 23:35:13.461616 containerd[1484]: time="2025-07-09T23:35:13.461589250Z" level=info msg="StartContainer for \"9956dac56c004dd3e964d050b1a3387a47515c6fefdd0b29ccf4ee4c2077a80a\"" Jul 9 23:35:13.546352 systemd[1]: Started cri-containerd-9956dac56c004dd3e964d050b1a3387a47515c6fefdd0b29ccf4ee4c2077a80a.scope - libcontainer container 9956dac56c004dd3e964d050b1a3387a47515c6fefdd0b29ccf4ee4c2077a80a. Jul 9 23:35:13.575580 containerd[1484]: time="2025-07-09T23:35:13.575462514Z" level=info msg="StartContainer for \"9956dac56c004dd3e964d050b1a3387a47515c6fefdd0b29ccf4ee4c2077a80a\" returns successfully" Jul 9 23:35:14.370738 kubelet[1791]: E0709 23:35:14.370686 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:14.612652 containerd[1484]: time="2025-07-09T23:35:14.612595534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:14.614049 containerd[1484]: time="2025-07-09T23:35:14.613654801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 9 23:35:14.617220 containerd[1484]: time="2025-07-09T23:35:14.616579388Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:14.619664 containerd[1484]: time="2025-07-09T23:35:14.619508740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:14.620357 containerd[1484]: time="2025-07-09T23:35:14.620202118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.172758312s" Jul 9 23:35:14.620357 containerd[1484]: time="2025-07-09T23:35:14.620244601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 9 23:35:14.621942 containerd[1484]: time="2025-07-09T23:35:14.621724252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 9 23:35:14.623652 containerd[1484]: 
time="2025-07-09T23:35:14.623388209Z" level=info msg="CreateContainer within sandbox \"97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 9 23:35:14.639007 containerd[1484]: time="2025-07-09T23:35:14.638945565Z" level=info msg="CreateContainer within sandbox \"97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"dd2acf7d636077f66b4c7569bc04878dd325f58b03ca2bb738edae8f8f241617\"" Jul 9 23:35:14.639557 containerd[1484]: time="2025-07-09T23:35:14.639534398Z" level=info msg="StartContainer for \"dd2acf7d636077f66b4c7569bc04878dd325f58b03ca2bb738edae8f8f241617\"" Jul 9 23:35:14.678421 systemd[1]: Started cri-containerd-dd2acf7d636077f66b4c7569bc04878dd325f58b03ca2bb738edae8f8f241617.scope - libcontainer container dd2acf7d636077f66b4c7569bc04878dd325f58b03ca2bb738edae8f8f241617. Jul 9 23:35:14.725121 containerd[1484]: time="2025-07-09T23:35:14.725076273Z" level=info msg="StartContainer for \"dd2acf7d636077f66b4c7569bc04878dd325f58b03ca2bb738edae8f8f241617\" returns successfully" Jul 9 23:35:15.371677 kubelet[1791]: E0709 23:35:15.371625 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:15.686131 containerd[1484]: time="2025-07-09T23:35:15.685992199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:15.691625 containerd[1484]: time="2025-07-09T23:35:15.691325558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 9 23:35:15.694407 containerd[1484]: time="2025-07-09T23:35:15.694050051Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:15.697082 containerd[1484]: time="2025-07-09T23:35:15.696821350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:15.697785 containerd[1484]: time="2025-07-09T23:35:15.697655458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.075889204s" Jul 9 23:35:15.697785 containerd[1484]: time="2025-07-09T23:35:15.697693213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 9 23:35:15.699340 containerd[1484]: time="2025-07-09T23:35:15.699303214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 9 23:35:15.700198 containerd[1484]: time="2025-07-09T23:35:15.700143328Z" level=info msg="CreateContainer within sandbox \"767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 9 23:35:15.726452 containerd[1484]: time="2025-07-09T23:35:15.726389323Z" level=info msg="CreateContainer within sandbox \"767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b\" 
for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eabc83d8ee5d3b597266b5d1bced0f004add708c5732722158e9750efba98ad8\"" Jul 9 23:35:15.727611 containerd[1484]: time="2025-07-09T23:35:15.726986727Z" level=info msg="StartContainer for \"eabc83d8ee5d3b597266b5d1bced0f004add708c5732722158e9750efba98ad8\"" Jul 9 23:35:15.757278 systemd[1]: run-containerd-runc-k8s.io-eabc83d8ee5d3b597266b5d1bced0f004add708c5732722158e9750efba98ad8-runc.CGRZGG.mount: Deactivated successfully. Jul 9 23:35:15.769450 systemd[1]: Started cri-containerd-eabc83d8ee5d3b597266b5d1bced0f004add708c5732722158e9750efba98ad8.scope - libcontainer container eabc83d8ee5d3b597266b5d1bced0f004add708c5732722158e9750efba98ad8. Jul 9 23:35:15.817199 containerd[1484]: time="2025-07-09T23:35:15.817014138Z" level=info msg="StartContainer for \"eabc83d8ee5d3b597266b5d1bced0f004add708c5732722158e9750efba98ad8\" returns successfully" Jul 9 23:35:16.372146 kubelet[1791]: E0709 23:35:16.372091 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:16.961424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860403295.mount: Deactivated successfully. Jul 9 23:35:16.985622 containerd[1484]: time="2025-07-09T23:35:16.985550557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:16.986692 containerd[1484]: time="2025-07-09T23:35:16.986625830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 9 23:35:16.988896 containerd[1484]: time="2025-07-09T23:35:16.988840271Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:17.003118 containerd[1484]: time="2025-07-09T23:35:17.002142719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:17.003118 containerd[1484]: time="2025-07-09T23:35:17.002963521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.30362091s" Jul 9 23:35:17.003118 containerd[1484]: time="2025-07-09T23:35:17.002996588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 9 23:35:17.004398 containerd[1484]: time="2025-07-09T23:35:17.004344668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 9 23:35:17.005701 containerd[1484]: time="2025-07-09T23:35:17.005668567Z" level=info msg="CreateContainer within sandbox \"97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 9 23:35:17.132097 containerd[1484]: time="2025-07-09T23:35:17.132052598Z" level=info msg="CreateContainer within sandbox \"97aa205f80b217286f635632103c1f8d7631f359fa65c035054f87d0021eb7f3\" for 
&ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"284388822154eaf3f88ceefefa5e9ade6563dd629f0a758de765d28f8ab3c6e3\"" Jul 9 23:35:17.132965 containerd[1484]: time="2025-07-09T23:35:17.132927204Z" level=info msg="StartContainer for \"284388822154eaf3f88ceefefa5e9ade6563dd629f0a758de765d28f8ab3c6e3\"" Jul 9 23:35:17.159391 systemd[1]: Started cri-containerd-284388822154eaf3f88ceefefa5e9ade6563dd629f0a758de765d28f8ab3c6e3.scope - libcontainer container 284388822154eaf3f88ceefefa5e9ade6563dd629f0a758de765d28f8ab3c6e3. Jul 9 23:35:17.193131 containerd[1484]: time="2025-07-09T23:35:17.193086441Z" level=info msg="StartContainer for \"284388822154eaf3f88ceefefa5e9ade6563dd629f0a758de765d28f8ab3c6e3\" returns successfully" Jul 9 23:35:17.373153 kubelet[1791]: E0709 23:35:17.373107 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:17.825804 kubelet[1791]: I0709 23:35:17.825745 1791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5888cd88b7-6dcnq" podStartSLOduration=7.450026541 podStartE2EDuration="13.825728716s" podCreationTimestamp="2025-07-09 23:35:04 +0000 UTC" firstStartedPulling="2025-07-09 23:35:10.628400492 +0000 UTC m=+18.570592618" lastFinishedPulling="2025-07-09 23:35:17.004102667 +0000 UTC m=+24.946294793" observedRunningTime="2025-07-09 23:35:17.825507933 +0000 UTC m=+25.767700059" watchObservedRunningTime="2025-07-09 23:35:17.825728716 +0000 UTC m=+25.767920842" Jul 9 23:35:17.826247 kubelet[1791]: I0709 23:35:17.826076 1791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-gqh5f" podStartSLOduration=7.537535287 podStartE2EDuration="10.826066397s" podCreationTimestamp="2025-07-09 23:35:07 +0000 UTC" firstStartedPulling="2025-07-09 23:35:10.158362466 +0000 UTC m=+18.100554592" lastFinishedPulling="2025-07-09 23:35:13.446893576 +0000 UTC m=+21.389085702" observedRunningTime="2025-07-09 23:35:13.796872072 +0000 UTC m=+21.739064158" watchObservedRunningTime="2025-07-09 23:35:17.826066397 +0000 UTC m=+25.768258523" Jul 9 23:35:18.156591 containerd[1484]: time="2025-07-09T23:35:18.156462902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:18.157889 containerd[1484]: time="2025-07-09T23:35:18.157834570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 9 23:35:18.158796 containerd[1484]: time="2025-07-09T23:35:18.158741836Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:18.161122 containerd[1484]: time="2025-07-09T23:35:18.161070049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:18.161706 containerd[1484]: time="2025-07-09T23:35:18.161669476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.157274567s" Jul 9 23:35:18.162071 containerd[1484]: time="2025-07-09T23:35:18.161706945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 9 23:35:18.172339 containerd[1484]: time="2025-07-09T23:35:18.172283139Z" level=info msg="CreateContainer within sandbox \"767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 9 23:35:18.192188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514087536.mount: Deactivated successfully. Jul 9 23:35:18.194223 containerd[1484]: time="2025-07-09T23:35:18.193264714Z" level=info msg="CreateContainer within sandbox \"767a9adf7d9d24e7e3446446d4f1887a07946b26414a0410cbad019c038e899b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bff98b15b2e2a72a3263d3c86bfeb19f78e9a2b5e5cb321d7fef11eeaf81c827\"" Jul 9 23:35:18.194223 containerd[1484]: time="2025-07-09T23:35:18.193795528Z" level=info msg="StartContainer for \"bff98b15b2e2a72a3263d3c86bfeb19f78e9a2b5e5cb321d7fef11eeaf81c827\"" Jul 9 23:35:18.230386 systemd[1]: Started cri-containerd-bff98b15b2e2a72a3263d3c86bfeb19f78e9a2b5e5cb321d7fef11eeaf81c827.scope - libcontainer container bff98b15b2e2a72a3263d3c86bfeb19f78e9a2b5e5cb321d7fef11eeaf81c827. Jul 9 23:35:18.269152 containerd[1484]: time="2025-07-09T23:35:18.269087787Z" level=info msg="StartContainer for \"bff98b15b2e2a72a3263d3c86bfeb19f78e9a2b5e5cb321d7fef11eeaf81c827\" returns successfully" Jul 9 23:35:18.373316 kubelet[1791]: E0709 23:35:18.373265 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:18.669470 kubelet[1791]: I0709 23:35:18.669416 1791 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 9 23:35:18.669470 kubelet[1791]: I0709 23:35:18.669470 1791 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 9 23:35:18.838491 kubelet[1791]: I0709 23:35:18.838416 1791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j68cs" podStartSLOduration=18.426707108 podStartE2EDuration="25.83839134s" podCreationTimestamp="2025-07-09 23:34:53 +0000 UTC" firstStartedPulling="2025-07-09 23:35:10.758664041 +0000 UTC m=+18.700856127" lastFinishedPulling="2025-07-09 23:35:18.170348233 +0000 UTC m=+26.112540359" observedRunningTime="2025-07-09 23:35:18.838344863 +0000 UTC m=+26.780536990" watchObservedRunningTime="2025-07-09 23:35:18.83839134 +0000 UTC m=+26.780583466" Jul 9 23:35:19.373780 kubelet[1791]: E0709 23:35:19.373717 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:19.812854 systemd[1]: Created slice kubepods-besteffort-pode86468f9_f90c_4b62_a2aa_1872c35890b6.slice - libcontainer container kubepods-besteffort-pode86468f9_f90c_4b62_a2aa_1872c35890b6.slice. 
Jul 9 23:35:19.897222 kubelet[1791]: I0709 23:35:19.897101 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e86468f9-f90c-4b62-a2aa-1872c35890b6-data\") pod \"nfs-server-provisioner-0\" (UID: \"e86468f9-f90c-4b62-a2aa-1872c35890b6\") " pod="default/nfs-server-provisioner-0" Jul 9 23:35:19.897222 kubelet[1791]: I0709 23:35:19.897216 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg74w\" (UniqueName: \"kubernetes.io/projected/e86468f9-f90c-4b62-a2aa-1872c35890b6-kube-api-access-hg74w\") pod \"nfs-server-provisioner-0\" (UID: \"e86468f9-f90c-4b62-a2aa-1872c35890b6\") " pod="default/nfs-server-provisioner-0" Jul 9 23:35:20.115844 containerd[1484]: time="2025-07-09T23:35:20.115663847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e86468f9-f90c-4b62-a2aa-1872c35890b6,Namespace:default,Attempt:0,}" Jul 9 23:35:20.287407 systemd-networkd[1398]: cali60e51b789ff: Link UP Jul 9 23:35:20.287613 systemd-networkd[1398]: cali60e51b789ff: Gained carrier Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.189 [INFO][3410] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default e86468f9-f90c-4b62-a2aa-1872c35890b6 1376 0 2025-07-09 23:35:19 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.19 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.189 [INFO][3410] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.218 [INFO][3425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" HandleID="k8s-pod-network.425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Workload="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.218 [INFO][3425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" HandleID="k8s-pod-network.425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" 
Workload="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a4200), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.19", "pod":"nfs-server-provisioner-0", "timestamp":"2025-07-09 23:35:20.217990954 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.218 [INFO][3425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.218 [INFO][3425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.218 [INFO][3425] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.240 [INFO][3425] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" host="10.0.0.19" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.247 [INFO][3425] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.19" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.254 [INFO][3425] ipam/ipam.go 511: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.258 [INFO][3425] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.261 [INFO][3425] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.261 [INFO][3425] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" host="10.0.0.19" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.267 [INFO][3425] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4 Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.273 [INFO][3425] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" host="10.0.0.19" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.280 [INFO][3425] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.4/26] block=192.168.37.0/26 handle="k8s-pod-network.425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" host="10.0.0.19" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.280 [INFO][3425] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.4/26] handle="k8s-pod-network.425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" host="10.0.0.19" Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.280 [INFO][3425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 23:35:20.303545 containerd[1484]: 2025-07-09 23:35:20.280 [INFO][3425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.4/26] IPv6=[] ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" HandleID="k8s-pod-network.425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Workload="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" Jul 9 23:35:20.304221 containerd[1484]: 2025-07-09 23:35:20.283 [INFO][3410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e86468f9-f90c-4b62-a2aa-1872c35890b6", ResourceVersion:"1376", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:20.304221 containerd[1484]: 2025-07-09 23:35:20.283 [INFO][3410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.4/32] ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" Jul 9 23:35:20.304221 containerd[1484]: 2025-07-09 23:35:20.283 [INFO][3410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" Jul 9 23:35:20.304221 containerd[1484]: 2025-07-09 23:35:20.287 [INFO][3410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" Jul 9 23:35:20.304368 containerd[1484]: 2025-07-09 23:35:20.289 [INFO][3410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e86468f9-f90c-4b62-a2aa-1872c35890b6", ResourceVersion:"1376", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ee:9e:6c:fa:34:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:20.304368 containerd[1484]: 2025-07-09 23:35:20.298 [INFO][3410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" Jul 9 23:35:20.351545 containerd[1484]: time="2025-07-09T23:35:20.351051332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:35:20.351545 containerd[1484]: time="2025-07-09T23:35:20.351454288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:35:20.351545 containerd[1484]: time="2025-07-09T23:35:20.351467017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:20.351849 containerd[1484]: time="2025-07-09T23:35:20.351639095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:20.375319 kubelet[1791]: E0709 23:35:20.374533 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:20.378114 systemd[1]: Started cri-containerd-425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4.scope - libcontainer container 425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4. 
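
The WorkloadEndpoint dumps above record the provisioner's ports in hexadecimal (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296), while the earlier plugin.go entry lists them in decimal (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662). A short sketch, assuming nothing beyond the values already printed in the log, showing that the two forms agree:

package main

import "fmt"

func main() {
	// Hex port values as they appear in the WorkloadEndpoint dump above,
	// paired with the port names Calico recorded for the NFS provisioner.
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801},
		{"nlockmgr", 0x8023},
		{"mountd", 0x4e50},
		{"rquotad", 0x36b},
		{"rpcbind", 0x6f},
		{"statd", 0x296},
	}
	for _, p := range ports {
		fmt.Printf("%-9s 0x%04x = %d\n", p.name, p.hex, p.hex)
	}
}
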
Jul 9 23:35:20.403104 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:35:20.421647 containerd[1484]: time="2025-07-09T23:35:20.421598050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e86468f9-f90c-4b62-a2aa-1872c35890b6,Namespace:default,Attempt:0,} returns sandbox id \"425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4\"" Jul 9 23:35:20.423333 containerd[1484]: time="2025-07-09T23:35:20.423301616Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 9 23:35:21.374924 kubelet[1791]: E0709 23:35:21.374637 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:21.449372 systemd-networkd[1398]: cali60e51b789ff: Gained IPv6LL Jul 9 23:35:22.374944 kubelet[1791]: E0709 23:35:22.374886 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:22.789981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093024770.mount: Deactivated successfully. Jul 9 23:35:23.375781 kubelet[1791]: E0709 23:35:23.375653 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:24.376085 kubelet[1791]: E0709 23:35:24.376043 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:24.437225 containerd[1484]: time="2025-07-09T23:35:24.436936994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:24.440362 containerd[1484]: time="2025-07-09T23:35:24.440291687Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jul 9 23:35:24.444072 containerd[1484]: time="2025-07-09T23:35:24.444000928Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:24.447652 containerd[1484]: time="2025-07-09T23:35:24.447595029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:24.448557 containerd[1484]: time="2025-07-09T23:35:24.448490182Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 4.010148235s" Jul 9 23:35:24.448748 containerd[1484]: time="2025-07-09T23:35:24.448642463Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 9 23:35:24.450859 containerd[1484]: time="2025-07-09T23:35:24.450819133Z" level=info msg="CreateContainer within sandbox \"425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 9 
23:35:24.470004 containerd[1484]: time="2025-07-09T23:35:24.469861161Z" level=info msg="CreateContainer within sandbox \"425917671be4973f13d42ca3253d72e5bc4c45e4a72ba3ec5ae8f81b6c155ae4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f7b44687a3ceff0ec712d994de98e8f03c332757dc49e5eced38a3b952f59393\"" Jul 9 23:35:24.470885 containerd[1484]: time="2025-07-09T23:35:24.470852405Z" level=info msg="StartContainer for \"f7b44687a3ceff0ec712d994de98e8f03c332757dc49e5eced38a3b952f59393\"" Jul 9 23:35:24.505417 systemd[1]: Started cri-containerd-f7b44687a3ceff0ec712d994de98e8f03c332757dc49e5eced38a3b952f59393.scope - libcontainer container f7b44687a3ceff0ec712d994de98e8f03c332757dc49e5eced38a3b952f59393. Jul 9 23:35:24.563224 containerd[1484]: time="2025-07-09T23:35:24.562598313Z" level=info msg="StartContainer for \"f7b44687a3ceff0ec712d994de98e8f03c332757dc49e5eced38a3b952f59393\" returns successfully" Jul 9 23:35:25.116196 kubelet[1791]: I0709 23:35:25.115938 1791 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:35:25.334852 kubelet[1791]: I0709 23:35:25.334771 1791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.308370514 podStartE2EDuration="6.334742988s" podCreationTimestamp="2025-07-09 23:35:19 +0000 UTC" firstStartedPulling="2025-07-09 23:35:20.423032392 +0000 UTC m=+28.365224518" lastFinishedPulling="2025-07-09 23:35:24.449404866 +0000 UTC m=+32.391596992" observedRunningTime="2025-07-09 23:35:24.854587373 +0000 UTC m=+32.796779579" watchObservedRunningTime="2025-07-09 23:35:25.334742988 +0000 UTC m=+33.276935114" Jul 9 23:35:25.377193 kubelet[1791]: E0709 23:35:25.377038 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:26.377994 kubelet[1791]: E0709 23:35:26.377947 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:27.378363 kubelet[1791]: E0709 23:35:27.378308 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:28.379181 kubelet[1791]: E0709 23:35:28.379108 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:29.028926 update_engine[1460]: I20250709 23:35:29.028317 1460 update_attempter.cc:509] Updating boot flags... 
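
The pod_startup_latency_tracker entry above reports podStartE2EDuration="6.334742988s" for nfs-server-provisioner-0; that figure is simply the observed running time minus the pod creation timestamp, both of which the entry prints. A minimal sketch of that arithmetic, using only the timestamps from the entry (Go accepts the fractional seconds when parsing even though the layout omits them):

package main

import (
	"fmt"
	"time"
)

// Timestamps copied from the kubelet pod_startup_latency_tracker entry above.
const (
	layout  = "2006-01-02 15:04:05 -0700 MST"
	created = "2025-07-09 23:35:19 +0000 UTC"
	running = "2025-07-09 23:35:25.334742988 +0000 UTC"
)

func main() {
	c, err := time.Parse(layout, created)
	if err != nil {
		panic(err)
	}
	r, err := time.Parse(layout, running)
	if err != nil {
		panic(err)
	}
	// Matches the logged podStartE2EDuration of 6.334742988s.
	fmt.Println("podStartE2EDuration:", r.Sub(c))
}
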
Jul 9 23:35:29.076259 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3642) Jul 9 23:35:29.125339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3643) Jul 9 23:35:29.163196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3643) Jul 9 23:35:29.379650 kubelet[1791]: E0709 23:35:29.379515 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:30.382757 kubelet[1791]: E0709 23:35:30.380334 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:31.381380 kubelet[1791]: E0709 23:35:31.381329 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:32.381740 kubelet[1791]: E0709 23:35:32.381680 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:33.355855 kubelet[1791]: E0709 23:35:33.355802 1791 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:33.382023 kubelet[1791]: E0709 23:35:33.381942 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:34.382834 kubelet[1791]: E0709 23:35:34.382759 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:34.949211 systemd[1]: Created slice kubepods-besteffort-pod4c60e0ee_9e71_4cef_a8a6_3db4117a6e94.slice - libcontainer container kubepods-besteffort-pod4c60e0ee_9e71_4cef_a8a6_3db4117a6e94.slice. Jul 9 23:35:35.014510 kubelet[1791]: I0709 23:35:35.014439 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-761eff5c-1cf9-45ad-9063-7c666e149596\" (UniqueName: \"kubernetes.io/nfs/4c60e0ee-9e71-4cef-a8a6-3db4117a6e94-pvc-761eff5c-1cf9-45ad-9063-7c666e149596\") pod \"test-pod-1\" (UID: \"4c60e0ee-9e71-4cef-a8a6-3db4117a6e94\") " pod="default/test-pod-1" Jul 9 23:35:35.014510 kubelet[1791]: I0709 23:35:35.014523 1791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdcw\" (UniqueName: \"kubernetes.io/projected/4c60e0ee-9e71-4cef-a8a6-3db4117a6e94-kube-api-access-5gdcw\") pod \"test-pod-1\" (UID: \"4c60e0ee-9e71-4cef-a8a6-3db4117a6e94\") " pod="default/test-pod-1" Jul 9 23:35:35.155396 kernel: FS-Cache: Loaded Jul 9 23:35:35.180312 kernel: RPC: Registered named UNIX socket transport module. Jul 9 23:35:35.180438 kernel: RPC: Registered udp transport module. Jul 9 23:35:35.180459 kernel: RPC: Registered tcp transport module. Jul 9 23:35:35.181381 kernel: RPC: Registered tcp-with-tls transport module. Jul 9 23:35:35.181429 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
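
Alongside the update_engine and BTRFS notices, the node prepares to mount the NFS-backed PVC for test-pod-1: kubelet attaches pvc-761eff5c-1cf9-45ad-9063-7c666e149596 and the kernel loads FS-Cache plus the SUNRPC transports it needs. The export comes from the provisioner created earlier and is addressed by the usual in-cluster Service DNS name, which the nfsidmap entries further down print verbatim. A tiny sketch of that naming convention; the cluster domain "cluster.local" is the Kubernetes default and an assumption here, not something read from this log:

package main

import "fmt"

// buildServiceFQDN assembles the in-cluster DNS name for a Service using the
// common <service>.<namespace>.svc.<cluster-domain> convention. The cluster
// domain "cluster.local" passed below is assumed, not taken from the log.
func buildServiceFQDN(service, namespace, clusterDomain string) string {
	return fmt.Sprintf("%s.%s.svc.%s", service, namespace, clusterDomain)
}

func main() {
	// Matches the server name that shows up in the nfsidmap entries below.
	fmt.Println(buildServiceFQDN("nfs-server-provisioner", "default", "cluster.local"))
}
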
Jul 9 23:35:35.347243 kernel: NFS: Registering the id_resolver key type Jul 9 23:35:35.347465 kernel: Key type id_resolver registered Jul 9 23:35:35.347487 kernel: Key type id_legacy registered Jul 9 23:35:35.373123 nfsidmap[3677]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 9 23:35:35.375955 nfsidmap[3678]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 9 23:35:35.383207 kubelet[1791]: E0709 23:35:35.383127 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:35.553550 containerd[1484]: time="2025-07-09T23:35:35.553101862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4c60e0ee-9e71-4cef-a8a6-3db4117a6e94,Namespace:default,Attempt:0,}" Jul 9 23:35:35.700610 systemd-networkd[1398]: cali5ec59c6bf6e: Link UP Jul 9 23:35:35.703999 systemd-networkd[1398]: cali5ec59c6bf6e: Gained carrier Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.617 [INFO][3681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-test--pod--1-eth0 default 4c60e0ee-9e71-4cef-a8a6-3db4117a6e94 1459 0 2025-07-09 23:35:20 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.19 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.617 [INFO][3681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.650 [INFO][3694] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" HandleID="k8s-pod-network.593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Workload="10.0.0.19-k8s-test--pod--1-eth0" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.650 [INFO][3694] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" HandleID="k8s-pod-network.593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Workload="10.0.0.19-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b4630), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.19", "pod":"test-pod-1", "timestamp":"2025-07-09 23:35:35.650267448 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.650 [INFO][3694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.650 [INFO][3694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.650 [INFO][3694] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.660 [INFO][3694] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" host="10.0.0.19" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.666 [INFO][3694] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.19" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.672 [INFO][3694] ipam/ipam.go 511: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.675 [INFO][3694] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.678 [INFO][3694] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.678 [INFO][3694] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" host="10.0.0.19" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.680 [INFO][3694] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324 Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.686 [INFO][3694] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" host="10.0.0.19" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.693 [INFO][3694] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.5/26] block=192.168.37.0/26 handle="k8s-pod-network.593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" host="10.0.0.19" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.693 [INFO][3694] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.5/26] handle="k8s-pod-network.593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" host="10.0.0.19" Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.693 [INFO][3694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 23:35:35.716763 containerd[1484]: 2025-07-09 23:35:35.693 [INFO][3694] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.5/26] IPv6=[] ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" HandleID="k8s-pod-network.593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Workload="10.0.0.19-k8s-test--pod--1-eth0" Jul 9 23:35:35.717499 containerd[1484]: 2025-07-09 23:35:35.696 [INFO][3681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4c60e0ee-9e71-4cef-a8a6-3db4117a6e94", ResourceVersion:"1459", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:35.717499 containerd[1484]: 2025-07-09 23:35:35.696 [INFO][3681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.5/32] ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" Jul 9 23:35:35.717499 containerd[1484]: 2025-07-09 23:35:35.696 [INFO][3681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" Jul 9 23:35:35.717499 containerd[1484]: 2025-07-09 23:35:35.701 [INFO][3681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" Jul 9 23:35:35.717499 containerd[1484]: 2025-07-09 23:35:35.702 [INFO][3681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4c60e0ee-9e71-4cef-a8a6-3db4117a6e94", ResourceVersion:"1459", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 35, 20, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"86:22:b1:25:e1:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:35:35.717499 containerd[1484]: 2025-07-09 23:35:35.713 [INFO][3681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" Jul 9 23:35:35.739777 containerd[1484]: time="2025-07-09T23:35:35.739538101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:35:35.739777 containerd[1484]: time="2025-07-09T23:35:35.739594876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:35:35.739777 containerd[1484]: time="2025-07-09T23:35:35.739606719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:35.740275 containerd[1484]: time="2025-07-09T23:35:35.740046833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:35:35.761416 systemd[1]: Started cri-containerd-593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324.scope - libcontainer container 593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324. 
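
The nfsidmap messages above ("name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'") come from NFSv4 ID mapping: the domain suffix of the principal supplied by the server does not match the client's configured NFSv4 domain, so the name cannot be resolved to a local uid/gid and idmapping typically falls back to the nobody user. A deliberately simplified, purely illustrative sketch of that comparison; the real check lives in libnfsidmap, not in this code:

package main

import (
	"fmt"
	"strings"
)

// checkDomain mimics, in a very reduced form, the comparison behind the
// nfsidmap messages above: an NFSv4 principal "user@domain" only maps to a
// local account when its domain part matches the locally configured domain.
func checkDomain(principal, localDomain string) (user string, ok bool) {
	user, domain, found := strings.Cut(principal, "@")
	if !found {
		return principal, false
	}
	return user, strings.EqualFold(domain, localDomain)
}

func main() {
	principal := "root@nfs-server-provisioner.default.svc.cluster.local" // from the log
	localDomain := "localdomain"                                         // the default the client fell back to

	if _, ok := checkDomain(principal, localDomain); !ok {
		fmt.Printf("name %q does not map into domain %q\n", principal, localDomain)
	}
}
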
Jul 9 23:35:35.772961 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:35:35.794807 containerd[1484]: time="2025-07-09T23:35:35.794759940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4c60e0ee-9e71-4cef-a8a6-3db4117a6e94,Namespace:default,Attempt:0,} returns sandbox id \"593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324\"" Jul 9 23:35:35.796558 containerd[1484]: time="2025-07-09T23:35:35.796361837Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 9 23:35:36.049762 containerd[1484]: time="2025-07-09T23:35:36.049693803Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:35:36.052244 containerd[1484]: time="2025-07-09T23:35:36.052181769Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jul 9 23:35:36.055453 containerd[1484]: time="2025-07-09T23:35:36.055405915Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd\", size \"69964463\" in 259.006509ms" Jul 9 23:35:36.055453 containerd[1484]: time="2025-07-09T23:35:36.055453007Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 9 23:35:36.057503 containerd[1484]: time="2025-07-09T23:35:36.057451334Z" level=info msg="CreateContainer within sandbox \"593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 9 23:35:36.080179 containerd[1484]: time="2025-07-09T23:35:36.080122380Z" level=info msg="CreateContainer within sandbox \"593100b899db95faf940dc9ec7e74c290feeac1937a87f646a9723d852b00324\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"43c20289b3cb1be4bbcd3bb66726ba51ea8949e625fe7410570aa8832f61d807\"" Jul 9 23:35:36.080880 containerd[1484]: time="2025-07-09T23:35:36.080849678Z" level=info msg="StartContainer for \"43c20289b3cb1be4bbcd3bb66726ba51ea8949e625fe7410570aa8832f61d807\"" Jul 9 23:35:36.118448 systemd[1]: Started cri-containerd-43c20289b3cb1be4bbcd3bb66726ba51ea8949e625fe7410570aa8832f61d807.scope - libcontainer container 43c20289b3cb1be4bbcd3bb66726ba51ea8949e625fe7410570aa8832f61d807. 
Jul 9 23:35:36.151653 containerd[1484]: time="2025-07-09T23:35:36.151512584Z" level=info msg="StartContainer for \"43c20289b3cb1be4bbcd3bb66726ba51ea8949e625fe7410570aa8832f61d807\" returns successfully" Jul 9 23:35:36.383594 kubelet[1791]: E0709 23:35:36.383436 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:36.879259 kubelet[1791]: I0709 23:35:36.879185 1791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.619009379 podStartE2EDuration="16.879142688s" podCreationTimestamp="2025-07-09 23:35:20 +0000 UTC" firstStartedPulling="2025-07-09 23:35:35.796071681 +0000 UTC m=+43.738263807" lastFinishedPulling="2025-07-09 23:35:36.05620499 +0000 UTC m=+43.998397116" observedRunningTime="2025-07-09 23:35:36.878076788 +0000 UTC m=+44.820268914" watchObservedRunningTime="2025-07-09 23:35:36.879142688 +0000 UTC m=+44.821334814" Jul 9 23:35:37.065806 systemd-networkd[1398]: cali5ec59c6bf6e: Gained IPv6LL Jul 9 23:35:37.384535 kubelet[1791]: E0709 23:35:37.384488 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 23:35:38.385380 kubelet[1791]: E0709 23:35:38.385322 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
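
Most kubelet entries in this section, including the repeated "Unable to read config path" errors, use the klog header layout: severity letter, MMDD date, wall-clock time, PID, then source file and line. A small self-contained sketch that pulls those fields out of one such line; the regular expression is an assumption shaped to match this log, not klog's own parser:

package main

import (
	"fmt"
	"regexp"
)

// headerRe matches the klog-style prefix used by the kubelet entries above:
// severity letter, MMDD date, time, PID, then "file.go:line]".
var headerRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\]`)

func main() {
	line := `E0709 23:35:38.385322 1791 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"`

	m := headerRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no klog header found")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n", m[1], m[2], m[3], m[4], m[5])
}
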