Mar 20 21:13:39.932124 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 20 21:13:39.932146 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Mar 20 19:37:53 -00 2025 Mar 20 21:13:39.932155 kernel: KASLR enabled Mar 20 21:13:39.932160 kernel: efi: EFI v2.7 by EDK II Mar 20 21:13:39.932166 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40498 Mar 20 21:13:39.932171 kernel: random: crng init done Mar 20 21:13:39.932178 kernel: secureboot: Secure boot disabled Mar 20 21:13:39.932183 kernel: ACPI: Early table checksum verification disabled Mar 20 21:13:39.932189 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Mar 20 21:13:39.932196 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Mar 20 21:13:39.932202 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:13:39.932207 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:13:39.932213 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:13:39.932218 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:13:39.932225 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:13:39.932232 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:13:39.932239 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:13:39.932244 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:13:39.932250 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:13:39.932256 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Mar 20 21:13:39.932262 kernel: NUMA: Failed to initialise from firmware Mar 20 21:13:39.932268 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Mar 20 21:13:39.932274 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Mar 20 21:13:39.932280 kernel: Zone ranges: Mar 20 21:13:39.932286 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Mar 20 21:13:39.932293 kernel: DMA32 empty Mar 20 21:13:39.932298 kernel: Normal empty Mar 20 21:13:39.932304 kernel: Movable zone start for each node Mar 20 21:13:39.932310 kernel: Early memory node ranges Mar 20 21:13:39.932316 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Mar 20 21:13:39.932322 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Mar 20 21:13:39.932328 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Mar 20 21:13:39.932333 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Mar 20 21:13:39.932349 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Mar 20 21:13:39.932355 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Mar 20 21:13:39.932361 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Mar 20 21:13:39.932367 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Mar 20 21:13:39.932375 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Mar 20 21:13:39.932380 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Mar 20 21:13:39.932387 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Mar 20 21:13:39.932395 kernel: psci: 
probing for conduit method from ACPI. Mar 20 21:13:39.932402 kernel: psci: PSCIv1.1 detected in firmware. Mar 20 21:13:39.932408 kernel: psci: Using standard PSCI v0.2 function IDs Mar 20 21:13:39.932415 kernel: psci: Trusted OS migration not required Mar 20 21:13:39.932422 kernel: psci: SMC Calling Convention v1.1 Mar 20 21:13:39.932428 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 20 21:13:39.932435 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 20 21:13:39.932441 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 20 21:13:39.932448 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Mar 20 21:13:39.932454 kernel: Detected PIPT I-cache on CPU0 Mar 20 21:13:39.932460 kernel: CPU features: detected: GIC system register CPU interface Mar 20 21:13:39.932466 kernel: CPU features: detected: Hardware dirty bit management Mar 20 21:13:39.932473 kernel: CPU features: detected: Spectre-v4 Mar 20 21:13:39.932480 kernel: CPU features: detected: Spectre-BHB Mar 20 21:13:39.932486 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 20 21:13:39.932492 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 20 21:13:39.932499 kernel: CPU features: detected: ARM erratum 1418040 Mar 20 21:13:39.932505 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 20 21:13:39.932511 kernel: alternatives: applying boot alternatives Mar 20 21:13:39.932518 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0beb08f475de014f6ab4e06127ed84e918521fd470084f537ae9409b262d0ed3 Mar 20 21:13:39.932525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 20 21:13:39.932531 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 20 21:13:39.932538 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 20 21:13:39.932544 kernel: Fallback order for Node 0: 0 Mar 20 21:13:39.932552 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Mar 20 21:13:39.932558 kernel: Policy zone: DMA Mar 20 21:13:39.932564 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 20 21:13:39.932570 kernel: software IO TLB: area num 4. Mar 20 21:13:39.932577 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Mar 20 21:13:39.932583 kernel: Memory: 2387408K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184880K reserved, 0K cma-reserved) Mar 20 21:13:39.932590 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 20 21:13:39.932596 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 20 21:13:39.932604 kernel: rcu: RCU event tracing is enabled. Mar 20 21:13:39.932610 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 20 21:13:39.932617 kernel: Trampoline variant of Tasks RCU enabled. Mar 20 21:13:39.932623 kernel: Tracing variant of Tasks RCU enabled. Mar 20 21:13:39.932631 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 20 21:13:39.932637 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 20 21:13:39.932643 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 20 21:13:39.932650 kernel: GICv3: 256 SPIs implemented Mar 20 21:13:39.932656 kernel: GICv3: 0 Extended SPIs implemented Mar 20 21:13:39.932662 kernel: Root IRQ handler: gic_handle_irq Mar 20 21:13:39.932669 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 20 21:13:39.932675 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 20 21:13:39.932681 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 20 21:13:39.932688 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Mar 20 21:13:39.932694 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Mar 20 21:13:39.932702 kernel: GICv3: using LPI property table @0x00000000400f0000 Mar 20 21:13:39.932708 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Mar 20 21:13:39.932715 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 20 21:13:39.932721 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 20 21:13:39.932727 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 20 21:13:39.932734 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 20 21:13:39.932740 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 20 21:13:39.932747 kernel: arm-pv: using stolen time PV Mar 20 21:13:39.932753 kernel: Console: colour dummy device 80x25 Mar 20 21:13:39.932760 kernel: ACPI: Core revision 20230628 Mar 20 21:13:39.932766 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 20 21:13:39.932774 kernel: pid_max: default: 32768 minimum: 301 Mar 20 21:13:39.932781 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 20 21:13:39.932787 kernel: landlock: Up and running. Mar 20 21:13:39.932794 kernel: SELinux: Initializing. Mar 20 21:13:39.932800 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 20 21:13:39.932806 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 20 21:13:39.932813 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:13:39.932820 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:13:39.932826 kernel: rcu: Hierarchical SRCU implementation. Mar 20 21:13:39.932834 kernel: rcu: Max phase no-delay instances is 400. Mar 20 21:13:39.932840 kernel: Platform MSI: ITS@0x8080000 domain created Mar 20 21:13:39.932847 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 20 21:13:39.932854 kernel: Remapping and enabling EFI services. Mar 20 21:13:39.932860 kernel: smp: Bringing up secondary CPUs ... 
Mar 20 21:13:39.932866 kernel: Detected PIPT I-cache on CPU1 Mar 20 21:13:39.932897 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 20 21:13:39.932904 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Mar 20 21:13:39.932910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 20 21:13:39.932919 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 20 21:13:39.932926 kernel: Detected PIPT I-cache on CPU2 Mar 20 21:13:39.932937 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Mar 20 21:13:39.932947 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Mar 20 21:13:39.932954 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 20 21:13:39.932960 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Mar 20 21:13:39.932967 kernel: Detected PIPT I-cache on CPU3 Mar 20 21:13:39.932974 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Mar 20 21:13:39.932981 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Mar 20 21:13:39.932990 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 20 21:13:39.932997 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Mar 20 21:13:39.933004 kernel: smp: Brought up 1 node, 4 CPUs Mar 20 21:13:39.933010 kernel: SMP: Total of 4 processors activated. Mar 20 21:13:39.933017 kernel: CPU features: detected: 32-bit EL0 Support Mar 20 21:13:39.933024 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 20 21:13:39.933031 kernel: CPU features: detected: Common not Private translations Mar 20 21:13:39.933038 kernel: CPU features: detected: CRC32 instructions Mar 20 21:13:39.933046 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 20 21:13:39.933053 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 20 21:13:39.933061 kernel: CPU features: detected: LSE atomic instructions Mar 20 21:13:39.933067 kernel: CPU features: detected: Privileged Access Never Mar 20 21:13:39.933074 kernel: CPU features: detected: RAS Extension Support Mar 20 21:13:39.933081 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 20 21:13:39.933088 kernel: CPU: All CPU(s) started at EL1 Mar 20 21:13:39.933095 kernel: alternatives: applying system-wide alternatives Mar 20 21:13:39.933102 kernel: devtmpfs: initialized Mar 20 21:13:39.933109 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 20 21:13:39.933118 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 20 21:13:39.933125 kernel: pinctrl core: initialized pinctrl subsystem Mar 20 21:13:39.933131 kernel: SMBIOS 3.0.0 present. 
Mar 20 21:13:39.933138 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Mar 20 21:13:39.933145 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 20 21:13:39.933152 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 20 21:13:39.933159 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 20 21:13:39.933167 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 20 21:13:39.933175 kernel: audit: initializing netlink subsys (disabled) Mar 20 21:13:39.933182 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Mar 20 21:13:39.933189 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 20 21:13:39.933195 kernel: cpuidle: using governor menu Mar 20 21:13:39.933202 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 20 21:13:39.933209 kernel: ASID allocator initialised with 32768 entries Mar 20 21:13:39.933216 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 20 21:13:39.933223 kernel: Serial: AMBA PL011 UART driver Mar 20 21:13:39.933229 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 20 21:13:39.933238 kernel: Modules: 0 pages in range for non-PLT usage Mar 20 21:13:39.933245 kernel: Modules: 509248 pages in range for PLT usage Mar 20 21:13:39.933251 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 20 21:13:39.933258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 20 21:13:39.933265 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 20 21:13:39.933272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 20 21:13:39.933279 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 20 21:13:39.933286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 20 21:13:39.933293 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 20 21:13:39.933301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 20 21:13:39.933309 kernel: ACPI: Added _OSI(Module Device) Mar 20 21:13:39.933317 kernel: ACPI: Added _OSI(Processor Device) Mar 20 21:13:39.933325 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 20 21:13:39.933332 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 20 21:13:39.933344 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 20 21:13:39.933351 kernel: ACPI: Interpreter enabled Mar 20 21:13:39.933358 kernel: ACPI: Using GIC for interrupt routing Mar 20 21:13:39.933365 kernel: ACPI: MCFG table detected, 1 entries Mar 20 21:13:39.933372 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 20 21:13:39.933381 kernel: printk: console [ttyAMA0] enabled Mar 20 21:13:39.933388 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 20 21:13:39.933523 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 20 21:13:39.933598 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 20 21:13:39.933689 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 20 21:13:39.933753 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 20 21:13:39.933816 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 20 21:13:39.933827 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 20 21:13:39.933834 
kernel: PCI host bridge to bus 0000:00 Mar 20 21:13:39.933969 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 20 21:13:39.934031 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 20 21:13:39.934087 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 20 21:13:39.934144 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 20 21:13:39.934227 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 20 21:13:39.934305 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Mar 20 21:13:39.934388 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Mar 20 21:13:39.934454 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Mar 20 21:13:39.934520 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 20 21:13:39.934583 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 20 21:13:39.934647 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Mar 20 21:13:39.934716 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Mar 20 21:13:39.934774 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 20 21:13:39.934831 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 20 21:13:39.934926 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 20 21:13:39.934937 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 20 21:13:39.934944 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 20 21:13:39.934951 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 20 21:13:39.934957 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 20 21:13:39.934967 kernel: iommu: Default domain type: Translated Mar 20 21:13:39.934974 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 20 21:13:39.934981 kernel: efivars: Registered efivars operations Mar 20 21:13:39.934987 kernel: vgaarb: loaded Mar 20 21:13:39.934994 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 20 21:13:39.935001 kernel: VFS: Disk quotas dquot_6.6.0 Mar 20 21:13:39.935008 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 20 21:13:39.935015 kernel: pnp: PnP ACPI init Mar 20 21:13:39.935085 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 20 21:13:39.935097 kernel: pnp: PnP ACPI: found 1 devices Mar 20 21:13:39.935104 kernel: NET: Registered PF_INET protocol family Mar 20 21:13:39.935111 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 20 21:13:39.935118 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 20 21:13:39.935125 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 20 21:13:39.935132 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 20 21:13:39.935139 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 20 21:13:39.935146 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 20 21:13:39.935154 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 20 21:13:39.935161 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 20 21:13:39.935168 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 20 21:13:39.935175 kernel: PCI: CLS 0 bytes, default 64 Mar 20 21:13:39.935182 kernel: kvm [1]: HYP mode not available 
Mar 20 21:13:39.935188 kernel: Initialise system trusted keyrings Mar 20 21:13:39.935195 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 20 21:13:39.935202 kernel: Key type asymmetric registered Mar 20 21:13:39.935209 kernel: Asymmetric key parser 'x509' registered Mar 20 21:13:39.935215 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 20 21:13:39.935224 kernel: io scheduler mq-deadline registered Mar 20 21:13:39.935230 kernel: io scheduler kyber registered Mar 20 21:13:39.935237 kernel: io scheduler bfq registered Mar 20 21:13:39.935244 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 20 21:13:39.935251 kernel: ACPI: button: Power Button [PWRB] Mar 20 21:13:39.935258 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 20 21:13:39.935324 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Mar 20 21:13:39.935333 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 20 21:13:39.935346 kernel: thunder_xcv, ver 1.0 Mar 20 21:13:39.935355 kernel: thunder_bgx, ver 1.0 Mar 20 21:13:39.935362 kernel: nicpf, ver 1.0 Mar 20 21:13:39.935369 kernel: nicvf, ver 1.0 Mar 20 21:13:39.935447 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 20 21:13:39.935510 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-20T21:13:39 UTC (1742505219) Mar 20 21:13:39.935519 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 20 21:13:39.935526 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 20 21:13:39.935533 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 20 21:13:39.935542 kernel: watchdog: Hard watchdog permanently disabled Mar 20 21:13:39.935549 kernel: NET: Registered PF_INET6 protocol family Mar 20 21:13:39.935556 kernel: Segment Routing with IPv6 Mar 20 21:13:39.935563 kernel: In-situ OAM (IOAM) with IPv6 Mar 20 21:13:39.935569 kernel: NET: Registered PF_PACKET protocol family Mar 20 21:13:39.935576 kernel: Key type dns_resolver registered Mar 20 21:13:39.935583 kernel: registered taskstats version 1 Mar 20 21:13:39.935590 kernel: Loading compiled-in X.509 certificates Mar 20 21:13:39.935597 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 3a6f52a6c751e8bbe3389ae978b265effe8f77af' Mar 20 21:13:39.935605 kernel: Key type .fscrypt registered Mar 20 21:13:39.935611 kernel: Key type fscrypt-provisioning registered Mar 20 21:13:39.935618 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 20 21:13:39.935625 kernel: ima: Allocated hash algorithm: sha1 Mar 20 21:13:39.935632 kernel: ima: No architecture policies found Mar 20 21:13:39.935639 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 20 21:13:39.935646 kernel: clk: Disabling unused clocks Mar 20 21:13:39.935653 kernel: Freeing unused kernel memory: 38464K Mar 20 21:13:39.935661 kernel: Run /init as init process Mar 20 21:13:39.935668 kernel: with arguments: Mar 20 21:13:39.935674 kernel: /init Mar 20 21:13:39.935681 kernel: with environment: Mar 20 21:13:39.935687 kernel: HOME=/ Mar 20 21:13:39.935694 kernel: TERM=linux Mar 20 21:13:39.935701 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 20 21:13:39.935708 systemd[1]: Successfully made /usr/ read-only. 
Mar 20 21:13:39.935718 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:13:39.935727 systemd[1]: Detected virtualization kvm. Mar 20 21:13:39.935735 systemd[1]: Detected architecture arm64. Mar 20 21:13:39.935742 systemd[1]: Running in initrd. Mar 20 21:13:39.935749 systemd[1]: No hostname configured, using default hostname. Mar 20 21:13:39.935757 systemd[1]: Hostname set to . Mar 20 21:13:39.935764 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:13:39.935771 systemd[1]: Queued start job for default target initrd.target. Mar 20 21:13:39.935780 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:13:39.935787 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:13:39.935795 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 20 21:13:39.935803 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 21:13:39.935811 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 20 21:13:39.935819 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 20 21:13:39.935828 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 20 21:13:39.935837 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 20 21:13:39.935845 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:13:39.935852 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:13:39.935860 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:13:39.935868 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:13:39.935891 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:13:39.935902 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:13:39.935911 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:13:39.935919 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:13:39.935929 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 20 21:13:39.935936 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 20 21:13:39.935944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:13:39.935952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:13:39.935959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:13:39.935970 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:13:39.935978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 20 21:13:39.935986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 21:13:39.935995 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 20 21:13:39.936002 systemd[1]: Starting systemd-fsck-usr.service... 
Mar 20 21:13:39.936010 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 21:13:39.936017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 21:13:39.936025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:13:39.936032 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:13:39.936040 systemd[1]: Finished systemd-fsck-usr.service.
Mar 20 21:13:39.936049 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 20 21:13:39.936057 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 20 21:13:39.936064 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 21:13:39.936072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 21:13:39.936079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:13:39.936105 systemd-journald[236]: Collecting audit messages is disabled.
Mar 20 21:13:39.936126 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:13:39.936135 systemd-journald[236]: Journal started
Mar 20 21:13:39.936155 systemd-journald[236]: Runtime Journal (/run/log/journal/1669779a7d1f46a1aa932a2054672a62) is 5.9M, max 47.3M, 41.4M free.
Mar 20 21:13:39.921334 systemd-modules-load[237]: Inserted module 'overlay'
Mar 20 21:13:39.938813 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 21:13:39.941895 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 20 21:13:39.943513 systemd-modules-load[237]: Inserted module 'br_netfilter'
Mar 20 21:13:39.944458 kernel: Bridge firewalling registered
Mar 20 21:13:39.947259 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:13:39.948486 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 21:13:39.953165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:13:39.955271 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 21:13:39.963974 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:13:39.965188 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 21:13:39.967622 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:13:39.971067 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 20 21:13:39.973440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 21:13:39.998472 dracut-cmdline[275]: dracut-dracut-053
Mar 20 21:13:40.000869 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0beb08f475de014f6ab4e06127ed84e918521fd470084f537ae9409b262d0ed3
Mar 20 21:13:40.023037 systemd-resolved[276]: Positive Trust Anchors:
Mar 20 21:13:40.023052 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 21:13:40.023083 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 21:13:40.029451 systemd-resolved[276]: Defaulting to hostname 'linux'.
Mar 20 21:13:40.030423 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 21:13:40.032663 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 21:13:40.074903 kernel: SCSI subsystem initialized
Mar 20 21:13:40.079893 kernel: Loading iSCSI transport class v2.0-870.
Mar 20 21:13:40.086901 kernel: iscsi: registered transport (tcp)
Mar 20 21:13:40.099912 kernel: iscsi: registered transport (qla4xxx)
Mar 20 21:13:40.099956 kernel: QLogic iSCSI HBA Driver
Mar 20 21:13:40.140768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 20 21:13:40.143008 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 20 21:13:40.181122 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 20 21:13:40.181161 kernel: device-mapper: uevent: version 1.0.3
Mar 20 21:13:40.182272 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 20 21:13:40.228913 kernel: raid6: neonx8 gen() 15648 MB/s
Mar 20 21:13:40.245915 kernel: raid6: neonx4 gen() 15490 MB/s
Mar 20 21:13:40.262907 kernel: raid6: neonx2 gen() 13231 MB/s
Mar 20 21:13:40.279899 kernel: raid6: neonx1 gen() 10362 MB/s
Mar 20 21:13:40.296905 kernel: raid6: int64x8 gen() 6735 MB/s
Mar 20 21:13:40.313897 kernel: raid6: int64x4 gen() 7275 MB/s
Mar 20 21:13:40.330897 kernel: raid6: int64x2 gen() 6071 MB/s
Mar 20 21:13:40.348016 kernel: raid6: int64x1 gen() 4990 MB/s
Mar 20 21:13:40.348028 kernel: raid6: using algorithm neonx8 gen() 15648 MB/s
Mar 20 21:13:40.365991 kernel: raid6: .... xor() 11884 MB/s, rmw enabled
Mar 20 21:13:40.366004 kernel: raid6: using neon recovery algorithm
Mar 20 21:13:40.370905 kernel: xor: measuring software checksum speed
Mar 20 21:13:40.372138 kernel: 8regs : 18452 MB/sec
Mar 20 21:13:40.372151 kernel: 32regs : 21315 MB/sec
Mar 20 21:13:40.375094 kernel: arm64_neon : 1754 MB/sec
Mar 20 21:13:40.375116 kernel: xor: using function: 32regs (21315 MB/sec)
Mar 20 21:13:40.425894 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 20 21:13:40.437934 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 21:13:40.440650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 21:13:40.464546 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Mar 20 21:13:40.468259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 21:13:40.472996 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 20 21:13:40.496714 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Mar 20 21:13:40.521009 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 20 21:13:40.522901 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:13:40.569674 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:13:40.572585 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 20 21:13:40.595071 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 20 21:13:40.596307 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:13:40.598272 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:13:40.600576 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:13:40.602808 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 20 21:13:40.631425 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 20 21:13:40.644122 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 20 21:13:40.644226 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 20 21:13:40.644237 kernel: GPT:9289727 != 19775487 Mar 20 21:13:40.644246 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 20 21:13:40.644261 kernel: GPT:9289727 != 19775487 Mar 20 21:13:40.644271 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 20 21:13:40.644279 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:13:40.627666 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:13:40.635262 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 21:13:40.635322 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:13:40.637942 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:13:40.639239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:13:40.639314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:13:40.643977 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:13:40.645615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:13:40.663899 kernel: BTRFS: device fsid 892d57a1-84f1-442c-90df-b8383db1b8c3 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (509) Mar 20 21:13:40.665937 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (515) Mar 20 21:13:40.671676 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:13:40.680304 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 20 21:13:40.693570 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 20 21:13:40.701894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:13:40.708689 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 20 21:13:40.709931 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 20 21:13:40.713620 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 20 21:13:40.715672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:13:40.734572 disk-uuid[550]: Primary Header is updated. 
Mar 20 21:13:40.734572 disk-uuid[550]: Secondary Entries is updated. Mar 20 21:13:40.734572 disk-uuid[550]: Secondary Header is updated. Mar 20 21:13:40.741927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:13:40.745820 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:13:41.755909 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:13:41.756368 disk-uuid[553]: The operation has completed successfully. Mar 20 21:13:41.787561 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 20 21:13:41.787658 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 20 21:13:41.817723 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 20 21:13:41.831735 sh[570]: Success Mar 20 21:13:41.847931 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 20 21:13:41.879995 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 20 21:13:41.883033 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 20 21:13:41.898184 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 20 21:13:41.906807 kernel: BTRFS info (device dm-0): first mount of filesystem 892d57a1-84f1-442c-90df-b8383db1b8c3 Mar 20 21:13:41.906843 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:13:41.906853 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 20 21:13:41.909390 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 20 21:13:41.909410 kernel: BTRFS info (device dm-0): using free space tree Mar 20 21:13:41.918848 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 20 21:13:41.920251 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 20 21:13:41.920998 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 20 21:13:41.923812 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 20 21:13:41.946223 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:13:41.946276 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:13:41.946286 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:13:41.948908 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:13:41.953908 kernel: BTRFS info (device vda6): last unmount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:13:41.958349 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 20 21:13:41.960307 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 20 21:13:42.024726 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:13:42.029042 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:13:42.071776 systemd-networkd[755]: lo: Link UP Mar 20 21:13:42.071787 systemd-networkd[755]: lo: Gained carrier Mar 20 21:13:42.072683 systemd-networkd[755]: Enumeration completed Mar 20 21:13:42.072792 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:13:42.074906 systemd[1]: Reached target network.target - Network. 
Mar 20 21:13:42.075128 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:13:42.075132 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:13:42.077495 systemd-networkd[755]: eth0: Link UP Mar 20 21:13:42.083508 ignition[665]: Ignition 2.20.0 Mar 20 21:13:42.077498 systemd-networkd[755]: eth0: Gained carrier Mar 20 21:13:42.083515 ignition[665]: Stage: fetch-offline Mar 20 21:13:42.077505 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:13:42.083545 ignition[665]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:13:42.083553 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:13:42.083695 ignition[665]: parsed url from cmdline: "" Mar 20 21:13:42.083698 ignition[665]: no config URL provided Mar 20 21:13:42.083702 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Mar 20 21:13:42.083709 ignition[665]: no config at "/usr/lib/ignition/user.ign" Mar 20 21:13:42.083739 ignition[665]: op(1): [started] loading QEMU firmware config module Mar 20 21:13:42.083743 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 20 21:13:42.096919 systemd-networkd[755]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:13:42.090276 ignition[665]: op(1): [finished] loading QEMU firmware config module Mar 20 21:13:42.134669 ignition[665]: parsing config with SHA512: 9976a94d1eca7c0e194aa2ec2e05d34814ea8de1be19ff9270f732792e1e92ffaf4a1f279dd890719b67233e76eeaf72339fdfdab82720750fd70e23553f67a1 Mar 20 21:13:42.139184 unknown[665]: fetched base config from "system" Mar 20 21:13:42.139193 unknown[665]: fetched user config from "qemu" Mar 20 21:13:42.139596 ignition[665]: fetch-offline: fetch-offline passed Mar 20 21:13:42.141901 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:13:42.139682 ignition[665]: Ignition finished successfully Mar 20 21:13:42.143274 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 20 21:13:42.144993 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 20 21:13:42.169257 ignition[770]: Ignition 2.20.0 Mar 20 21:13:42.169267 ignition[770]: Stage: kargs Mar 20 21:13:42.169430 ignition[770]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:13:42.169441 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:13:42.170294 ignition[770]: kargs: kargs passed Mar 20 21:13:42.173951 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 20 21:13:42.170335 ignition[770]: Ignition finished successfully Mar 20 21:13:42.175971 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 20 21:13:42.197152 ignition[779]: Ignition 2.20.0 Mar 20 21:13:42.197161 ignition[779]: Stage: disks Mar 20 21:13:42.197314 ignition[779]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:13:42.199945 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 20 21:13:42.197323 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:13:42.201064 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Mar 20 21:13:42.198149 ignition[779]: disks: disks passed Mar 20 21:13:42.202778 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 20 21:13:42.198191 ignition[779]: Ignition finished successfully Mar 20 21:13:42.204786 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:13:42.206641 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:13:42.208058 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:13:42.210628 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 20 21:13:42.230168 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 20 21:13:42.234203 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 20 21:13:42.236216 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 20 21:13:42.297893 kernel: EXT4-fs (vda9): mounted filesystem 78c526d9-91af-4481-a769-6d3064caa829 r/w with ordered data mode. Quota mode: none. Mar 20 21:13:42.298009 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 20 21:13:42.299302 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 20 21:13:42.303521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:13:42.305708 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 20 21:13:42.306763 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 20 21:13:42.306805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 20 21:13:42.306828 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:13:42.321625 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 20 21:13:42.323846 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 20 21:13:42.329447 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (798) Mar 20 21:13:42.329490 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:13:42.330722 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:13:42.331550 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:13:42.333903 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:13:42.334949 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 20 21:13:42.375910 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Mar 20 21:13:42.380143 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Mar 20 21:13:42.384087 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Mar 20 21:13:42.387919 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Mar 20 21:13:42.453108 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 20 21:13:42.455140 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 20 21:13:42.456724 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 20 21:13:42.475987 kernel: BTRFS info (device vda6): last unmount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:13:42.489639 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 20 21:13:42.506886 ignition[913]: INFO : Ignition 2.20.0
Mar 20 21:13:42.506886 ignition[913]: INFO : Stage: mount
Mar 20 21:13:42.508497 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 21:13:42.508497 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:13:42.508497 ignition[913]: INFO : mount: mount passed
Mar 20 21:13:42.508497 ignition[913]: INFO : Ignition finished successfully
Mar 20 21:13:42.509633 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 20 21:13:42.512431 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 20 21:13:42.919641 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 20 21:13:42.921145 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 20 21:13:42.938759 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (924)
Mar 20 21:13:42.938791 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34
Mar 20 21:13:42.938802 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 20 21:13:42.940466 kernel: BTRFS info (device vda6): using free space tree
Mar 20 21:13:42.942893 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 21:13:42.943803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 20 21:13:42.963972 ignition[941]: INFO : Ignition 2.20.0
Mar 20 21:13:42.963972 ignition[941]: INFO : Stage: files
Mar 20 21:13:42.965538 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 21:13:42.965538 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:13:42.965538 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Mar 20 21:13:42.968983 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 20 21:13:42.968983 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 20 21:13:42.968983 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 20 21:13:42.968983 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 20 21:13:42.968983 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 20 21:13:42.968297 unknown[941]: wrote ssh authorized keys file for user: core
Mar 20 21:13:42.976353 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 20 21:13:42.976353 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Mar 20 21:13:43.038501 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 20 21:13:43.215457 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 20 21:13:43.217231 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 20 21:13:43.217231 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 20 21:13:43.217231 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 21:13:43.217231 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 21:13:43.217231 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 21:13:43.217231 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 21:13:43.217231 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 21:13:43.217231 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 21:13:43.230181 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 21:13:43.230181 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 21:13:43.230181 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 20 21:13:43.230181 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 20 21:13:43.230181 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 20 21:13:43.230181 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Mar 20 21:13:43.513090 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 20 21:13:43.847109 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 20 21:13:43.847109 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 20 21:13:43.850682 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 21:13:43.850682 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 21:13:43.850682 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 20 21:13:43.850682 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 20 21:13:43.850682 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 21:13:43.850682 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 21:13:43.850682 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 20 21:13:43.850682 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 20 21:13:43.864667 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 21:13:43.868062 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 21:13:43.870417 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 20 21:13:43.870417 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 20 21:13:43.870417 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 20 21:13:43.870417 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 21:13:43.870417 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 21:13:43.870417 ignition[941]: INFO : files: files passed
Mar 20 21:13:43.870417 ignition[941]: INFO : Ignition finished successfully
Mar 20 21:13:43.871050 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 20 21:13:43.873749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 20 21:13:43.875805 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 20 21:13:43.888222 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 20 21:13:43.889361 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 20 21:13:43.890414 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 20 21:13:43.893246 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:13:43.893246 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:13:43.896861 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:13:43.895678 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 20 21:13:43.898290 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 20 21:13:43.902000 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 20 21:13:43.939628 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 20 21:13:43.939748 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 20 21:13:43.942190 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 20 21:13:43.944039 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 20 21:13:43.945907 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 20 21:13:43.946653 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 20 21:13:43.972003 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 20 21:13:43.974750 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 20 21:13:43.998080 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 20 21:13:43.999286 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 21:13:44.001291 systemd[1]: Stopped target timers.target - Timer Units.
Mar 20 21:13:44.003106 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 20 21:13:44.003222 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 21:13:44.005696 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 20 21:13:44.006816 systemd[1]: Stopped target basic.target - Basic System. Mar 20 21:13:44.008658 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 20 21:13:44.010560 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:13:44.012348 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 20 21:13:44.014208 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 20 21:13:44.016053 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:13:44.018071 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 20 21:13:44.019821 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 20 21:13:44.021743 systemd[1]: Stopped target swap.target - Swaps. Mar 20 21:13:44.023286 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 20 21:13:44.023430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:13:44.025694 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:13:44.027599 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:13:44.029509 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 20 21:13:44.032949 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:13:44.035357 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 20 21:13:44.035480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 20 21:13:44.038127 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 20 21:13:44.038254 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:13:44.040281 systemd[1]: Stopped target paths.target - Path Units. Mar 20 21:13:44.041909 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 20 21:13:44.046968 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:13:44.048272 systemd[1]: Stopped target slices.target - Slice Units. Mar 20 21:13:44.050368 systemd[1]: Stopped target sockets.target - Socket Units. Mar 20 21:13:44.051909 systemd[1]: iscsid.socket: Deactivated successfully. Mar 20 21:13:44.051997 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:13:44.053545 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 20 21:13:44.053630 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:13:44.055151 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 20 21:13:44.055262 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 21:13:44.057027 systemd[1]: ignition-files.service: Deactivated successfully. Mar 20 21:13:44.057132 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 20 21:13:44.059433 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 20 21:13:44.061976 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 20 21:13:44.063148 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 20 21:13:44.063279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 20 21:13:44.065073 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 20 21:13:44.065181 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:13:44.072062 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 20 21:13:44.072143 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 20 21:13:44.076990 systemd-networkd[755]: eth0: Gained IPv6LL Mar 20 21:13:44.080729 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 20 21:13:44.081846 ignition[997]: INFO : Ignition 2.20.0 Mar 20 21:13:44.081846 ignition[997]: INFO : Stage: umount Mar 20 21:13:44.081846 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:13:44.081846 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:13:44.086295 ignition[997]: INFO : umount: umount passed Mar 20 21:13:44.086295 ignition[997]: INFO : Ignition finished successfully Mar 20 21:13:44.084160 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 20 21:13:44.084250 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 20 21:13:44.085422 systemd[1]: Stopped target network.target - Network. Mar 20 21:13:44.087198 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 20 21:13:44.087260 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 20 21:13:44.088760 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 20 21:13:44.088805 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 20 21:13:44.090402 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 20 21:13:44.090446 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 20 21:13:44.092078 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 20 21:13:44.092119 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 20 21:13:44.093972 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 20 21:13:44.095710 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 20 21:13:44.101613 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 20 21:13:44.101716 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 20 21:13:44.104712 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 20 21:13:44.104969 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 20 21:13:44.105007 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:13:44.108564 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:13:44.114678 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 20 21:13:44.114810 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 20 21:13:44.118175 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 20 21:13:44.118329 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 20 21:13:44.118370 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:13:44.120965 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 20 21:13:44.122085 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 20 21:13:44.122145 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
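The umount-stage messages above ('no config dir at "/usr/lib/ignition/base.platform.d/qemu"') show the qemu provider in use; on that platform Ignition normally receives its config as a fw_cfg blob. A hedged example of how such a VM might be launched (all other flags omitted):

    qemu-system-aarch64 ... \
      -fw_cfg name=opt/com.coreos/config,file=./config.ign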
Mar 20 21:13:44.124099 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 21:13:44.124144 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:13:44.126781 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 20 21:13:44.126825 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 20 21:13:44.129037 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:13:44.132242 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 20 21:13:44.152064 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 20 21:13:44.152166 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 20 21:13:44.154153 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 20 21:13:44.154256 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:13:44.156067 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 20 21:13:44.156144 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 20 21:13:44.158654 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 20 21:13:44.158700 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 20 21:13:44.159802 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 20 21:13:44.159833 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:13:44.161620 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 20 21:13:44.161668 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 20 21:13:44.164368 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 20 21:13:44.164414 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 20 21:13:44.166959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 21:13:44.167005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:13:44.169758 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 20 21:13:44.169809 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 20 21:13:44.172354 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 20 21:13:44.173419 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 20 21:13:44.173476 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:13:44.176384 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 20 21:13:44.176457 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:13:44.178756 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 20 21:13:44.178801 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:13:44.180753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:13:44.180799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:13:44.193581 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 20 21:13:44.193676 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 20 21:13:44.195950 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 20 21:13:44.198381 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
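At this point every initrd-only unit has been stopped and the prepared root at /sysroot is ready to take over. The switch performed by initrd-switch-root.service is roughly equivalent to:

    systemctl --no-block switch-root /sysroot

after which systemd re-executes itself as PID 1 inside the new root, which is why the journal below reports receiving SIGTERM from PID 1 and then starting again.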
Mar 20 21:13:44.206773 systemd[1]: Switching root. Mar 20 21:13:44.233070 systemd-journald[236]: Journal stopped Mar 20 21:13:44.986848 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Mar 20 21:13:44.986928 kernel: SELinux: policy capability network_peer_controls=1 Mar 20 21:13:44.986944 kernel: SELinux: policy capability open_perms=1 Mar 20 21:13:44.986954 kernel: SELinux: policy capability extended_socket_class=1 Mar 20 21:13:44.986963 kernel: SELinux: policy capability always_check_network=0 Mar 20 21:13:44.986972 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 20 21:13:44.986982 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 20 21:13:44.986991 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 20 21:13:44.987000 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 20 21:13:44.987012 kernel: audit: type=1403 audit(1742505224.379:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 20 21:13:44.987024 systemd[1]: Successfully loaded SELinux policy in 34.049ms. Mar 20 21:13:44.987040 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.182ms. Mar 20 21:13:44.987052 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:13:44.987062 systemd[1]: Detected virtualization kvm. Mar 20 21:13:44.987072 systemd[1]: Detected architecture arm64. Mar 20 21:13:44.987103 systemd[1]: Detected first boot. Mar 20 21:13:44.987113 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:13:44.987123 zram_generator::config[1043]: No configuration found. Mar 20 21:13:44.987135 kernel: NET: Registered PF_VSOCK protocol family Mar 20 21:13:44.987146 systemd[1]: Populated /etc with preset unit settings. Mar 20 21:13:44.987157 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 20 21:13:44.987167 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 20 21:13:44.987177 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 20 21:13:44.987187 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 20 21:13:44.987198 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 20 21:13:44.987208 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 20 21:13:44.987218 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 20 21:13:44.987238 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 20 21:13:44.987249 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 20 21:13:44.987260 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 20 21:13:44.987271 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 20 21:13:44.987281 systemd[1]: Created slice user.slice - User and Session Slice. Mar 20 21:13:44.987292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:13:44.987302 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 20 21:13:44.987313 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 20 21:13:44.987324 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 20 21:13:44.987342 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 20 21:13:44.987353 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 21:13:44.987364 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 20 21:13:44.987374 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:13:44.987385 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 20 21:13:44.987395 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 20 21:13:44.987405 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 20 21:13:44.987417 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 20 21:13:44.987427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:13:44.987437 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:13:44.987447 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:13:44.987457 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:13:44.987468 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 20 21:13:44.987478 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 20 21:13:44.987493 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 20 21:13:44.987505 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:13:44.987515 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:13:44.987526 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:13:44.987549 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 20 21:13:44.987560 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 20 21:13:44.987570 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 20 21:13:44.987580 systemd[1]: Mounting media.mount - External Media Directory... Mar 20 21:13:44.987600 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 20 21:13:44.987610 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 20 21:13:44.987624 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 20 21:13:44.987634 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 20 21:13:44.987646 systemd[1]: Reached target machines.target - Containers. Mar 20 21:13:44.987656 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 20 21:13:44.987666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:13:44.987677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 21:13:44.987688 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 20 21:13:44.987698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 20 21:13:44.987708 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:13:44.987719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:13:44.987735 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 20 21:13:44.987745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:13:44.987756 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 20 21:13:44.987766 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 20 21:13:44.987777 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 20 21:13:44.987786 kernel: fuse: init (API version 7.39) Mar 20 21:13:44.987796 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 20 21:13:44.987806 systemd[1]: Stopped systemd-fsck-usr.service. Mar 20 21:13:44.987818 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:13:44.987828 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 21:13:44.987838 kernel: loop: module loaded Mar 20 21:13:44.987848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 21:13:44.987858 kernel: ACPI: bus type drm_connector registered Mar 20 21:13:44.987868 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 20 21:13:44.987885 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 20 21:13:44.987897 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 20 21:13:44.987907 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:13:44.987920 systemd[1]: verity-setup.service: Deactivated successfully. Mar 20 21:13:44.987930 systemd[1]: Stopped verity-setup.service. Mar 20 21:13:44.987960 systemd-journald[1116]: Collecting audit messages is disabled. Mar 20 21:13:44.987982 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 20 21:13:44.987995 systemd-journald[1116]: Journal started Mar 20 21:13:44.988016 systemd-journald[1116]: Runtime Journal (/run/log/journal/1669779a7d1f46a1aa932a2054672a62) is 5.9M, max 47.3M, 41.4M free. Mar 20 21:13:44.764385 systemd[1]: Queued start job for default target multi-user.target. Mar 20 21:13:44.781755 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 20 21:13:44.782129 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 20 21:13:44.990079 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 21:13:44.990702 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 20 21:13:44.991938 systemd[1]: Mounted media.mount - External Media Directory. Mar 20 21:13:44.992968 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 20 21:13:44.994112 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 20 21:13:44.995261 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 20 21:13:44.996514 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Mar 20 21:13:44.997946 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:13:44.999374 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 20 21:13:44.999545 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 20 21:13:45.000989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:13:45.001157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:13:45.002606 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:13:45.002766 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:13:45.004071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:13:45.004235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:13:45.005649 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 20 21:13:45.005803 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 20 21:13:45.007247 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:13:45.007426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:13:45.008744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 21:13:45.010114 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 20 21:13:45.011631 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 20 21:13:45.013149 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 20 21:13:45.025346 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 20 21:13:45.027686 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 20 21:13:45.029738 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 20 21:13:45.030929 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 20 21:13:45.030959 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:13:45.032772 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 20 21:13:45.040590 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 20 21:13:45.042602 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 20 21:13:45.043870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:13:45.045148 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 20 21:13:45.047107 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 20 21:13:45.048282 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:13:45.051981 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 20 21:13:45.053065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:13:45.053851 systemd-journald[1116]: Time spent on flushing to /var/log/journal/1669779a7d1f46a1aa932a2054672a62 is 20.479ms for 865 entries. 
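The modprobe@configfs, dm_mod, drm, efi_pstore, fuse and loop instances above are all systemd's modprobe@.service template, which is approximately the following (exact paths and extra directives vary by systemd build):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/usr/sbin/modprobe -abq %i

The leading '-' tolerates modules that are missing or built in, and each one-shot instance deactivates as soon as the load finishes, which is the "Deactivated successfully" / "Finished" pair logged for every module.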
Mar 20 21:13:45.053851 systemd-journald[1116]: System Journal (/var/log/journal/1669779a7d1f46a1aa932a2054672a62) is 8M, max 195.6M, 187.6M free. Mar 20 21:13:45.084158 systemd-journald[1116]: Received client request to flush runtime journal. Mar 20 21:13:45.084198 kernel: loop0: detected capacity change from 0 to 201592 Mar 20 21:13:45.054161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:13:45.057675 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 20 21:13:45.059993 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 20 21:13:45.065855 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:13:45.067325 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 20 21:13:45.068707 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 20 21:13:45.070260 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 20 21:13:45.079530 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 20 21:13:45.081064 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 20 21:13:45.083368 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 20 21:13:45.086060 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 20 21:13:45.092364 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 20 21:13:45.094000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:13:45.103004 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 20 21:13:45.109456 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Mar 20 21:13:45.109816 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Mar 20 21:13:45.110681 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 20 21:13:45.111581 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 20 21:13:45.113916 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 20 21:13:45.116994 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:13:45.119791 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 20 21:13:45.141932 kernel: loop1: detected capacity change from 0 to 103832 Mar 20 21:13:45.153399 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 20 21:13:45.155962 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:13:45.174155 kernel: loop2: detected capacity change from 0 to 126448 Mar 20 21:13:45.176216 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 20 21:13:45.176234 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 20 21:13:45.180933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 20 21:13:45.207904 kernel: loop3: detected capacity change from 0 to 201592 Mar 20 21:13:45.213897 kernel: loop4: detected capacity change from 0 to 103832 Mar 20 21:13:45.218928 kernel: loop5: detected capacity change from 0 to 126448 Mar 20 21:13:45.221674 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 20 21:13:45.222403 (sd-merge)[1191]: Merged extensions into '/usr'. Mar 20 21:13:45.225384 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Mar 20 21:13:45.225398 systemd[1]: Reloading... Mar 20 21:13:45.268905 zram_generator::config[1220]: No configuration found. Mar 20 21:13:45.336207 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 20 21:13:45.378149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:13:45.428202 systemd[1]: Reloading finished in 202 ms. Mar 20 21:13:45.448578 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 20 21:13:45.451905 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 20 21:13:45.464201 systemd[1]: Starting ensure-sysext.service... Mar 20 21:13:45.466077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:13:45.478309 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Mar 20 21:13:45.478323 systemd[1]: Reloading... Mar 20 21:13:45.484264 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 20 21:13:45.484764 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 20 21:13:45.485491 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 20 21:13:45.485790 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Mar 20 21:13:45.485932 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Mar 20 21:13:45.488442 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:13:45.488538 systemd-tmpfiles[1255]: Skipping /boot Mar 20 21:13:45.496992 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:13:45.497095 systemd-tmpfiles[1255]: Skipping /boot Mar 20 21:13:45.520902 zram_generator::config[1280]: No configuration found. Mar 20 21:13:45.610276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:13:45.660290 systemd[1]: Reloading finished in 181 ms. Mar 20 21:13:45.671520 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 20 21:13:45.678977 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:13:45.687761 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:13:45.690177 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 20 21:13:45.695676 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
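The (sd-merge) lines are systemd-sysext overlaying extension images onto /usr: the 'kubernetes' extension is exactly the /etc/extensions/kubernetes.raw link Ignition wrote earlier, while containerd-flatcar and docker-flatcar ship with the OS image. After boot the merge can be inspected with commands along these lines:

    systemd-sysext status      # which hierarchies are merged, and from which extensions
    ls -l /etc/extensions/     # kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
    kubectl version --client   # hypothetical check that the merged /usr now provides the kubernetes tools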
Mar 20 21:13:45.698830 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:13:45.703016 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:13:45.706432 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 20 21:13:45.723257 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 20 21:13:45.732384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:13:45.735452 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:13:45.738307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:13:45.742577 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:13:45.745197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:13:45.745332 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:13:45.750585 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Mar 20 21:13:45.751643 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 20 21:13:45.756432 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 20 21:13:45.759231 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 20 21:13:45.761119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:13:45.761304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:13:45.764405 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:13:45.764572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:13:45.772243 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:13:45.772427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:13:45.774184 augenrules[1351]: No rules Mar 20 21:13:45.774310 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 20 21:13:45.775996 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:13:45.778050 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:13:45.778259 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:13:45.779941 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 20 21:13:45.803104 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:13:45.804161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:13:45.806305 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:13:45.816057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:13:45.819015 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:13:45.821467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 20 21:13:45.824928 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:13:45.825048 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:13:45.827882 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:13:45.828944 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 20 21:13:45.830223 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 20 21:13:45.833571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:13:45.833740 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:13:45.836483 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:13:45.836638 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:13:45.836892 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1377) Mar 20 21:13:45.843111 systemd[1]: Finished ensure-sysext.service. Mar 20 21:13:45.859151 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 20 21:13:45.864737 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:13:45.865246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:13:45.867455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:13:45.867609 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:13:45.870037 augenrules[1382]: /sbin/augenrules: No change Mar 20 21:13:45.882189 augenrules[1421]: No rules Mar 20 21:13:45.883561 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:13:45.883859 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:13:45.885691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:13:45.885763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:13:45.889702 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 20 21:13:45.900450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:13:45.903393 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 20 21:13:45.926082 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 20 21:13:45.954436 systemd-networkd[1394]: lo: Link UP Mar 20 21:13:45.954444 systemd-networkd[1394]: lo: Gained carrier Mar 20 21:13:45.958563 systemd-networkd[1394]: Enumeration completed Mar 20 21:13:45.958686 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:13:45.962463 systemd-resolved[1324]: Positive Trust Anchors: Mar 20 21:13:45.965807 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:13:45.965840 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:13:45.965996 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:13:45.966000 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:13:45.966960 systemd-networkd[1394]: eth0: Link UP Mar 20 21:13:45.966970 systemd-networkd[1394]: eth0: Gained carrier Mar 20 21:13:45.966984 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:13:45.969073 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 20 21:13:45.972677 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 20 21:13:45.973661 systemd-resolved[1324]: Defaulting to hostname 'linux'. Mar 20 21:13:45.989527 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:13:45.990971 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:13:45.990978 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 20 21:13:45.991600 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection. Mar 20 21:13:46.430755 systemd-timesyncd[1427]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 20 21:13:46.430797 systemd-timesyncd[1427]: Initial clock synchronization to Thu 2025-03-20 21:13:46.430684 UTC. Mar 20 21:13:46.430977 systemd-resolved[1324]: Clock change detected. Flushing caches. Mar 20 21:13:46.433280 systemd[1]: Reached target network.target - Network. Mar 20 21:13:46.434302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:13:46.435554 systemd[1]: Reached target time-set.target - System Time Set. Mar 20 21:13:46.438079 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 20 21:13:46.442105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:13:46.459404 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 20 21:13:46.462182 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 20 21:13:46.482291 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:13:46.497524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:13:46.517531 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 20 21:13:46.519100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
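eth0 is matched by the stock /usr/lib/systemd/network/zz-default.network, which is essentially a catch-all DHCP policy. A simplified sketch of such a unit follows; the real file's match rules and option list differ:

    [Match]
    # assumption: the shipped file matches broadly; the exact rules are not shown in this log
    Name=*

    [Network]
    DHCP=yes

This is consistent with the DHCPv4 lease 10.0.0.50/16 from 10.0.0.1 and the NTP synchronization against 10.0.0.1:123 recorded just above.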
Mar 20 21:13:46.520270 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:13:46.521599 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 20 21:13:46.522874 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 20 21:13:46.524255 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 20 21:13:46.525449 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 20 21:13:46.526856 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 20 21:13:46.528174 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 20 21:13:46.528208 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:13:46.529106 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:13:46.530894 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 20 21:13:46.533315 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 20 21:13:46.536438 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 20 21:13:46.537850 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 20 21:13:46.539144 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 20 21:13:46.542158 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 20 21:13:46.543538 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 20 21:13:46.545687 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 20 21:13:46.547281 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 20 21:13:46.548400 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:13:46.549320 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:13:46.550283 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:13:46.550313 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:13:46.551192 systemd[1]: Starting containerd.service - containerd container runtime... Mar 20 21:13:46.553127 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 20 21:13:46.556086 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:13:46.555897 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 20 21:13:46.557863 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 20 21:13:46.562140 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 20 21:13:46.563123 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 20 21:13:46.565533 jq[1452]: false Mar 20 21:13:46.565632 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 20 21:13:46.567738 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 20 21:13:46.570253 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
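prepare-helm.service is the unit Ignition installed during the files stage. Its contents are not reproduced in this log, but the tar output further below (linux-arm64/LICENSE, linux-arm64/helm) suggests the usual pattern: a oneshot that fetches a helm release tarball and unpacks it into /opt/bin. A hypothetical sketch, with the URL and version as placeholders:

    [Unit]
    Description=Unpack helm to /opt/bin
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # hypothetical pinned release; the real unit's download source is not shown in the log
    ExecStart=/usr/bin/curl -fsSL -o /tmp/helm.tar.gz https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz
    ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /tmp/helm.tar.gz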
Mar 20 21:13:46.574542 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 20 21:13:46.578145 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 20 21:13:46.578644 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 20 21:13:46.580493 systemd[1]: Starting update-engine.service - Update Engine... Mar 20 21:13:46.583266 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 20 21:13:46.585237 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 20 21:13:46.589473 extend-filesystems[1453]: Found loop3 Mar 20 21:13:46.589473 extend-filesystems[1453]: Found loop4 Mar 20 21:13:46.592786 extend-filesystems[1453]: Found loop5 Mar 20 21:13:46.592786 extend-filesystems[1453]: Found vda Mar 20 21:13:46.592786 extend-filesystems[1453]: Found vda1 Mar 20 21:13:46.592786 extend-filesystems[1453]: Found vda2 Mar 20 21:13:46.592786 extend-filesystems[1453]: Found vda3 Mar 20 21:13:46.592786 extend-filesystems[1453]: Found usr Mar 20 21:13:46.592786 extend-filesystems[1453]: Found vda4 Mar 20 21:13:46.592786 extend-filesystems[1453]: Found vda6 Mar 20 21:13:46.592786 extend-filesystems[1453]: Found vda7 Mar 20 21:13:46.592786 extend-filesystems[1453]: Found vda9 Mar 20 21:13:46.592786 extend-filesystems[1453]: Checking size of /dev/vda9 Mar 20 21:13:46.591288 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 20 21:13:46.591916 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 20 21:13:46.592227 systemd[1]: motdgen.service: Deactivated successfully. Mar 20 21:13:46.596120 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 20 21:13:46.607636 dbus-daemon[1451]: [system] SELinux support is enabled Mar 20 21:13:46.613583 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 20 21:13:46.614879 extend-filesystems[1453]: Resized partition /dev/vda9 Mar 20 21:13:46.622270 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 20 21:13:46.622387 extend-filesystems[1482]: resize2fs 1.47.2 (1-Jan-2025) Mar 20 21:13:46.616620 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 20 21:13:46.623823 jq[1464]: true Mar 20 21:13:46.618438 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 20 21:13:46.620101 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 20 21:13:46.640217 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 20 21:13:46.640247 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 20 21:13:46.641691 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 20 21:13:46.641726 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 20 21:13:46.647078 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1363) Mar 20 21:13:46.647331 tar[1473]: linux-arm64/LICENSE Mar 20 21:13:46.655070 tar[1473]: linux-arm64/helm Mar 20 21:13:46.658835 jq[1484]: true Mar 20 21:13:46.670821 update_engine[1462]: I20250320 21:13:46.670548 1462 main.cc:92] Flatcar Update Engine starting Mar 20 21:13:46.674628 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 20 21:13:46.681299 update_engine[1462]: I20250320 21:13:46.676874 1462 update_check_scheduler.cc:74] Next update check in 8m53s Mar 20 21:13:46.678950 systemd[1]: Started update-engine.service - Update Engine. Mar 20 21:13:46.682435 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 20 21:13:46.688722 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Mar 20 21:13:46.690500 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 20 21:13:46.690500 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 20 21:13:46.690500 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 20 21:13:46.689713 systemd-logind[1460]: New seat seat0. Mar 20 21:13:46.701816 extend-filesystems[1453]: Resized filesystem in /dev/vda9 Mar 20 21:13:46.690101 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 20 21:13:46.691756 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 20 21:13:46.695359 systemd[1]: Started systemd-logind.service - User Login Management. Mar 20 21:13:46.728591 bash[1506]: Updated "/home/core/.ssh/authorized_keys" Mar 20 21:13:46.732112 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 20 21:13:46.733918 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
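The extend-filesystems step grows the root filesystem to fill its already-enlarged partition while it stays mounted: at 4 KiB per block, 553472 blocks before and 1864699 blocks after correspond to roughly 2.1 GiB and 7.1 GiB. The logged resize2fs run is equivalent to the manual on-line resize:

    resize2fs /dev/vda9    # on-line grow of the mounted ext4 root
    df -h /                # confirm the new ~7 GiB size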
Mar 20 21:13:46.751223 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 20 21:13:46.866723 containerd[1474]: time="2025-03-20T21:13:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 20 21:13:46.868948 containerd[1474]: time="2025-03-20T21:13:46.867716527Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 20 21:13:46.882420 containerd[1474]: time="2025-03-20T21:13:46.882345887Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="808.56µs" Mar 20 21:13:46.882420 containerd[1474]: time="2025-03-20T21:13:46.882399607Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 20 21:13:46.882498 containerd[1474]: time="2025-03-20T21:13:46.882425647Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 20 21:13:46.882610 containerd[1474]: time="2025-03-20T21:13:46.882586247Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 20 21:13:46.882658 containerd[1474]: time="2025-03-20T21:13:46.882612407Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 20 21:13:46.882658 containerd[1474]: time="2025-03-20T21:13:46.882640967Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:13:46.882749 containerd[1474]: time="2025-03-20T21:13:46.882692127Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:13:46.882749 containerd[1474]: time="2025-03-20T21:13:46.882707727Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:13:46.883129 containerd[1474]: time="2025-03-20T21:13:46.883029327Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:13:46.883404 containerd[1474]: time="2025-03-20T21:13:46.883378927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:13:46.883490 containerd[1474]: time="2025-03-20T21:13:46.883468607Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:13:46.883622 containerd[1474]: time="2025-03-20T21:13:46.883604087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 20 21:13:46.883820 containerd[1474]: time="2025-03-20T21:13:46.883799927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 20 21:13:46.884218 containerd[1474]: time="2025-03-20T21:13:46.884194407Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:13:46.884359 containerd[1474]: time="2025-03-20T21:13:46.884340927Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:13:46.884435 containerd[1474]: time="2025-03-20T21:13:46.884413687Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 20 21:13:46.885958 containerd[1474]: time="2025-03-20T21:13:46.885796567Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 20 21:13:46.886187 containerd[1474]: time="2025-03-20T21:13:46.886168527Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 20 21:13:46.886319 containerd[1474]: time="2025-03-20T21:13:46.886300807Z" level=info msg="metadata content store policy set" policy=shared Mar 20 21:13:46.889608 containerd[1474]: time="2025-03-20T21:13:46.889575967Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889696687Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889715807Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889729447Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889741607Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889752167Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889765647Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889778247Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889792127Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889804167Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889813927Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889836807Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889957727Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889978167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 21:13:46.891838 containerd[1474]: time="2025-03-20T21:13:46.889990047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 
21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890002207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890017447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890029687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890062807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890077127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890090047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890100167Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890111167Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890418527Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890433127Z" level=info msg="Start snapshots syncer" Mar 20 21:13:46.892132 containerd[1474]: time="2025-03-20T21:13:46.890454567Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 21:13:46.892308 containerd[1474]: time="2025-03-20T21:13:46.890678847Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 21:13:46.892308 containerd[1474]: time="2025-03-20T21:13:46.890730007Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.890801127Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.890904527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.890925367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.890937567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.890950087Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.890961967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.890973567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.890991647Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.891029047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 21:13:46.892405 containerd[1474]: 
time="2025-03-20T21:13:46.891062167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.891074727Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.891103887Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.891117087Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:13:46.892405 containerd[1474]: time="2025-03-20T21:13:46.891125367Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:13:46.892635 containerd[1474]: time="2025-03-20T21:13:46.891140327Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:13:46.892635 containerd[1474]: time="2025-03-20T21:13:46.891148847Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 21:13:46.892635 containerd[1474]: time="2025-03-20T21:13:46.891158607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 21:13:46.892635 containerd[1474]: time="2025-03-20T21:13:46.891168807Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 21:13:46.892635 containerd[1474]: time="2025-03-20T21:13:46.891244727Z" level=info msg="runtime interface created" Mar 20 21:13:46.892635 containerd[1474]: time="2025-03-20T21:13:46.891249767Z" level=info msg="created NRI interface" Mar 20 21:13:46.892635 containerd[1474]: time="2025-03-20T21:13:46.891257607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 20 21:13:46.892635 containerd[1474]: time="2025-03-20T21:13:46.891269567Z" level=info msg="Connect containerd service" Mar 20 21:13:46.892635 containerd[1474]: time="2025-03-20T21:13:46.891297007Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 21:13:46.893323 containerd[1474]: time="2025-03-20T21:13:46.893291527Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:13:47.024007 containerd[1474]: time="2025-03-20T21:13:47.023933887Z" level=info msg="Start subscribing containerd event" Mar 20 21:13:47.024007 containerd[1474]: time="2025-03-20T21:13:47.024014567Z" level=info msg="Start recovering state" Mar 20 21:13:47.024348 containerd[1474]: time="2025-03-20T21:13:47.023959207Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 21:13:47.024348 containerd[1474]: time="2025-03-20T21:13:47.024269647Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 20 21:13:47.024348 containerd[1474]: time="2025-03-20T21:13:47.024296727Z" level=info msg="Start event monitor" Mar 20 21:13:47.024348 containerd[1474]: time="2025-03-20T21:13:47.024320487Z" level=info msg="Start cni network conf syncer for default" Mar 20 21:13:47.024348 containerd[1474]: time="2025-03-20T21:13:47.024332207Z" level=info msg="Start streaming server" Mar 20 21:13:47.024348 containerd[1474]: time="2025-03-20T21:13:47.024350247Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 20 21:13:47.024471 containerd[1474]: time="2025-03-20T21:13:47.024361487Z" level=info msg="runtime interface starting up..." Mar 20 21:13:47.024471 containerd[1474]: time="2025-03-20T21:13:47.024368127Z" level=info msg="starting plugins..." Mar 20 21:13:47.024471 containerd[1474]: time="2025-03-20T21:13:47.024387807Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 21:13:47.027327 containerd[1474]: time="2025-03-20T21:13:47.024515527Z" level=info msg="containerd successfully booted in 0.158212s" Mar 20 21:13:47.024625 systemd[1]: Started containerd.service - containerd container runtime. Mar 20 21:13:47.065058 tar[1473]: linux-arm64/README.md Mar 20 21:13:47.082618 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 20 21:13:47.371269 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 21:13:47.389611 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 21:13:47.392240 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 21:13:47.408266 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 21:13:47.410094 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 21:13:47.412421 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 20 21:13:47.435571 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 21:13:47.438078 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 21:13:47.439949 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 20 21:13:47.441333 systemd[1]: Reached target getty.target - Login Prompts. Mar 20 21:13:47.584177 systemd-networkd[1394]: eth0: Gained IPv6LL Mar 20 21:13:47.586169 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 21:13:47.587767 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 21:13:47.589918 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 21:13:47.592153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:13:47.605350 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 20 21:13:47.617896 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 21:13:47.618094 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 20 21:13:47.621130 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 20 21:13:47.627889 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 21:13:48.113650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:13:48.115352 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 20 21:13:48.116922 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:13:48.120128 systemd[1]: Startup finished in 549ms (kernel) + 4.681s (initrd) + 3.336s (userspace) = 8.568s. Mar 20 21:13:48.506987 kubelet[1577]: E0320 21:13:48.506867 1577 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:13:48.509461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:13:48.509627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:13:48.509958 systemd[1]: kubelet.service: Consumed 771ms CPU time, 251.1M memory peak. Mar 20 21:13:53.050438 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 21:13:53.051539 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:35214.service - OpenSSH per-connection server daemon (10.0.0.1:35214). Mar 20 21:13:53.110586 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 35214 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:13:53.114120 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:13:53.122120 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 20 21:13:53.123272 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 20 21:13:53.128361 systemd-logind[1460]: New session 1 of user core. Mar 20 21:13:53.151231 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 21:13:53.154744 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 20 21:13:53.173872 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 21:13:53.175776 systemd-logind[1460]: New session c1 of user core. Mar 20 21:13:53.279953 systemd[1595]: Queued start job for default target default.target. Mar 20 21:13:53.290852 systemd[1595]: Created slice app.slice - User Application Slice. Mar 20 21:13:53.290881 systemd[1595]: Reached target paths.target - Paths. Mar 20 21:13:53.290915 systemd[1595]: Reached target timers.target - Timers. Mar 20 21:13:53.291991 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 21:13:53.299844 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 21:13:53.299899 systemd[1595]: Reached target sockets.target - Sockets. Mar 20 21:13:53.299932 systemd[1595]: Reached target basic.target - Basic System. Mar 20 21:13:53.299959 systemd[1595]: Reached target default.target - Main User Target. Mar 20 21:13:53.299982 systemd[1595]: Startup finished in 119ms. Mar 20 21:13:53.300115 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 21:13:53.301451 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 20 21:13:53.368212 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:35224.service - OpenSSH per-connection server daemon (10.0.0.1:35224). 
Mar 20 21:13:53.413187 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 35224 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:13:53.414357 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:13:53.418521 systemd-logind[1460]: New session 2 of user core. Mar 20 21:13:53.429238 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 20 21:13:53.479642 sshd[1608]: Connection closed by 10.0.0.1 port 35224 Mar 20 21:13:53.480083 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Mar 20 21:13:53.495000 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:35224.service: Deactivated successfully. Mar 20 21:13:53.496408 systemd[1]: session-2.scope: Deactivated successfully. Mar 20 21:13:53.497632 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Mar 20 21:13:53.498694 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:35226.service - OpenSSH per-connection server daemon (10.0.0.1:35226). Mar 20 21:13:53.499419 systemd-logind[1460]: Removed session 2. Mar 20 21:13:53.543018 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 35226 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:13:53.544102 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:13:53.547984 systemd-logind[1460]: New session 3 of user core. Mar 20 21:13:53.566244 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 20 21:13:53.613565 sshd[1616]: Connection closed by 10.0.0.1 port 35226 Mar 20 21:13:53.613895 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Mar 20 21:13:53.623992 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:35226.service: Deactivated successfully. Mar 20 21:13:53.626255 systemd[1]: session-3.scope: Deactivated successfully. Mar 20 21:13:53.627433 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Mar 20 21:13:53.628464 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:35234.service - OpenSSH per-connection server daemon (10.0.0.1:35234). Mar 20 21:13:53.629147 systemd-logind[1460]: Removed session 3. Mar 20 21:13:53.678855 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 35234 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:13:53.679872 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:13:53.683628 systemd-logind[1460]: New session 4 of user core. Mar 20 21:13:53.693162 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 20 21:13:53.742403 sshd[1624]: Connection closed by 10.0.0.1 port 35234 Mar 20 21:13:53.742697 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Mar 20 21:13:53.756940 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:35234.service: Deactivated successfully. Mar 20 21:13:53.758338 systemd[1]: session-4.scope: Deactivated successfully. Mar 20 21:13:53.760206 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Mar 20 21:13:53.760600 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:35236.service - OpenSSH per-connection server daemon (10.0.0.1:35236). Mar 20 21:13:53.761290 systemd-logind[1460]: Removed session 4. 
Mar 20 21:13:53.804378 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 35236 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:13:53.805410 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:13:53.809104 systemd-logind[1460]: New session 5 of user core. Mar 20 21:13:53.824169 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 20 21:13:53.883858 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 20 21:13:53.884133 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:13:53.897870 sudo[1633]: pam_unix(sudo:session): session closed for user root Mar 20 21:13:53.900994 sshd[1632]: Connection closed by 10.0.0.1 port 35236 Mar 20 21:13:53.901318 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Mar 20 21:13:53.914940 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:35236.service: Deactivated successfully. Mar 20 21:13:53.916404 systemd[1]: session-5.scope: Deactivated successfully. Mar 20 21:13:53.918185 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Mar 20 21:13:53.918731 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:35246.service - OpenSSH per-connection server daemon (10.0.0.1:35246). Mar 20 21:13:53.919818 systemd-logind[1460]: Removed session 5. Mar 20 21:13:53.977291 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 35246 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:13:53.978379 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:13:53.982203 systemd-logind[1460]: New session 6 of user core. Mar 20 21:13:53.993166 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 20 21:13:54.043306 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 20 21:13:54.043557 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:13:54.046417 sudo[1643]: pam_unix(sudo:session): session closed for user root Mar 20 21:13:54.050512 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 20 21:13:54.050971 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:13:54.058236 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:13:54.098637 augenrules[1665]: No rules Mar 20 21:13:54.099625 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:13:54.099814 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:13:54.100742 sudo[1642]: pam_unix(sudo:session): session closed for user root Mar 20 21:13:54.101705 sshd[1641]: Connection closed by 10.0.0.1 port 35246 Mar 20 21:13:54.102117 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Mar 20 21:13:54.114027 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:35246.service: Deactivated successfully. Mar 20 21:13:54.115568 systemd[1]: session-6.scope: Deactivated successfully. Mar 20 21:13:54.116833 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Mar 20 21:13:54.118147 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:35258.service - OpenSSH per-connection server daemon (10.0.0.1:35258). Mar 20 21:13:54.119009 systemd-logind[1460]: Removed session 6. 
Mar 20 21:13:54.162955 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 35258 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:13:54.164088 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:13:54.168105 systemd-logind[1460]: New session 7 of user core. Mar 20 21:13:54.181173 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 20 21:13:54.230805 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 20 21:13:54.231091 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:13:54.578122 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 20 21:13:54.596331 (dockerd)[1698]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 20 21:13:54.841206 dockerd[1698]: time="2025-03-20T21:13:54.841085807Z" level=info msg="Starting up" Mar 20 21:13:54.843142 dockerd[1698]: time="2025-03-20T21:13:54.843107327Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 20 21:13:54.946925 dockerd[1698]: time="2025-03-20T21:13:54.946766047Z" level=info msg="Loading containers: start." Mar 20 21:13:55.081072 kernel: Initializing XFRM netlink socket Mar 20 21:13:55.137854 systemd-networkd[1394]: docker0: Link UP Mar 20 21:13:55.211235 dockerd[1698]: time="2025-03-20T21:13:55.211177927Z" level=info msg="Loading containers: done." Mar 20 21:13:55.225610 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck422845160-merged.mount: Deactivated successfully. Mar 20 21:13:55.227188 dockerd[1698]: time="2025-03-20T21:13:55.226767887Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 20 21:13:55.227188 dockerd[1698]: time="2025-03-20T21:13:55.226844167Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 20 21:13:55.227188 dockerd[1698]: time="2025-03-20T21:13:55.227012647Z" level=info msg="Daemon has completed initialization" Mar 20 21:13:55.254118 dockerd[1698]: time="2025-03-20T21:13:55.254065327Z" level=info msg="API listen on /run/docker.sock" Mar 20 21:13:55.254205 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 20 21:13:55.954641 containerd[1474]: time="2025-03-20T21:13:55.954588327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 20 21:13:56.672278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264545363.mount: Deactivated successfully. 
Mar 20 21:13:58.202722 containerd[1474]: time="2025-03-20T21:13:58.202662367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:13:58.203580 containerd[1474]: time="2025-03-20T21:13:58.203315967Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231952" Mar 20 21:13:58.204277 containerd[1474]: time="2025-03-20T21:13:58.204238167Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:13:58.207145 containerd[1474]: time="2025-03-20T21:13:58.207084087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:13:58.208181 containerd[1474]: time="2025-03-20T21:13:58.208135527Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 2.25350236s" Mar 20 21:13:58.208224 containerd[1474]: time="2025-03-20T21:13:58.208186807Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\"" Mar 20 21:13:58.208832 containerd[1474]: time="2025-03-20T21:13:58.208811207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 20 21:13:58.760038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 20 21:13:58.761380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:13:58.874870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:13:58.878521 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:13:58.918785 kubelet[1965]: E0320 21:13:58.918736 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:13:58.922006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:13:58.922173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:13:58.922618 systemd[1]: kubelet.service: Consumed 141ms CPU time, 103.5M memory peak. 
Mar 20 21:13:59.960460 containerd[1474]: time="2025-03-20T21:13:59.960398767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:13:59.961483 containerd[1474]: time="2025-03-20T21:13:59.961425887Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530034" Mar 20 21:13:59.962472 containerd[1474]: time="2025-03-20T21:13:59.962435047Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:13:59.964766 containerd[1474]: time="2025-03-20T21:13:59.964734407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:13:59.965826 containerd[1474]: time="2025-03-20T21:13:59.965799167Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 1.7569354s" Mar 20 21:13:59.965885 containerd[1474]: time="2025-03-20T21:13:59.965831847Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\"" Mar 20 21:13:59.966250 containerd[1474]: time="2025-03-20T21:13:59.966220927Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 20 21:14:01.306629 containerd[1474]: time="2025-03-20T21:14:01.306567807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:01.307163 containerd[1474]: time="2025-03-20T21:14:01.307115527Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482563" Mar 20 21:14:01.307910 containerd[1474]: time="2025-03-20T21:14:01.307886087Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:01.310830 containerd[1474]: time="2025-03-20T21:14:01.310782527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:01.311496 containerd[1474]: time="2025-03-20T21:14:01.311359087Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 1.34503768s" Mar 20 21:14:01.311496 containerd[1474]: time="2025-03-20T21:14:01.311392007Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\"" Mar 20 21:14:01.311951 
containerd[1474]: time="2025-03-20T21:14:01.311924687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 20 21:14:02.572346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70791816.mount: Deactivated successfully. Mar 20 21:14:02.787284 containerd[1474]: time="2025-03-20T21:14:02.787228167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:02.788318 containerd[1474]: time="2025-03-20T21:14:02.788260407Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370097" Mar 20 21:14:02.788956 containerd[1474]: time="2025-03-20T21:14:02.788923247Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:02.791137 containerd[1474]: time="2025-03-20T21:14:02.791092247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:02.792115 containerd[1474]: time="2025-03-20T21:14:02.791990887Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.48003556s" Mar 20 21:14:02.792115 containerd[1474]: time="2025-03-20T21:14:02.792020727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\"" Mar 20 21:14:02.792581 containerd[1474]: time="2025-03-20T21:14:02.792535687Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 20 21:14:03.378471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104828588.mount: Deactivated successfully. 
Mar 20 21:14:04.586828 containerd[1474]: time="2025-03-20T21:14:04.586783047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:04.587856 containerd[1474]: time="2025-03-20T21:14:04.587234687Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Mar 20 21:14:04.588922 containerd[1474]: time="2025-03-20T21:14:04.588283967Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:04.590897 containerd[1474]: time="2025-03-20T21:14:04.590831687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:04.591955 containerd[1474]: time="2025-03-20T21:14:04.591904727Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.7993292s" Mar 20 21:14:04.591955 containerd[1474]: time="2025-03-20T21:14:04.591935687Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Mar 20 21:14:04.592697 containerd[1474]: time="2025-03-20T21:14:04.592503487Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 20 21:14:05.057799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3247707453.mount: Deactivated successfully. 
Mar 20 21:14:05.063495 containerd[1474]: time="2025-03-20T21:14:05.063448687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:14:05.064228 containerd[1474]: time="2025-03-20T21:14:05.064180807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 20 21:14:05.064827 containerd[1474]: time="2025-03-20T21:14:05.064795447Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:14:05.066762 containerd[1474]: time="2025-03-20T21:14:05.066729287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:14:05.067673 containerd[1474]: time="2025-03-20T21:14:05.067635927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.10716ms" Mar 20 21:14:05.067673 containerd[1474]: time="2025-03-20T21:14:05.067669127Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 20 21:14:05.068286 containerd[1474]: time="2025-03-20T21:14:05.068140407Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 20 21:14:05.691824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052370528.mount: Deactivated successfully. 
Mar 20 21:14:08.448063 containerd[1474]: time="2025-03-20T21:14:08.447996087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:08.448980 containerd[1474]: time="2025-03-20T21:14:08.448456247Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Mar 20 21:14:08.449580 containerd[1474]: time="2025-03-20T21:14:08.449528527Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:08.452322 containerd[1474]: time="2025-03-20T21:14:08.452294447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:08.453475 containerd[1474]: time="2025-03-20T21:14:08.453447367Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.3852698s" Mar 20 21:14:08.453518 containerd[1474]: time="2025-03-20T21:14:08.453481127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Mar 20 21:14:09.172579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 20 21:14:09.174280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:14:09.290739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:14:09.293850 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:14:09.327540 kubelet[2131]: E0320 21:14:09.327487 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:14:09.329997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:14:09.330182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:14:09.330463 systemd[1]: kubelet.service: Consumed 128ms CPU time, 102.7M memory peak. Mar 20 21:14:14.828729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:14:14.828958 systemd[1]: kubelet.service: Consumed 128ms CPU time, 102.7M memory peak. Mar 20 21:14:14.830875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:14:14.849790 systemd[1]: Reload requested from client PID 2147 ('systemctl') (unit session-7.scope)... Mar 20 21:14:14.849913 systemd[1]: Reloading... Mar 20 21:14:14.921065 zram_generator::config[2191]: No configuration found. Mar 20 21:14:15.045312 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:14:15.116347 systemd[1]: Reloading finished in 266 ms. 
Mar 20 21:14:15.168293 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 20 21:14:15.168362 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 20 21:14:15.168573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:14:15.168621 systemd[1]: kubelet.service: Consumed 83ms CPU time, 90.2M memory peak. Mar 20 21:14:15.170746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:14:15.281062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:14:15.284758 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:14:15.319214 kubelet[2237]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:14:15.319214 kubelet[2237]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 20 21:14:15.319214 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:14:15.320086 kubelet[2237]: I0320 21:14:15.319549 2237 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:14:16.972191 kubelet[2237]: I0320 21:14:16.971757 2237 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 20 21:14:16.972191 kubelet[2237]: I0320 21:14:16.971790 2237 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:14:16.972601 kubelet[2237]: I0320 21:14:16.972310 2237 server.go:954] "Client rotation is on, will bootstrap in background" Mar 20 21:14:17.026574 kubelet[2237]: I0320 21:14:17.026544 2237 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:14:17.028691 kubelet[2237]: E0320 21:14:17.028666 2237 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:14:17.034962 kubelet[2237]: I0320 21:14:17.034939 2237 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 21:14:17.040059 kubelet[2237]: I0320 21:14:17.039970 2237 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 21:14:17.042048 kubelet[2237]: I0320 21:14:17.041998 2237 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:14:17.042212 kubelet[2237]: I0320 21:14:17.042050 2237 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 21:14:17.042296 kubelet[2237]: I0320 21:14:17.042279 2237 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:14:17.042296 kubelet[2237]: I0320 21:14:17.042288 2237 container_manager_linux.go:304] "Creating device plugin manager" Mar 20 21:14:17.042492 kubelet[2237]: I0320 21:14:17.042466 2237 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:14:17.045393 kubelet[2237]: I0320 21:14:17.045367 2237 kubelet.go:446] "Attempting to sync node with API server" Mar 20 21:14:17.045393 kubelet[2237]: I0320 21:14:17.045388 2237 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:14:17.045449 kubelet[2237]: I0320 21:14:17.045411 2237 kubelet.go:352] "Adding apiserver pod source" Mar 20 21:14:17.045449 kubelet[2237]: I0320 21:14:17.045421 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:14:17.047231 kubelet[2237]: W0320 21:14:17.047136 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 20 21:14:17.047231 kubelet[2237]: E0320 21:14:17.047188 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:14:17.048440 kubelet[2237]: W0320 21:14:17.048348 2237 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 20 21:14:17.048440 kubelet[2237]: E0320 21:14:17.048407 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:14:17.049181 kubelet[2237]: I0320 21:14:17.048684 2237 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:14:17.050539 kubelet[2237]: I0320 21:14:17.049521 2237 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:14:17.050539 kubelet[2237]: W0320 21:14:17.049708 2237 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 20 21:14:17.051231 kubelet[2237]: I0320 21:14:17.051191 2237 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 20 21:14:17.051231 kubelet[2237]: I0320 21:14:17.051233 2237 server.go:1287] "Started kubelet" Mar 20 21:14:17.051864 kubelet[2237]: I0320 21:14:17.051837 2237 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:14:17.052203 kubelet[2237]: I0320 21:14:17.052169 2237 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:14:17.052454 kubelet[2237]: I0320 21:14:17.052434 2237 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:14:17.054587 kubelet[2237]: I0320 21:14:17.054264 2237 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:14:17.055649 kubelet[2237]: I0320 21:14:17.055613 2237 server.go:490] "Adding debug handlers to kubelet server" Mar 20 21:14:17.056589 kubelet[2237]: I0320 21:14:17.056566 2237 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 21:14:17.057696 kubelet[2237]: E0320 21:14:17.056870 2237 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:14:17.057696 kubelet[2237]: I0320 21:14:17.056904 2237 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 20 21:14:17.057696 kubelet[2237]: I0320 21:14:17.057078 2237 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 21:14:17.057696 kubelet[2237]: I0320 21:14:17.057129 2237 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:14:17.057696 kubelet[2237]: E0320 21:14:17.057069 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e9f4c91c9cd47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:14:17.051213127 +0000 UTC 
m=+1.763614921,LastTimestamp:2025-03-20 21:14:17.051213127 +0000 UTC m=+1.763614921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:14:17.057696 kubelet[2237]: W0320 21:14:17.057388 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 20 21:14:17.057696 kubelet[2237]: E0320 21:14:17.057425 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:14:17.058095 kubelet[2237]: E0320 21:14:17.057508 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" Mar 20 21:14:17.058095 kubelet[2237]: E0320 21:14:17.057856 2237 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:14:17.058095 kubelet[2237]: I0320 21:14:17.058011 2237 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:14:17.059309 kubelet[2237]: I0320 21:14:17.059291 2237 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:14:17.059309 kubelet[2237]: I0320 21:14:17.059308 2237 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:14:17.068099 kubelet[2237]: I0320 21:14:17.067999 2237 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:14:17.069182 kubelet[2237]: I0320 21:14:17.068919 2237 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 21:14:17.069182 kubelet[2237]: I0320 21:14:17.068943 2237 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 20 21:14:17.069182 kubelet[2237]: I0320 21:14:17.068974 2237 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 20 21:14:17.069182 kubelet[2237]: I0320 21:14:17.068981 2237 kubelet.go:2388] "Starting kubelet main sync loop" Mar 20 21:14:17.069182 kubelet[2237]: E0320 21:14:17.069018 2237 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:14:17.072297 kubelet[2237]: I0320 21:14:17.072263 2237 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 20 21:14:17.072297 kubelet[2237]: I0320 21:14:17.072282 2237 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 20 21:14:17.072297 kubelet[2237]: I0320 21:14:17.072300 2237 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:14:17.072541 kubelet[2237]: W0320 21:14:17.072487 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 20 21:14:17.072569 kubelet[2237]: E0320 21:14:17.072541 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:14:17.074214 kubelet[2237]: I0320 21:14:17.074188 2237 policy_none.go:49] "None policy: Start" Mar 20 21:14:17.074214 kubelet[2237]: I0320 21:14:17.074211 2237 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 20 21:14:17.074296 kubelet[2237]: I0320 21:14:17.074230 2237 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:14:17.079101 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 20 21:14:17.093663 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 21:14:17.096473 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 20 21:14:17.106748 kubelet[2237]: I0320 21:14:17.106709 2237 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:14:17.106910 kubelet[2237]: I0320 21:14:17.106887 2237 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 21:14:17.106949 kubelet[2237]: I0320 21:14:17.106904 2237 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:14:17.107153 kubelet[2237]: I0320 21:14:17.107136 2237 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:14:17.108268 kubelet[2237]: E0320 21:14:17.108243 2237 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 20 21:14:17.108334 kubelet[2237]: E0320 21:14:17.108288 2237 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 21:14:17.177035 systemd[1]: Created slice kubepods-burstable-pod19b8b87be66b79b451437e8a71c7897e.slice - libcontainer container kubepods-burstable-pod19b8b87be66b79b451437e8a71c7897e.slice. 
Mar 20 21:14:17.187912 kubelet[2237]: E0320 21:14:17.187703 2237 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:14:17.189164 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. Mar 20 21:14:17.190836 kubelet[2237]: E0320 21:14:17.190815 2237 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:14:17.192944 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. Mar 20 21:14:17.194190 kubelet[2237]: E0320 21:14:17.194169 2237 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:14:17.207949 kubelet[2237]: I0320 21:14:17.207932 2237 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 20 21:14:17.208374 kubelet[2237]: E0320 21:14:17.208344 2237 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Mar 20 21:14:17.259459 kubelet[2237]: I0320 21:14:17.258653 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:14:17.259459 kubelet[2237]: I0320 21:14:17.258702 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19b8b87be66b79b451437e8a71c7897e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19b8b87be66b79b451437e8a71c7897e\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:17.259459 kubelet[2237]: I0320 21:14:17.258729 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19b8b87be66b79b451437e8a71c7897e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19b8b87be66b79b451437e8a71c7897e\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:17.259459 kubelet[2237]: E0320 21:14:17.258734 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Mar 20 21:14:17.259459 kubelet[2237]: I0320 21:14:17.258751 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19b8b87be66b79b451437e8a71c7897e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19b8b87be66b79b451437e8a71c7897e\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:17.259606 kubelet[2237]: I0320 21:14:17.258768 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:17.259606 kubelet[2237]: I0320 21:14:17.258789 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:17.259606 kubelet[2237]: I0320 21:14:17.258805 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:17.259606 kubelet[2237]: I0320 21:14:17.258822 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:17.259606 kubelet[2237]: I0320 21:14:17.258836 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:17.410115 kubelet[2237]: I0320 21:14:17.410083 2237 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 20 21:14:17.410478 kubelet[2237]: E0320 21:14:17.410441 2237 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Mar 20 21:14:17.490965 containerd[1474]: time="2025-03-20T21:14:17.490908967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19b8b87be66b79b451437e8a71c7897e,Namespace:kube-system,Attempt:0,}" Mar 20 21:14:17.491936 containerd[1474]: time="2025-03-20T21:14:17.491909207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}" Mar 20 21:14:17.495169 containerd[1474]: time="2025-03-20T21:14:17.495142847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}" Mar 20 21:14:17.524574 containerd[1474]: time="2025-03-20T21:14:17.524335407Z" level=info msg="connecting to shim 288d2de43c9330f1465739efccdde0dc7ca810bb8c85e2869ba27118e41593d8" address="unix:///run/containerd/s/bc355d44408db3c074e7e36ab05f8bed69304e8dbcf8948d804d3118a3efffbd" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:17.525841 containerd[1474]: time="2025-03-20T21:14:17.525192487Z" level=info msg="connecting to shim 07e75c5959b952dbb5bd76d1343d7fa6cd1191bdfb79ea10c3ee124c6f0fc473" 
address="unix:///run/containerd/s/47696b1ba03795433cd27dfc3f3f417a8f92f19d06977cd84580ba1a8768d2b3" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:17.526391 containerd[1474]: time="2025-03-20T21:14:17.526363247Z" level=info msg="connecting to shim 9bfcbb4e712fde1e2c224532ebbcf6238a160d6716d45ae830765464d8547e38" address="unix:///run/containerd/s/4041f30696062f5ca45f87294f8f6deda11431d3ae03c5ab7436f5165d8d5abe" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:17.548203 systemd[1]: Started cri-containerd-288d2de43c9330f1465739efccdde0dc7ca810bb8c85e2869ba27118e41593d8.scope - libcontainer container 288d2de43c9330f1465739efccdde0dc7ca810bb8c85e2869ba27118e41593d8. Mar 20 21:14:17.551760 systemd[1]: Started cri-containerd-07e75c5959b952dbb5bd76d1343d7fa6cd1191bdfb79ea10c3ee124c6f0fc473.scope - libcontainer container 07e75c5959b952dbb5bd76d1343d7fa6cd1191bdfb79ea10c3ee124c6f0fc473. Mar 20 21:14:17.552855 systemd[1]: Started cri-containerd-9bfcbb4e712fde1e2c224532ebbcf6238a160d6716d45ae830765464d8547e38.scope - libcontainer container 9bfcbb4e712fde1e2c224532ebbcf6238a160d6716d45ae830765464d8547e38. Mar 20 21:14:17.584933 containerd[1474]: time="2025-03-20T21:14:17.583756607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19b8b87be66b79b451437e8a71c7897e,Namespace:kube-system,Attempt:0,} returns sandbox id \"288d2de43c9330f1465739efccdde0dc7ca810bb8c85e2869ba27118e41593d8\"" Mar 20 21:14:17.590399 containerd[1474]: time="2025-03-20T21:14:17.590340087Z" level=info msg="CreateContainer within sandbox \"288d2de43c9330f1465739efccdde0dc7ca810bb8c85e2869ba27118e41593d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 20 21:14:17.590399 containerd[1474]: time="2025-03-20T21:14:17.590365967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"07e75c5959b952dbb5bd76d1343d7fa6cd1191bdfb79ea10c3ee124c6f0fc473\"" Mar 20 21:14:17.591839 containerd[1474]: time="2025-03-20T21:14:17.591537727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bfcbb4e712fde1e2c224532ebbcf6238a160d6716d45ae830765464d8547e38\"" Mar 20 21:14:17.593973 containerd[1474]: time="2025-03-20T21:14:17.593934127Z" level=info msg="CreateContainer within sandbox \"07e75c5959b952dbb5bd76d1343d7fa6cd1191bdfb79ea10c3ee124c6f0fc473\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 20 21:14:17.594358 containerd[1474]: time="2025-03-20T21:14:17.594323447Z" level=info msg="CreateContainer within sandbox \"9bfcbb4e712fde1e2c224532ebbcf6238a160d6716d45ae830765464d8547e38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 20 21:14:17.598482 containerd[1474]: time="2025-03-20T21:14:17.598455007Z" level=info msg="Container f9d998d71f025e1c66a29cd0115cf127eb362cc04777ede1d49e36d07c7b2f2e: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:17.602060 containerd[1474]: time="2025-03-20T21:14:17.602001807Z" level=info msg="Container 4106396816568704758a3961a910825401d7f94f994e1018b78363199fae83b7: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:17.606343 containerd[1474]: time="2025-03-20T21:14:17.606296967Z" level=info msg="CreateContainer within sandbox \"288d2de43c9330f1465739efccdde0dc7ca810bb8c85e2869ba27118e41593d8\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9d998d71f025e1c66a29cd0115cf127eb362cc04777ede1d49e36d07c7b2f2e\"" Mar 20 21:14:17.606939 containerd[1474]: time="2025-03-20T21:14:17.606908207Z" level=info msg="Container e0abc508c184aac2df4b5b4576cb0ed06ded0b3b78ed0486b776adcf47d2e8b5: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:17.608065 containerd[1474]: time="2025-03-20T21:14:17.607034087Z" level=info msg="StartContainer for \"f9d998d71f025e1c66a29cd0115cf127eb362cc04777ede1d49e36d07c7b2f2e\"" Mar 20 21:14:17.608135 containerd[1474]: time="2025-03-20T21:14:17.608062447Z" level=info msg="connecting to shim f9d998d71f025e1c66a29cd0115cf127eb362cc04777ede1d49e36d07c7b2f2e" address="unix:///run/containerd/s/bc355d44408db3c074e7e36ab05f8bed69304e8dbcf8948d804d3118a3efffbd" protocol=ttrpc version=3 Mar 20 21:14:17.610964 containerd[1474]: time="2025-03-20T21:14:17.610884127Z" level=info msg="CreateContainer within sandbox \"07e75c5959b952dbb5bd76d1343d7fa6cd1191bdfb79ea10c3ee124c6f0fc473\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4106396816568704758a3961a910825401d7f94f994e1018b78363199fae83b7\"" Mar 20 21:14:17.612745 containerd[1474]: time="2025-03-20T21:14:17.611338647Z" level=info msg="StartContainer for \"4106396816568704758a3961a910825401d7f94f994e1018b78363199fae83b7\"" Mar 20 21:14:17.612745 containerd[1474]: time="2025-03-20T21:14:17.612330127Z" level=info msg="connecting to shim 4106396816568704758a3961a910825401d7f94f994e1018b78363199fae83b7" address="unix:///run/containerd/s/47696b1ba03795433cd27dfc3f3f417a8f92f19d06977cd84580ba1a8768d2b3" protocol=ttrpc version=3 Mar 20 21:14:17.615920 containerd[1474]: time="2025-03-20T21:14:17.615867527Z" level=info msg="CreateContainer within sandbox \"9bfcbb4e712fde1e2c224532ebbcf6238a160d6716d45ae830765464d8547e38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0abc508c184aac2df4b5b4576cb0ed06ded0b3b78ed0486b776adcf47d2e8b5\"" Mar 20 21:14:17.616271 containerd[1474]: time="2025-03-20T21:14:17.616241807Z" level=info msg="StartContainer for \"e0abc508c184aac2df4b5b4576cb0ed06ded0b3b78ed0486b776adcf47d2e8b5\"" Mar 20 21:14:17.617353 containerd[1474]: time="2025-03-20T21:14:17.617323847Z" level=info msg="connecting to shim e0abc508c184aac2df4b5b4576cb0ed06ded0b3b78ed0486b776adcf47d2e8b5" address="unix:///run/containerd/s/4041f30696062f5ca45f87294f8f6deda11431d3ae03c5ab7436f5165d8d5abe" protocol=ttrpc version=3 Mar 20 21:14:17.628184 systemd[1]: Started cri-containerd-f9d998d71f025e1c66a29cd0115cf127eb362cc04777ede1d49e36d07c7b2f2e.scope - libcontainer container f9d998d71f025e1c66a29cd0115cf127eb362cc04777ede1d49e36d07c7b2f2e. Mar 20 21:14:17.631518 systemd[1]: Started cri-containerd-4106396816568704758a3961a910825401d7f94f994e1018b78363199fae83b7.scope - libcontainer container 4106396816568704758a3961a910825401d7f94f994e1018b78363199fae83b7. Mar 20 21:14:17.632347 systemd[1]: Started cri-containerd-e0abc508c184aac2df4b5b4576cb0ed06ded0b3b78ed0486b776adcf47d2e8b5.scope - libcontainer container e0abc508c184aac2df4b5b4576cb0ed06ded0b3b78ed0486b776adcf47d2e8b5. 
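
Each "connecting to shim ... protocol=ttrpc version=3" entry above names a per-sandbox unix socket under /run/containerd/s/. A small diagnostic sketch follows, assuming only that the socket accepts plain unix-domain connections; the ttrpc task API exchange itself is not modelled.

    // shimcheck.go - diagnostic sketch (not part of containerd): check that the shim
    // socket behind one of the logged addresses exists and accepts a connection.
    package main

    import (
        "fmt"
        "net"
        "os"
        "strings"
        "time"
    )

    func main() {
        // Address copied from the log entry for the kube-apiserver sandbox shim.
        addr := "unix:///run/containerd/s/bc355d44408db3c074e7e36ab05f8bed69304e8dbcf8948d804d3118a3efffbd"
        path := strings.TrimPrefix(addr, "unix://")
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "shim socket %s not reachable: %v\n", path, err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("shim socket reachable:", path)
    }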
Mar 20 21:14:17.659496 kubelet[2237]: E0320 21:14:17.659439 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" Mar 20 21:14:17.692473 containerd[1474]: time="2025-03-20T21:14:17.688757047Z" level=info msg="StartContainer for \"4106396816568704758a3961a910825401d7f94f994e1018b78363199fae83b7\" returns successfully" Mar 20 21:14:17.692473 containerd[1474]: time="2025-03-20T21:14:17.688905647Z" level=info msg="StartContainer for \"e0abc508c184aac2df4b5b4576cb0ed06ded0b3b78ed0486b776adcf47d2e8b5\" returns successfully" Mar 20 21:14:17.692473 containerd[1474]: time="2025-03-20T21:14:17.691090447Z" level=info msg="StartContainer for \"f9d998d71f025e1c66a29cd0115cf127eb362cc04777ede1d49e36d07c7b2f2e\" returns successfully" Mar 20 21:14:17.812084 kubelet[2237]: I0320 21:14:17.811919 2237 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 20 21:14:17.812369 kubelet[2237]: E0320 21:14:17.812299 2237 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Mar 20 21:14:18.076978 kubelet[2237]: E0320 21:14:18.076721 2237 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:14:18.081111 kubelet[2237]: E0320 21:14:18.080949 2237 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:14:18.082683 kubelet[2237]: E0320 21:14:18.082662 2237 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:14:18.616243 kubelet[2237]: I0320 21:14:18.616209 2237 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 20 21:14:19.084439 kubelet[2237]: E0320 21:14:19.084402 2237 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:14:19.084762 kubelet[2237]: E0320 21:14:19.084502 2237 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:14:19.716767 kubelet[2237]: E0320 21:14:19.716691 2237 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 20 21:14:19.860547 kubelet[2237]: I0320 21:14:19.860504 2237 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 20 21:14:19.958492 kubelet[2237]: I0320 21:14:19.958421 2237 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:19.963418 kubelet[2237]: E0320 21:14:19.963386 2237 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:19.963418 kubelet[2237]: I0320 21:14:19.963411 2237 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:19.965438 kubelet[2237]: E0320 21:14:19.965407 
2237 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:19.965438 kubelet[2237]: I0320 21:14:19.965430 2237 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 20 21:14:19.966887 kubelet[2237]: E0320 21:14:19.966759 2237 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 20 21:14:20.047202 kubelet[2237]: I0320 21:14:20.047166 2237 apiserver.go:52] "Watching apiserver" Mar 20 21:14:20.057592 kubelet[2237]: I0320 21:14:20.057528 2237 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 21:14:21.738754 systemd[1]: Reload requested from client PID 2504 ('systemctl') (unit session-7.scope)... Mar 20 21:14:21.738771 systemd[1]: Reloading... Mar 20 21:14:21.811073 zram_generator::config[2551]: No configuration found. Mar 20 21:14:21.889396 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:14:21.974778 systemd[1]: Reloading finished in 235 ms. Mar 20 21:14:21.997110 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:14:22.017136 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:14:22.017378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:14:22.017436 systemd[1]: kubelet.service: Consumed 2.177s CPU time, 127.4M memory peak. Mar 20 21:14:22.019150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:14:22.147009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:14:22.151032 (kubelet)[2590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:14:22.186698 kubelet[2590]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:14:22.186698 kubelet[2590]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 20 21:14:22.186698 kubelet[2590]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
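
The docker.socket warning above is actionable: systemd rewrites the legacy /var/run/docker.sock path to /run/docker.sock at load time and asks for the unit file to be updated. An illustrative helper follows that only prints the corrected ListenStream= lines; the unit path is the one named in the log, and actually applying the change (for example as a drop-in) is left to the operator.

    // fix_listenstream.go - illustrative helper for the docker.socket warning:
    // report ListenStream= entries that still point below /var/run/ and the
    // /run/ replacement systemd already applies at load time.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/usr/lib/systemd/system/docker.socket") // path taken from the log message
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "ListenStream=/var/run/") {
                fixed := "ListenStream=/run/" + strings.TrimPrefix(line, "ListenStream=/var/run/")
                fmt.Printf("%s  ->  %s\n", line, fixed)
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }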
Mar 20 21:14:22.186698 kubelet[2590]: I0320 21:14:22.185698 2590 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:14:22.191230 kubelet[2590]: I0320 21:14:22.191195 2590 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 20 21:14:22.191230 kubelet[2590]: I0320 21:14:22.191220 2590 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:14:22.191453 kubelet[2590]: I0320 21:14:22.191437 2590 server.go:954] "Client rotation is on, will bootstrap in background" Mar 20 21:14:22.193492 kubelet[2590]: I0320 21:14:22.193363 2590 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 20 21:14:22.195668 kubelet[2590]: I0320 21:14:22.195602 2590 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:14:22.198999 kubelet[2590]: I0320 21:14:22.198947 2590 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 21:14:22.201514 kubelet[2590]: I0320 21:14:22.201494 2590 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 20 21:14:22.201720 kubelet[2590]: I0320 21:14:22.201696 2590 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:14:22.201874 kubelet[2590]: I0320 21:14:22.201722 2590 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 21:14:22.201948 kubelet[2590]: I0320 21:14:22.201883 2590 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:14:22.201948 kubelet[2590]: I0320 21:14:22.201891 2590 container_manager_linux.go:304] "Creating device plugin manager" Mar 20 21:14:22.201948 kubelet[2590]: I0320 21:14:22.201931 2590 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:14:22.202118 kubelet[2590]: I0320 
21:14:22.202106 2590 kubelet.go:446] "Attempting to sync node with API server" Mar 20 21:14:22.202154 kubelet[2590]: I0320 21:14:22.202126 2590 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:14:22.202154 kubelet[2590]: I0320 21:14:22.202148 2590 kubelet.go:352] "Adding apiserver pod source" Mar 20 21:14:22.202212 kubelet[2590]: I0320 21:14:22.202163 2590 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:14:22.203468 kubelet[2590]: I0320 21:14:22.202646 2590 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:14:22.203468 kubelet[2590]: I0320 21:14:22.203091 2590 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:14:22.204796 kubelet[2590]: I0320 21:14:22.204774 2590 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 20 21:14:22.205027 kubelet[2590]: I0320 21:14:22.205013 2590 server.go:1287] "Started kubelet" Mar 20 21:14:22.207914 kubelet[2590]: I0320 21:14:22.207673 2590 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:14:22.208513 kubelet[2590]: I0320 21:14:22.208463 2590 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:14:22.208694 kubelet[2590]: I0320 21:14:22.208672 2590 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:14:22.208896 kubelet[2590]: I0320 21:14:22.208867 2590 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:14:22.209925 kubelet[2590]: I0320 21:14:22.209532 2590 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 21:14:22.211126 kubelet[2590]: I0320 21:14:22.210729 2590 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 20 21:14:22.211126 kubelet[2590]: E0320 21:14:22.210833 2590 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:14:22.211331 kubelet[2590]: I0320 21:14:22.211307 2590 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 21:14:22.211435 kubelet[2590]: I0320 21:14:22.211421 2590 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:14:22.211510 kubelet[2590]: I0320 21:14:22.211497 2590 server.go:490] "Adding debug handlers to kubelet server" Mar 20 21:14:22.226693 kubelet[2590]: I0320 21:14:22.226530 2590 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:14:22.226693 kubelet[2590]: I0320 21:14:22.226553 2590 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:14:22.226693 kubelet[2590]: I0320 21:14:22.226624 2590 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:14:22.228165 kubelet[2590]: E0320 21:14:22.227794 2590 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:14:22.232613 kubelet[2590]: I0320 21:14:22.232434 2590 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:14:22.235674 kubelet[2590]: I0320 21:14:22.235514 2590 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 21:14:22.235674 kubelet[2590]: I0320 21:14:22.235676 2590 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 20 21:14:22.235779 kubelet[2590]: I0320 21:14:22.235695 2590 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 20 21:14:22.235779 kubelet[2590]: I0320 21:14:22.235701 2590 kubelet.go:2388] "Starting kubelet main sync loop" Mar 20 21:14:22.235928 kubelet[2590]: E0320 21:14:22.235904 2590 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:14:22.256926 kubelet[2590]: I0320 21:14:22.256844 2590 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 20 21:14:22.256926 kubelet[2590]: I0320 21:14:22.256867 2590 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 20 21:14:22.256926 kubelet[2590]: I0320 21:14:22.256887 2590 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:14:22.257058 kubelet[2590]: I0320 21:14:22.257012 2590 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 20 21:14:22.257058 kubelet[2590]: I0320 21:14:22.257023 2590 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 20 21:14:22.257058 kubelet[2590]: I0320 21:14:22.257057 2590 policy_none.go:49] "None policy: Start" Mar 20 21:14:22.257135 kubelet[2590]: I0320 21:14:22.257065 2590 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 20 21:14:22.257135 kubelet[2590]: I0320 21:14:22.257076 2590 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:14:22.257181 kubelet[2590]: I0320 21:14:22.257165 2590 state_mem.go:75] "Updated machine memory state" Mar 20 21:14:22.261325 kubelet[2590]: I0320 21:14:22.261305 2590 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:14:22.261790 kubelet[2590]: I0320 21:14:22.261580 2590 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 21:14:22.261790 kubelet[2590]: I0320 21:14:22.261606 2590 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:14:22.262420 kubelet[2590]: I0320 21:14:22.262394 2590 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:14:22.262725 kubelet[2590]: E0320 21:14:22.262705 2590 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 20 21:14:22.337057 kubelet[2590]: I0320 21:14:22.337022 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:22.337163 kubelet[2590]: I0320 21:14:22.337142 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 20 21:14:22.337836 kubelet[2590]: I0320 21:14:22.337450 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:22.363979 kubelet[2590]: I0320 21:14:22.363949 2590 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 20 21:14:22.369064 kubelet[2590]: I0320 21:14:22.369026 2590 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Mar 20 21:14:22.369064 kubelet[2590]: I0320 21:14:22.369134 2590 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 20 21:14:22.513056 kubelet[2590]: I0320 21:14:22.512913 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19b8b87be66b79b451437e8a71c7897e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19b8b87be66b79b451437e8a71c7897e\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:22.513056 kubelet[2590]: I0320 21:14:22.512949 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19b8b87be66b79b451437e8a71c7897e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19b8b87be66b79b451437e8a71c7897e\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:22.513056 kubelet[2590]: I0320 21:14:22.512968 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:22.513056 kubelet[2590]: I0320 21:14:22.512985 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:22.513056 kubelet[2590]: I0320 21:14:22.513001 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:14:22.513267 kubelet[2590]: I0320 21:14:22.513015 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19b8b87be66b79b451437e8a71c7897e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19b8b87be66b79b451437e8a71c7897e\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:22.513267 kubelet[2590]: I0320 21:14:22.513031 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:22.513267 kubelet[2590]: I0320 21:14:22.513063 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:22.513267 kubelet[2590]: I0320 21:14:22.513080 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:23.204991 kubelet[2590]: I0320 21:14:23.204953 2590 apiserver.go:52] "Watching apiserver" Mar 20 21:14:23.212614 kubelet[2590]: I0320 21:14:23.212569 2590 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 21:14:23.248393 kubelet[2590]: I0320 21:14:23.248355 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:23.248629 kubelet[2590]: I0320 21:14:23.248609 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:23.256961 kubelet[2590]: E0320 21:14:23.256815 2590 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 20 21:14:23.257905 kubelet[2590]: E0320 21:14:23.257203 2590 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 20 21:14:23.280055 kubelet[2590]: I0320 21:14:23.279981 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.279963171 podStartE2EDuration="1.279963171s" podCreationTimestamp="2025-03-20 21:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:14:23.279523593 +0000 UTC m=+1.125613652" watchObservedRunningTime="2025-03-20 21:14:23.279963171 +0000 UTC m=+1.126053270" Mar 20 21:14:23.280200 kubelet[2590]: I0320 21:14:23.280116 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.280111257 podStartE2EDuration="1.280111257s" podCreationTimestamp="2025-03-20 21:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:14:23.272775433 +0000 UTC m=+1.118865532" watchObservedRunningTime="2025-03-20 21:14:23.280111257 +0000 UTC m=+1.126201356" Mar 20 21:14:23.309557 kubelet[2590]: I0320 21:14:23.309446 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.309428194 podStartE2EDuration="1.309428194s" podCreationTimestamp="2025-03-20 21:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:14:23.287955423 +0000 UTC m=+1.134045522" watchObservedRunningTime="2025-03-20 21:14:23.309428194 +0000 UTC m=+1.155518293" Mar 20 21:14:26.795757 sudo[1677]: pam_unix(sudo:session): session closed for user root Mar 20 21:14:26.797173 sshd[1676]: Connection closed by 10.0.0.1 port 35258 Mar 20 21:14:26.797779 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Mar 20 21:14:26.801133 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:35258.service: Deactivated successfully. Mar 20 21:14:26.803656 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 21:14:26.803841 systemd[1]: session-7.scope: Consumed 8.445s CPU time, 228.4M memory peak. Mar 20 21:14:26.804708 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Mar 20 21:14:26.805484 systemd-logind[1460]: Removed session 7. Mar 20 21:14:28.394368 kubelet[2590]: I0320 21:14:28.394336 2590 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 21:14:28.395136 containerd[1474]: time="2025-03-20T21:14:28.395090587Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 20 21:14:28.395440 kubelet[2590]: I0320 21:14:28.395293 2590 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 21:14:29.100444 systemd[1]: Created slice kubepods-besteffort-pod040fcb21_c683_475a_8784_c4de2723a2af.slice - libcontainer container kubepods-besteffort-pod040fcb21_c683_475a_8784_c4de2723a2af.slice. Mar 20 21:14:29.158725 kubelet[2590]: I0320 21:14:29.158676 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/040fcb21-c683-475a-8784-c4de2723a2af-xtables-lock\") pod \"kube-proxy-fx85q\" (UID: \"040fcb21-c683-475a-8784-c4de2723a2af\") " pod="kube-system/kube-proxy-fx85q" Mar 20 21:14:29.158725 kubelet[2590]: I0320 21:14:29.158723 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/040fcb21-c683-475a-8784-c4de2723a2af-lib-modules\") pod \"kube-proxy-fx85q\" (UID: \"040fcb21-c683-475a-8784-c4de2723a2af\") " pod="kube-system/kube-proxy-fx85q" Mar 20 21:14:29.158881 kubelet[2590]: I0320 21:14:29.158745 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/040fcb21-c683-475a-8784-c4de2723a2af-kube-proxy\") pod \"kube-proxy-fx85q\" (UID: \"040fcb21-c683-475a-8784-c4de2723a2af\") " pod="kube-system/kube-proxy-fx85q" Mar 20 21:14:29.158881 kubelet[2590]: I0320 21:14:29.158762 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pbv9\" (UniqueName: \"kubernetes.io/projected/040fcb21-c683-475a-8784-c4de2723a2af-kube-api-access-5pbv9\") pod \"kube-proxy-fx85q\" (UID: \"040fcb21-c683-475a-8784-c4de2723a2af\") " pod="kube-system/kube-proxy-fx85q" Mar 20 21:14:29.271093 kubelet[2590]: E0320 21:14:29.271026 2590 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 20 21:14:29.271093 kubelet[2590]: E0320 21:14:29.271086 2590 projected.go:194] Error preparing data for projected volume kube-api-access-5pbv9 for pod kube-system/kube-proxy-fx85q: configmap 
"kube-root-ca.crt" not found Mar 20 21:14:29.271238 kubelet[2590]: E0320 21:14:29.271143 2590 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/040fcb21-c683-475a-8784-c4de2723a2af-kube-api-access-5pbv9 podName:040fcb21-c683-475a-8784-c4de2723a2af nodeName:}" failed. No retries permitted until 2025-03-20 21:14:29.771124883 +0000 UTC m=+7.617214982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5pbv9" (UniqueName: "kubernetes.io/projected/040fcb21-c683-475a-8784-c4de2723a2af-kube-api-access-5pbv9") pod "kube-proxy-fx85q" (UID: "040fcb21-c683-475a-8784-c4de2723a2af") : configmap "kube-root-ca.crt" not found Mar 20 21:14:29.450281 systemd[1]: Created slice kubepods-besteffort-pod4c61a616_59b1_44b5_8048_9faf10dd9d3c.slice - libcontainer container kubepods-besteffort-pod4c61a616_59b1_44b5_8048_9faf10dd9d3c.slice. Mar 20 21:14:29.460498 kubelet[2590]: I0320 21:14:29.460462 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wvj8\" (UniqueName: \"kubernetes.io/projected/4c61a616-59b1-44b5-8048-9faf10dd9d3c-kube-api-access-8wvj8\") pod \"tigera-operator-ccfc44587-zjx9t\" (UID: \"4c61a616-59b1-44b5-8048-9faf10dd9d3c\") " pod="tigera-operator/tigera-operator-ccfc44587-zjx9t" Mar 20 21:14:29.460498 kubelet[2590]: I0320 21:14:29.460500 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4c61a616-59b1-44b5-8048-9faf10dd9d3c-var-lib-calico\") pod \"tigera-operator-ccfc44587-zjx9t\" (UID: \"4c61a616-59b1-44b5-8048-9faf10dd9d3c\") " pod="tigera-operator/tigera-operator-ccfc44587-zjx9t" Mar 20 21:14:29.754472 containerd[1474]: time="2025-03-20T21:14:29.754368978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-ccfc44587-zjx9t,Uid:4c61a616-59b1-44b5-8048-9faf10dd9d3c,Namespace:tigera-operator,Attempt:0,}" Mar 20 21:14:29.768767 containerd[1474]: time="2025-03-20T21:14:29.768681781Z" level=info msg="connecting to shim 9f499202c3ae19539617d91397ac2d66a3e5913d0f35f3f9845c69cbef13f42a" address="unix:///run/containerd/s/7e802170829190e51143de997697f9b2d45e6392b69a0b15c3dfa56e6fa3402b" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:29.790208 systemd[1]: Started cri-containerd-9f499202c3ae19539617d91397ac2d66a3e5913d0f35f3f9845c69cbef13f42a.scope - libcontainer container 9f499202c3ae19539617d91397ac2d66a3e5913d0f35f3f9845c69cbef13f42a. 
Mar 20 21:14:29.819392 containerd[1474]: time="2025-03-20T21:14:29.819345408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-ccfc44587-zjx9t,Uid:4c61a616-59b1-44b5-8048-9faf10dd9d3c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9f499202c3ae19539617d91397ac2d66a3e5913d0f35f3f9845c69cbef13f42a\"" Mar 20 21:14:29.821084 containerd[1474]: time="2025-03-20T21:14:29.821007615Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 20 21:14:30.007827 containerd[1474]: time="2025-03-20T21:14:30.007717987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fx85q,Uid:040fcb21-c683-475a-8784-c4de2723a2af,Namespace:kube-system,Attempt:0,}" Mar 20 21:14:30.023596 containerd[1474]: time="2025-03-20T21:14:30.023524245Z" level=info msg="connecting to shim baa877a5a8117e8cf00d6e6b59df980c181ca3a013120433d218d5024175ea6b" address="unix:///run/containerd/s/0dac994cdb114bd95f782279be4029e5b72d24eef327fbebcf4a4794a49f0032" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:30.046205 systemd[1]: Started cri-containerd-baa877a5a8117e8cf00d6e6b59df980c181ca3a013120433d218d5024175ea6b.scope - libcontainer container baa877a5a8117e8cf00d6e6b59df980c181ca3a013120433d218d5024175ea6b. Mar 20 21:14:30.066147 containerd[1474]: time="2025-03-20T21:14:30.066108729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fx85q,Uid:040fcb21-c683-475a-8784-c4de2723a2af,Namespace:kube-system,Attempt:0,} returns sandbox id \"baa877a5a8117e8cf00d6e6b59df980c181ca3a013120433d218d5024175ea6b\"" Mar 20 21:14:30.068375 containerd[1474]: time="2025-03-20T21:14:30.068346028Z" level=info msg="CreateContainer within sandbox \"baa877a5a8117e8cf00d6e6b59df980c181ca3a013120433d218d5024175ea6b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 20 21:14:30.082483 containerd[1474]: time="2025-03-20T21:14:30.081426054Z" level=info msg="Container dc9995a67738a6d08fcb7b8f3ab95faa01f2396b71fa6c3b518635c6d8b07e15: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:30.087406 containerd[1474]: time="2025-03-20T21:14:30.087369771Z" level=info msg="CreateContainer within sandbox \"baa877a5a8117e8cf00d6e6b59df980c181ca3a013120433d218d5024175ea6b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc9995a67738a6d08fcb7b8f3ab95faa01f2396b71fa6c3b518635c6d8b07e15\"" Mar 20 21:14:30.088412 containerd[1474]: time="2025-03-20T21:14:30.088242274Z" level=info msg="StartContainer for \"dc9995a67738a6d08fcb7b8f3ab95faa01f2396b71fa6c3b518635c6d8b07e15\"" Mar 20 21:14:30.089652 containerd[1474]: time="2025-03-20T21:14:30.089608190Z" level=info msg="connecting to shim dc9995a67738a6d08fcb7b8f3ab95faa01f2396b71fa6c3b518635c6d8b07e15" address="unix:///run/containerd/s/0dac994cdb114bd95f782279be4029e5b72d24eef327fbebcf4a4794a49f0032" protocol=ttrpc version=3 Mar 20 21:14:30.107197 systemd[1]: Started cri-containerd-dc9995a67738a6d08fcb7b8f3ab95faa01f2396b71fa6c3b518635c6d8b07e15.scope - libcontainer container dc9995a67738a6d08fcb7b8f3ab95faa01f2396b71fa6c3b518635c6d8b07e15. 
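
The kube-proxy entries above show the usual three-step CRI sequence: RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer. A simplified model of that flow follows, using an illustrative interface and made-up IDs rather than the real protobuf-generated CRI types.

    // cri_flow.go - simplified model (not the real CRI API) of the sandbox/container
    // sequence the containerd entries show for kube-proxy-fx85q.
    package main

    import "fmt"

    // Runtime captures just the calls that appear in this boot log.
    type Runtime interface {
        RunPodSandbox(name, namespace, uid string) (sandboxID string, err error)
        CreateContainer(sandboxID, containerName string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    type loggingRuntime struct{ next int }

    func (r *loggingRuntime) id() string { r.next++; return fmt.Sprintf("id-%d", r.next) }

    func (r *loggingRuntime) RunPodSandbox(name, ns, uid string) (string, error) {
        id := r.id()
        fmt.Printf("RunPodSandbox %s/%s (uid %s) -> sandbox %s\n", ns, name, uid, id)
        return id, nil
    }

    func (r *loggingRuntime) CreateContainer(sandboxID, containerName string) (string, error) {
        id := r.id()
        fmt.Printf("CreateContainer %s in sandbox %s -> container %s\n", containerName, sandboxID, id)
        return id, nil
    }

    func (r *loggingRuntime) StartContainer(containerID string) error {
        fmt.Printf("StartContainer %s returns successfully\n", containerID)
        return nil
    }

    func main() {
        var rt Runtime = &loggingRuntime{}
        sb, _ := rt.RunPodSandbox("kube-proxy-fx85q", "kube-system", "040fcb21-c683-475a-8784-c4de2723a2af")
        c, _ := rt.CreateContainer(sb, "kube-proxy")
        _ = rt.StartContainer(c)
    }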
Mar 20 21:14:30.138920 containerd[1474]: time="2025-03-20T21:14:30.138876171Z" level=info msg="StartContainer for \"dc9995a67738a6d08fcb7b8f3ab95faa01f2396b71fa6c3b518635c6d8b07e15\" returns successfully" Mar 20 21:14:30.272351 kubelet[2590]: I0320 21:14:30.272213 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fx85q" podStartSLOduration=1.272194452 podStartE2EDuration="1.272194452s" podCreationTimestamp="2025-03-20 21:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:14:30.272170532 +0000 UTC m=+8.118260631" watchObservedRunningTime="2025-03-20 21:14:30.272194452 +0000 UTC m=+8.118284551" Mar 20 21:14:31.156415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055097459.mount: Deactivated successfully. Mar 20 21:14:31.401146 containerd[1474]: time="2025-03-20T21:14:31.401094608Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:31.401805 containerd[1474]: time="2025-03-20T21:14:31.401702423Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115" Mar 20 21:14:31.402378 containerd[1474]: time="2025-03-20T21:14:31.402344959Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:31.404646 containerd[1474]: time="2025-03-20T21:14:31.404613135Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:31.405276 containerd[1474]: time="2025-03-20T21:14:31.405241311Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 1.584197975s" Mar 20 21:14:31.405321 containerd[1474]: time="2025-03-20T21:14:31.405277072Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\"" Mar 20 21:14:31.413640 containerd[1474]: time="2025-03-20T21:14:31.413553557Z" level=info msg="CreateContainer within sandbox \"9f499202c3ae19539617d91397ac2d66a3e5913d0f35f3f9845c69cbef13f42a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 20 21:14:31.418547 containerd[1474]: time="2025-03-20T21:14:31.418488359Z" level=info msg="Container f31b613bb0c4d9c31160840f533a809ac6c94faf9bc41261f3f0193e73a32272: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:31.424949 containerd[1474]: time="2025-03-20T21:14:31.424911118Z" level=info msg="CreateContainer within sandbox \"9f499202c3ae19539617d91397ac2d66a3e5913d0f35f3f9845c69cbef13f42a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f31b613bb0c4d9c31160840f533a809ac6c94faf9bc41261f3f0193e73a32272\"" Mar 20 21:14:31.425579 containerd[1474]: time="2025-03-20T21:14:31.425546974Z" level=info msg="StartContainer for \"f31b613bb0c4d9c31160840f533a809ac6c94faf9bc41261f3f0193e73a32272\"" Mar 20 21:14:31.426351 containerd[1474]: 
time="2025-03-20T21:14:31.426328713Z" level=info msg="connecting to shim f31b613bb0c4d9c31160840f533a809ac6c94faf9bc41261f3f0193e73a32272" address="unix:///run/containerd/s/7e802170829190e51143de997697f9b2d45e6392b69a0b15c3dfa56e6fa3402b" protocol=ttrpc version=3 Mar 20 21:14:31.476239 systemd[1]: Started cri-containerd-f31b613bb0c4d9c31160840f533a809ac6c94faf9bc41261f3f0193e73a32272.scope - libcontainer container f31b613bb0c4d9c31160840f533a809ac6c94faf9bc41261f3f0193e73a32272. Mar 20 21:14:31.499414 containerd[1474]: time="2025-03-20T21:14:31.499377642Z" level=info msg="StartContainer for \"f31b613bb0c4d9c31160840f533a809ac6c94faf9bc41261f3f0193e73a32272\" returns successfully" Mar 20 21:14:31.552158 update_engine[1462]: I20250320 21:14:31.552086 1462 update_attempter.cc:509] Updating boot flags... Mar 20 21:14:31.579101 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2980) Mar 20 21:14:31.632099 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2981) Mar 20 21:14:35.706423 kubelet[2590]: I0320 21:14:35.705974 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-ccfc44587-zjx9t" podStartSLOduration=5.117604934 podStartE2EDuration="6.705958693s" podCreationTimestamp="2025-03-20 21:14:29 +0000 UTC" firstStartedPulling="2025-03-20 21:14:29.820653205 +0000 UTC m=+7.666743264" lastFinishedPulling="2025-03-20 21:14:31.409006924 +0000 UTC m=+9.255097023" observedRunningTime="2025-03-20 21:14:32.279219963 +0000 UTC m=+10.125310062" watchObservedRunningTime="2025-03-20 21:14:35.705958693 +0000 UTC m=+13.552048792" Mar 20 21:14:35.715412 systemd[1]: Created slice kubepods-besteffort-pod47c689b1_93e5_4356_abff_889120b39427.slice - libcontainer container kubepods-besteffort-pod47c689b1_93e5_4356_abff_889120b39427.slice. Mar 20 21:14:35.808817 kubelet[2590]: I0320 21:14:35.808780 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bbqn\" (UniqueName: \"kubernetes.io/projected/47c689b1-93e5-4356-abff-889120b39427-kube-api-access-6bbqn\") pod \"calico-typha-5bf57668c8-cxmkc\" (UID: \"47c689b1-93e5-4356-abff-889120b39427\") " pod="calico-system/calico-typha-5bf57668c8-cxmkc" Mar 20 21:14:35.808817 kubelet[2590]: I0320 21:14:35.808826 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/47c689b1-93e5-4356-abff-889120b39427-typha-certs\") pod \"calico-typha-5bf57668c8-cxmkc\" (UID: \"47c689b1-93e5-4356-abff-889120b39427\") " pod="calico-system/calico-typha-5bf57668c8-cxmkc" Mar 20 21:14:35.809016 kubelet[2590]: I0320 21:14:35.808846 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c689b1-93e5-4356-abff-889120b39427-tigera-ca-bundle\") pod \"calico-typha-5bf57668c8-cxmkc\" (UID: \"47c689b1-93e5-4356-abff-889120b39427\") " pod="calico-system/calico-typha-5bf57668c8-cxmkc" Mar 20 21:14:35.904029 systemd[1]: Created slice kubepods-besteffort-podcebbf482_a8dd_459f_a959_e58da867deef.slice - libcontainer container kubepods-besteffort-podcebbf482_a8dd_459f_a959_e58da867deef.slice. 
Mar 20 21:14:35.909569 kubelet[2590]: I0320 21:14:35.909532 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cebbf482-a8dd-459f-a959-e58da867deef-var-run-calico\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909569 kubelet[2590]: I0320 21:14:35.909574 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cebbf482-a8dd-459f-a959-e58da867deef-xtables-lock\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909569 kubelet[2590]: I0320 21:14:35.909593 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebbf482-a8dd-459f-a959-e58da867deef-tigera-ca-bundle\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909569 kubelet[2590]: I0320 21:14:35.909608 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cebbf482-a8dd-459f-a959-e58da867deef-var-lib-calico\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909791 kubelet[2590]: I0320 21:14:35.909625 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cebbf482-a8dd-459f-a959-e58da867deef-cni-bin-dir\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909791 kubelet[2590]: I0320 21:14:35.909640 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cebbf482-a8dd-459f-a959-e58da867deef-cni-log-dir\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909791 kubelet[2590]: I0320 21:14:35.909664 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vds9c\" (UniqueName: \"kubernetes.io/projected/cebbf482-a8dd-459f-a959-e58da867deef-kube-api-access-vds9c\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909791 kubelet[2590]: I0320 21:14:35.909688 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cebbf482-a8dd-459f-a959-e58da867deef-cni-net-dir\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909791 kubelet[2590]: I0320 21:14:35.909706 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cebbf482-a8dd-459f-a959-e58da867deef-flexvol-driver-host\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909893 kubelet[2590]: I0320 21:14:35.909732 2590 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cebbf482-a8dd-459f-a959-e58da867deef-lib-modules\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909893 kubelet[2590]: I0320 21:14:35.909757 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cebbf482-a8dd-459f-a959-e58da867deef-node-certs\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:35.909893 kubelet[2590]: I0320 21:14:35.909782 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cebbf482-a8dd-459f-a959-e58da867deef-policysync\") pod \"calico-node-j8zj4\" (UID: \"cebbf482-a8dd-459f-a959-e58da867deef\") " pod="calico-system/calico-node-j8zj4" Mar 20 21:14:36.015150 kubelet[2590]: E0320 21:14:36.014121 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.015150 kubelet[2590]: W0320 21:14:36.014147 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.015150 kubelet[2590]: E0320 21:14:36.014166 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.019942 containerd[1474]: time="2025-03-20T21:14:36.019716116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bf57668c8-cxmkc,Uid:47c689b1-93e5-4356-abff-889120b39427,Namespace:calico-system,Attempt:0,}" Mar 20 21:14:36.022599 kubelet[2590]: E0320 21:14:36.022440 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.022599 kubelet[2590]: W0320 21:14:36.022460 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.022599 kubelet[2590]: E0320 21:14:36.022476 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.058930 containerd[1474]: time="2025-03-20T21:14:36.058889178Z" level=info msg="connecting to shim 70f353f9cc0f6faca23f842edd3dc4e05a068fedaeb7306f9f55c69607c0a5d3" address="unix:///run/containerd/s/1fbd1ffc52ac1fab01f90b28926691ec3175593f5bd3c6b77319f58888f4c680" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:36.080216 systemd[1]: Started cri-containerd-70f353f9cc0f6faca23f842edd3dc4e05a068fedaeb7306f9f55c69607c0a5d3.scope - libcontainer container 70f353f9cc0f6faca23f842edd3dc4e05a068fedaeb7306f9f55c69607c0a5d3. 
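
The FlexVolume errors above all come from one probe: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec the kubelet runs "<driver> init" and unmarshals the driver's JSON reply; the nodeagent~uds binary is missing on this host, so the output is empty and decoding fails with "unexpected end of JSON input", exactly as logged. A sketch of that contract follows; the reply fields use the conventional FlexVolume status format and are illustrative, not taken from this log.

    // flexprobe.go - sketch of the driver-call probe behind the repeated
    // nodeagent~uds errors: exec "<driver> init", then decode the JSON reply.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probe(driverPath string) error {
        out, err := exec.Command(driverPath, "init").CombinedOutput()
        if err != nil {
            // Mirrors the W-level "driver call failed: executable file not found" lines.
            fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
        }
        var st driverStatus
        if jerr := json.Unmarshal(out, &st); jerr != nil {
            // Empty output -> "unexpected end of JSON input", as in the E-level lines.
            return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", string(out), jerr)
        }
        fmt.Printf("driver initialized: %+v\n", st)
        return nil
    }

    func main() {
        if err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
            fmt.Println(err)
        }
    }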
Mar 20 21:14:36.096982 kubelet[2590]: E0320 21:14:36.096906 2590 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppnwc" podUID="26fc4c15-2d42-473b-98f6-70fe4b1ea3e3" Mar 20 21:14:36.102917 kubelet[2590]: E0320 21:14:36.100785 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.102917 kubelet[2590]: W0320 21:14:36.100818 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.102917 kubelet[2590]: E0320 21:14:36.100841 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.102917 kubelet[2590]: E0320 21:14:36.101148 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.109580 kubelet[2590]: W0320 21:14:36.101158 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.109580 kubelet[2590]: E0320 21:14:36.109581 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.109979 kubelet[2590]: E0320 21:14:36.109952 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.109979 kubelet[2590]: W0320 21:14:36.109967 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.110083 kubelet[2590]: E0320 21:14:36.109978 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.110307 kubelet[2590]: E0320 21:14:36.110285 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.110354 kubelet[2590]: W0320 21:14:36.110325 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.110354 kubelet[2590]: E0320 21:14:36.110338 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.110651 kubelet[2590]: E0320 21:14:36.110632 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.110732 kubelet[2590]: W0320 21:14:36.110646 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.110765 kubelet[2590]: E0320 21:14:36.110730 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.111015 kubelet[2590]: E0320 21:14:36.110999 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.111015 kubelet[2590]: W0320 21:14:36.111011 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.111107 kubelet[2590]: E0320 21:14:36.111022 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.111371 kubelet[2590]: E0320 21:14:36.111353 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.111371 kubelet[2590]: W0320 21:14:36.111366 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.111442 kubelet[2590]: E0320 21:14:36.111376 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.111577 kubelet[2590]: E0320 21:14:36.111563 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.111577 kubelet[2590]: W0320 21:14:36.111575 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.111629 kubelet[2590]: E0320 21:14:36.111584 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.111755 kubelet[2590]: E0320 21:14:36.111739 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.111755 kubelet[2590]: W0320 21:14:36.111749 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.111806 kubelet[2590]: E0320 21:14:36.111757 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.111892 kubelet[2590]: E0320 21:14:36.111877 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.111892 kubelet[2590]: W0320 21:14:36.111886 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.111944 kubelet[2590]: E0320 21:14:36.111893 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.112025 kubelet[2590]: E0320 21:14:36.112011 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.112025 kubelet[2590]: W0320 21:14:36.112020 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.112094 kubelet[2590]: E0320 21:14:36.112027 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.112191 kubelet[2590]: E0320 21:14:36.112174 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.112191 kubelet[2590]: W0320 21:14:36.112184 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.112191 kubelet[2590]: E0320 21:14:36.112191 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.112351 kubelet[2590]: E0320 21:14:36.112336 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.112351 kubelet[2590]: W0320 21:14:36.112346 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.112403 kubelet[2590]: E0320 21:14:36.112353 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.112492 kubelet[2590]: E0320 21:14:36.112475 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.112492 kubelet[2590]: W0320 21:14:36.112485 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.112492 kubelet[2590]: E0320 21:14:36.112492 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.112621 kubelet[2590]: E0320 21:14:36.112607 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.112621 kubelet[2590]: W0320 21:14:36.112615 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.112672 kubelet[2590]: E0320 21:14:36.112622 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.112759 kubelet[2590]: E0320 21:14:36.112746 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.112759 kubelet[2590]: W0320 21:14:36.112754 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.112809 kubelet[2590]: E0320 21:14:36.112761 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.112909 kubelet[2590]: E0320 21:14:36.112894 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.112909 kubelet[2590]: W0320 21:14:36.112903 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.112909 kubelet[2590]: E0320 21:14:36.112910 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.113037 kubelet[2590]: E0320 21:14:36.113022 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.113037 kubelet[2590]: W0320 21:14:36.113030 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.113091 kubelet[2590]: E0320 21:14:36.113037 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.113191 kubelet[2590]: E0320 21:14:36.113177 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.113191 kubelet[2590]: W0320 21:14:36.113185 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.113290 kubelet[2590]: E0320 21:14:36.113192 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.113327 kubelet[2590]: E0320 21:14:36.113311 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.113327 kubelet[2590]: W0320 21:14:36.113320 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.113327 kubelet[2590]: E0320 21:14:36.113326 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.113565 kubelet[2590]: E0320 21:14:36.113549 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.113565 kubelet[2590]: W0320 21:14:36.113560 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.113623 kubelet[2590]: E0320 21:14:36.113567 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.113623 kubelet[2590]: I0320 21:14:36.113590 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26fc4c15-2d42-473b-98f6-70fe4b1ea3e3-kubelet-dir\") pod \"csi-node-driver-ppnwc\" (UID: \"26fc4c15-2d42-473b-98f6-70fe4b1ea3e3\") " pod="calico-system/csi-node-driver-ppnwc" Mar 20 21:14:36.113768 kubelet[2590]: E0320 21:14:36.113725 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.113768 kubelet[2590]: W0320 21:14:36.113763 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.113820 kubelet[2590]: E0320 21:14:36.113776 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.113820 kubelet[2590]: I0320 21:14:36.113789 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/26fc4c15-2d42-473b-98f6-70fe4b1ea3e3-registration-dir\") pod \"csi-node-driver-ppnwc\" (UID: \"26fc4c15-2d42-473b-98f6-70fe4b1ea3e3\") " pod="calico-system/csi-node-driver-ppnwc" Mar 20 21:14:36.113936 kubelet[2590]: E0320 21:14:36.113920 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.113936 kubelet[2590]: W0320 21:14:36.113930 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.113988 kubelet[2590]: E0320 21:14:36.113942 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.113988 kubelet[2590]: I0320 21:14:36.113955 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/26fc4c15-2d42-473b-98f6-70fe4b1ea3e3-socket-dir\") pod \"csi-node-driver-ppnwc\" (UID: \"26fc4c15-2d42-473b-98f6-70fe4b1ea3e3\") " pod="calico-system/csi-node-driver-ppnwc" Mar 20 21:14:36.114153 kubelet[2590]: E0320 21:14:36.114136 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.114153 kubelet[2590]: W0320 21:14:36.114150 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.114215 kubelet[2590]: E0320 21:14:36.114162 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.114215 kubelet[2590]: I0320 21:14:36.114176 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfjp4\" (UniqueName: \"kubernetes.io/projected/26fc4c15-2d42-473b-98f6-70fe4b1ea3e3-kube-api-access-mfjp4\") pod \"csi-node-driver-ppnwc\" (UID: \"26fc4c15-2d42-473b-98f6-70fe4b1ea3e3\") " pod="calico-system/csi-node-driver-ppnwc" Mar 20 21:14:36.114344 kubelet[2590]: E0320 21:14:36.114328 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.114344 kubelet[2590]: W0320 21:14:36.114338 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.114391 kubelet[2590]: E0320 21:14:36.114352 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.114391 kubelet[2590]: I0320 21:14:36.114366 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/26fc4c15-2d42-473b-98f6-70fe4b1ea3e3-varrun\") pod \"csi-node-driver-ppnwc\" (UID: \"26fc4c15-2d42-473b-98f6-70fe4b1ea3e3\") " pod="calico-system/csi-node-driver-ppnwc" Mar 20 21:14:36.114526 kubelet[2590]: E0320 21:14:36.114510 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.114526 kubelet[2590]: W0320 21:14:36.114521 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.114576 kubelet[2590]: E0320 21:14:36.114533 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.114669 kubelet[2590]: E0320 21:14:36.114655 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.114669 kubelet[2590]: W0320 21:14:36.114664 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.114721 kubelet[2590]: E0320 21:14:36.114674 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.114813 kubelet[2590]: E0320 21:14:36.114800 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.114813 kubelet[2590]: W0320 21:14:36.114811 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.114863 kubelet[2590]: E0320 21:14:36.114822 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.114961 kubelet[2590]: E0320 21:14:36.114948 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.114961 kubelet[2590]: W0320 21:14:36.114956 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.115009 kubelet[2590]: E0320 21:14:36.114981 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.115114 kubelet[2590]: E0320 21:14:36.115101 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.115114 kubelet[2590]: W0320 21:14:36.115111 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.115171 kubelet[2590]: E0320 21:14:36.115128 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.115255 kubelet[2590]: E0320 21:14:36.115241 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.115255 kubelet[2590]: W0320 21:14:36.115249 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.115308 kubelet[2590]: E0320 21:14:36.115268 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.115385 kubelet[2590]: E0320 21:14:36.115372 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.115385 kubelet[2590]: W0320 21:14:36.115380 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.115441 kubelet[2590]: E0320 21:14:36.115391 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.116063 kubelet[2590]: E0320 21:14:36.115524 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.116063 kubelet[2590]: W0320 21:14:36.115533 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.116063 kubelet[2590]: E0320 21:14:36.115540 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.116063 kubelet[2590]: E0320 21:14:36.115672 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.116063 kubelet[2590]: W0320 21:14:36.115679 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.116063 kubelet[2590]: E0320 21:14:36.115685 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.116063 kubelet[2590]: E0320 21:14:36.115829 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.116063 kubelet[2590]: W0320 21:14:36.115835 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.116063 kubelet[2590]: E0320 21:14:36.115841 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.149418 containerd[1474]: time="2025-03-20T21:14:36.149372521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bf57668c8-cxmkc,Uid:47c689b1-93e5-4356-abff-889120b39427,Namespace:calico-system,Attempt:0,} returns sandbox id \"70f353f9cc0f6faca23f842edd3dc4e05a068fedaeb7306f9f55c69607c0a5d3\"" Mar 20 21:14:36.152281 containerd[1474]: time="2025-03-20T21:14:36.152144171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 20 21:14:36.207389 containerd[1474]: time="2025-03-20T21:14:36.207352361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j8zj4,Uid:cebbf482-a8dd-459f-a959-e58da867deef,Namespace:calico-system,Attempt:0,}" Mar 20 21:14:36.215710 kubelet[2590]: E0320 21:14:36.215674 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.215710 kubelet[2590]: W0320 21:14:36.215700 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.216020 kubelet[2590]: E0320 21:14:36.215726 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.216020 kubelet[2590]: E0320 21:14:36.215915 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.216020 kubelet[2590]: W0320 21:14:36.215924 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.216020 kubelet[2590]: E0320 21:14:36.215937 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.216405 kubelet[2590]: E0320 21:14:36.216295 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.216405 kubelet[2590]: W0320 21:14:36.216319 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.216405 kubelet[2590]: E0320 21:14:36.216339 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.216663 kubelet[2590]: E0320 21:14:36.216583 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.216663 kubelet[2590]: W0320 21:14:36.216594 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.216663 kubelet[2590]: E0320 21:14:36.216626 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.217012 kubelet[2590]: E0320 21:14:36.216885 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.217012 kubelet[2590]: W0320 21:14:36.216896 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.217012 kubelet[2590]: E0320 21:14:36.216913 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.217333 kubelet[2590]: E0320 21:14:36.217278 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.217333 kubelet[2590]: W0320 21:14:36.217291 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.217333 kubelet[2590]: E0320 21:14:36.217325 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.217682 kubelet[2590]: E0320 21:14:36.217590 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.217682 kubelet[2590]: W0320 21:14:36.217602 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.217682 kubelet[2590]: E0320 21:14:36.217633 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.217943 kubelet[2590]: E0320 21:14:36.217831 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.217943 kubelet[2590]: W0320 21:14:36.217843 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.217943 kubelet[2590]: E0320 21:14:36.217857 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.218128 kubelet[2590]: E0320 21:14:36.218103 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.218128 kubelet[2590]: W0320 21:14:36.218115 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.218301 kubelet[2590]: E0320 21:14:36.218206 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.218404 kubelet[2590]: E0320 21:14:36.218393 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.218520 kubelet[2590]: W0320 21:14:36.218462 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.218520 kubelet[2590]: E0320 21:14:36.218484 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.218762 kubelet[2590]: E0320 21:14:36.218747 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.218806 kubelet[2590]: W0320 21:14:36.218763 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.218806 kubelet[2590]: E0320 21:14:36.218779 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.219024 kubelet[2590]: E0320 21:14:36.219012 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.219024 kubelet[2590]: W0320 21:14:36.219024 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.219121 kubelet[2590]: E0320 21:14:36.219077 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.219193 kubelet[2590]: E0320 21:14:36.219183 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.219193 kubelet[2590]: W0320 21:14:36.219193 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.219294 kubelet[2590]: E0320 21:14:36.219262 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.219352 kubelet[2590]: E0320 21:14:36.219342 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.219352 kubelet[2590]: W0320 21:14:36.219352 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.219431 kubelet[2590]: E0320 21:14:36.219365 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.219531 kubelet[2590]: E0320 21:14:36.219520 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.219531 kubelet[2590]: W0320 21:14:36.219530 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.219597 kubelet[2590]: E0320 21:14:36.219542 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.219725 kubelet[2590]: E0320 21:14:36.219715 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.219725 kubelet[2590]: W0320 21:14:36.219725 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.219790 kubelet[2590]: E0320 21:14:36.219737 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.219865 kubelet[2590]: E0320 21:14:36.219855 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.219865 kubelet[2590]: W0320 21:14:36.219864 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.219913 kubelet[2590]: E0320 21:14:36.219876 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.220147 kubelet[2590]: E0320 21:14:36.220131 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.220202 kubelet[2590]: W0320 21:14:36.220148 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.220202 kubelet[2590]: E0320 21:14:36.220164 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.220626 kubelet[2590]: E0320 21:14:36.220516 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.220626 kubelet[2590]: W0320 21:14:36.220531 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.220626 kubelet[2590]: E0320 21:14:36.220549 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.220877 kubelet[2590]: E0320 21:14:36.220809 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.220877 kubelet[2590]: W0320 21:14:36.220821 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.220936 kubelet[2590]: E0320 21:14:36.220870 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.221247 kubelet[2590]: E0320 21:14:36.221233 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.221387 kubelet[2590]: W0320 21:14:36.221304 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.221387 kubelet[2590]: E0320 21:14:36.221326 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.221706 kubelet[2590]: E0320 21:14:36.221682 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.221845 kubelet[2590]: W0320 21:14:36.221758 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.221845 kubelet[2590]: E0320 21:14:36.221792 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.222369 kubelet[2590]: E0320 21:14:36.222354 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.222523 kubelet[2590]: W0320 21:14:36.222444 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.222523 kubelet[2590]: E0320 21:14:36.222481 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.223079 kubelet[2590]: E0320 21:14:36.222953 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.223079 kubelet[2590]: W0320 21:14:36.222967 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.223079 kubelet[2590]: E0320 21:14:36.222983 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:36.224108 kubelet[2590]: E0320 21:14:36.223914 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.224108 kubelet[2590]: W0320 21:14:36.223931 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.224108 kubelet[2590]: E0320 21:14:36.223943 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.226092 containerd[1474]: time="2025-03-20T21:14:36.226009895Z" level=info msg="connecting to shim f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede" address="unix:///run/containerd/s/8f765b2d92214d6645c94e7a09b387cfb657f3044ffdfea28bf05a4eb71bf9e7" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:36.231781 kubelet[2590]: E0320 21:14:36.231726 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:36.231781 kubelet[2590]: W0320 21:14:36.231765 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:36.231781 kubelet[2590]: E0320 21:14:36.231781 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:36.248219 systemd[1]: Started cri-containerd-f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede.scope - libcontainer container f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede. 
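[Editor's note] The same FlexVolume probe failure is logged dozens of times within a single second above, because every dynamic plugin probe re-walks the plugin directory. When skimming a journal dump like this one it can help to collapse the flood into per-message counts. A minimal sketch under stated assumptions: "journal.txt" is an assumed plain-text export of this log (e.g. saved from `journalctl -o short`), and the patterns are the source-file references that appear in the entries above.

```python
#!/usr/bin/env python3
"""Tally repeated kubelet FlexVolume failure messages in a journal dump."""
import re
from collections import Counter

PATTERNS = {
    "driver-call unmarshal failure": re.compile(r"driver-call\.go:262"),
    "driver-call exec failure": re.compile(r"driver-call\.go:149"),
    "plugin probe error": re.compile(r"plugins\.go:695"),
}

def tally(path: str = "journal.txt") -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            # findall, not match: this dump packs several entries per physical line.
            for name, pat in PATTERNS.items():
                counts[name] += len(pat.findall(line))
    return counts

if __name__ == "__main__":
    for name, n in tally().most_common():
        print(f"{n:6d}  {name}")
```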
Mar 20 21:14:36.273533 containerd[1474]: time="2025-03-20T21:14:36.273436146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j8zj4,Uid:cebbf482-a8dd-459f-a959-e58da867deef,Namespace:calico-system,Attempt:0,} returns sandbox id \"f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede\"" Mar 20 21:14:38.236884 kubelet[2590]: E0320 21:14:38.236838 2590 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppnwc" podUID="26fc4c15-2d42-473b-98f6-70fe4b1ea3e3" Mar 20 21:14:38.433574 containerd[1474]: time="2025-03-20T21:14:38.433525460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:38.434443 containerd[1474]: time="2025-03-20T21:14:38.434300472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957" Mar 20 21:14:38.435228 containerd[1474]: time="2025-03-20T21:14:38.435184326Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:38.436923 containerd[1474]: time="2025-03-20T21:14:38.436872433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:38.437619 containerd[1474]: time="2025-03-20T21:14:38.437560604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 2.285363473s" Mar 20 21:14:38.437619 containerd[1474]: time="2025-03-20T21:14:38.437590524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\"" Mar 20 21:14:38.438914 containerd[1474]: time="2025-03-20T21:14:38.438845624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 20 21:14:38.448545 containerd[1474]: time="2025-03-20T21:14:38.448467296Z" level=info msg="CreateContainer within sandbox \"70f353f9cc0f6faca23f842edd3dc4e05a068fedaeb7306f9f55c69607c0a5d3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 20 21:14:38.454364 containerd[1474]: time="2025-03-20T21:14:38.454309268Z" level=info msg="Container 10fca0570c9175e51eba45b5a9cb5e78b2b773cf9530aa7d4cf9752ea1f95b9e: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:38.460197 containerd[1474]: time="2025-03-20T21:14:38.460136400Z" level=info msg="CreateContainer within sandbox \"70f353f9cc0f6faca23f842edd3dc4e05a068fedaeb7306f9f55c69607c0a5d3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"10fca0570c9175e51eba45b5a9cb5e78b2b773cf9530aa7d4cf9752ea1f95b9e\"" Mar 20 21:14:38.460720 containerd[1474]: time="2025-03-20T21:14:38.460642968Z" level=info msg="StartContainer for \"10fca0570c9175e51eba45b5a9cb5e78b2b773cf9530aa7d4cf9752ea1f95b9e\"" Mar 20 21:14:38.462011 containerd[1474]: 
time="2025-03-20T21:14:38.461910388Z" level=info msg="connecting to shim 10fca0570c9175e51eba45b5a9cb5e78b2b773cf9530aa7d4cf9752ea1f95b9e" address="unix:///run/containerd/s/1fbd1ffc52ac1fab01f90b28926691ec3175593f5bd3c6b77319f58888f4c680" protocol=ttrpc version=3 Mar 20 21:14:38.488762 systemd[1]: Started cri-containerd-10fca0570c9175e51eba45b5a9cb5e78b2b773cf9530aa7d4cf9752ea1f95b9e.scope - libcontainer container 10fca0570c9175e51eba45b5a9cb5e78b2b773cf9530aa7d4cf9752ea1f95b9e. Mar 20 21:14:38.545832 containerd[1474]: time="2025-03-20T21:14:38.545793830Z" level=info msg="StartContainer for \"10fca0570c9175e51eba45b5a9cb5e78b2b773cf9530aa7d4cf9752ea1f95b9e\" returns successfully" Mar 20 21:14:39.303555 kubelet[2590]: I0320 21:14:39.303162 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bf57668c8-cxmkc" podStartSLOduration=2.016180848 podStartE2EDuration="4.303146188s" podCreationTimestamp="2025-03-20 21:14:35 +0000 UTC" firstStartedPulling="2025-03-20 21:14:36.151299355 +0000 UTC m=+13.997389454" lastFinishedPulling="2025-03-20 21:14:38.438264695 +0000 UTC m=+16.284354794" observedRunningTime="2025-03-20 21:14:39.302990906 +0000 UTC m=+17.149081045" watchObservedRunningTime="2025-03-20 21:14:39.303146188 +0000 UTC m=+17.149236287" Mar 20 21:14:39.335185 kubelet[2590]: E0320 21:14:39.335157 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:39.335185 kubelet[2590]: W0320 21:14:39.335179 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:39.335185 kubelet[2590]: E0320 21:14:39.335198 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:39.335389 kubelet[2590]: E0320 21:14:39.335358 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:39.335414 kubelet[2590]: W0320 21:14:39.335366 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:39.335414 kubelet[2590]: E0320 21:14:39.335408 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:39.335581 kubelet[2590]: E0320 21:14:39.335551 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:39.335581 kubelet[2590]: W0320 21:14:39.335559 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:39.335581 kubelet[2590]: E0320 21:14:39.335567 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:39.335728 kubelet[2590]: E0320 21:14:39.335704 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:39.335728 kubelet[2590]: W0320 21:14:39.335714 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:39.335800 kubelet[2590]: E0320 21:14:39.335729 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:39.335886 kubelet[2590]: E0320 21:14:39.335871 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:39.335958 kubelet[2590]: W0320 21:14:39.335887 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:39.335958 kubelet[2590]: E0320 21:14:39.335896 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:39.336062 kubelet[2590]: E0320 21:14:39.336035 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:39.336062 kubelet[2590]: W0320 21:14:39.336059 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:39.336129 kubelet[2590]: E0320 21:14:39.336067 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:39.336220 kubelet[2590]: E0320 21:14:39.336208 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:39.336220 kubelet[2590]: W0320 21:14:39.336219 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:39.336284 kubelet[2590]: E0320 21:14:39.336226 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:14:39.336386 kubelet[2590]: E0320 21:14:39.336371 2590 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:14:39.336386 kubelet[2590]: W0320 21:14:39.336381 2590 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:14:39.336443 kubelet[2590]: E0320 21:14:39.336389 2590 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:14:39.572375 containerd[1474]: time="2025-03-20T21:14:39.572253965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:39.574530 containerd[1474]: time="2025-03-20T21:14:39.574468117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 20 21:14:39.577469 containerd[1474]: time="2025-03-20T21:14:39.577380800Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:39.580661 containerd[1474]: time="2025-03-20T21:14:39.580619768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:39.581582 containerd[1474]: time="2025-03-20T21:14:39.581539582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.142659958s" Mar 20 21:14:39.581758 containerd[1474]: time="2025-03-20T21:14:39.581584702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 20 21:14:39.585508 containerd[1474]: time="2025-03-20T21:14:39.585358958Z" level=info msg="CreateContainer within sandbox \"f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 20 21:14:39.594463 containerd[1474]: time="2025-03-20T21:14:39.594134208Z" level=info msg="Container 952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:39.605386 containerd[1474]: time="2025-03-20T21:14:39.605335093Z" level=info msg="CreateContainer within sandbox \"f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4\"" Mar 20 21:14:39.606033 containerd[1474]: time="2025-03-20T21:14:39.606001103Z" level=info msg="StartContainer for \"952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4\"" Mar 20 21:14:39.608417 containerd[1474]: time="2025-03-20T21:14:39.608379018Z" level=info msg="connecting to shim 952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4" address="unix:///run/containerd/s/8f765b2d92214d6645c94e7a09b387cfb657f3044ffdfea28bf05a4eb71bf9e7" protocol=ttrpc version=3 Mar 20 21:14:39.628223 systemd[1]: Started cri-containerd-952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4.scope - libcontainer container 952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4. 
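The burst of "unexpected end of JSON input" messages above is kubelet probing its FlexVolume plugin directory: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init subcommand, the binary does not exist yet (the flexvol-driver container being started here is what is meant to install it), so stdout is empty and kubelet's JSON unmarshal fails. Below is a minimal sketch of what such a driver is expected to print, assuming the documented FlexVolume driver-call convention; the driver path and name come from the log, everything else is illustrative and not the actual nodeagent~uds implementation.

// Minimal sketch of a FlexVolume driver's "init" handler. kubelet execs the
// driver binary with a subcommand and parses stdout as JSON; empty stdout is
// exactly what produces "unexpected end of JSON input" in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Failure", Message: "missing subcommand"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// kubelet expects at least {"status":"Success"}; "attach":false tells it
		// to skip attach/detach calls for this driver.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
	default:
		// Subcommands this sketch does not implement.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:  "Not supported",
			Message: fmt.Sprintf("call %q not implemented", os.Args[1]),
		})
	}
}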
Mar 20 21:14:39.670924 containerd[1474]: time="2025-03-20T21:14:39.670872382Z" level=info msg="StartContainer for \"952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4\" returns successfully" Mar 20 21:14:39.685382 systemd[1]: cri-containerd-952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4.scope: Deactivated successfully. Mar 20 21:14:39.685762 systemd[1]: cri-containerd-952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4.scope: Consumed 36ms CPU time, 7.8M memory peak, 6.2M written to disk. Mar 20 21:14:39.711981 containerd[1474]: time="2025-03-20T21:14:39.711939869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4\" id:\"952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4\" pid:3259 exited_at:{seconds:1742505279 nanos:703323941}" Mar 20 21:14:39.715126 containerd[1474]: time="2025-03-20T21:14:39.715032434Z" level=info msg="received exit event container_id:\"952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4\" id:\"952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4\" pid:3259 exited_at:{seconds:1742505279 nanos:703323941}" Mar 20 21:14:39.743913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-952d23b097fb744fcae240fe0826342bb8dd188920e8348974ff2b8a7ba79dd4-rootfs.mount: Deactivated successfully. Mar 20 21:14:40.236434 kubelet[2590]: E0320 21:14:40.236377 2590 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppnwc" podUID="26fc4c15-2d42-473b-98f6-70fe4b1ea3e3" Mar 20 21:14:40.296924 kubelet[2590]: I0320 21:14:40.296889 2590 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:14:40.297077 containerd[1474]: time="2025-03-20T21:14:40.296888363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 20 21:14:42.236433 kubelet[2590]: E0320 21:14:42.236367 2590 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppnwc" podUID="26fc4c15-2d42-473b-98f6-70fe4b1ea3e3" Mar 20 21:14:43.320907 containerd[1474]: time="2025-03-20T21:14:43.320706527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:43.321764 containerd[1474]: time="2025-03-20T21:14:43.321555257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 20 21:14:43.322552 containerd[1474]: time="2025-03-20T21:14:43.322494147Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:43.324412 containerd[1474]: time="2025-03-20T21:14:43.324386169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:43.324970 containerd[1474]: time="2025-03-20T21:14:43.324943055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id 
\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 3.028018971s" Mar 20 21:14:43.325565 containerd[1474]: time="2025-03-20T21:14:43.324973576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 20 21:14:43.327781 containerd[1474]: time="2025-03-20T21:14:43.327751167Z" level=info msg="CreateContainer within sandbox \"f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 20 21:14:43.358083 containerd[1474]: time="2025-03-20T21:14:43.358020193Z" level=info msg="Container ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:43.366549 containerd[1474]: time="2025-03-20T21:14:43.366500490Z" level=info msg="CreateContainer within sandbox \"f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1\"" Mar 20 21:14:43.366947 containerd[1474]: time="2025-03-20T21:14:43.366911174Z" level=info msg="StartContainer for \"ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1\"" Mar 20 21:14:43.368397 containerd[1474]: time="2025-03-20T21:14:43.368355111Z" level=info msg="connecting to shim ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1" address="unix:///run/containerd/s/8f765b2d92214d6645c94e7a09b387cfb657f3044ffdfea28bf05a4eb71bf9e7" protocol=ttrpc version=3 Mar 20 21:14:43.386187 systemd[1]: Started cri-containerd-ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1.scope - libcontainer container ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1. Mar 20 21:14:43.454897 containerd[1474]: time="2025-03-20T21:14:43.454772977Z" level=info msg="StartContainer for \"ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1\" returns successfully" Mar 20 21:14:44.008772 systemd[1]: cri-containerd-ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1.scope: Deactivated successfully. Mar 20 21:14:44.009288 systemd[1]: cri-containerd-ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1.scope: Consumed 437ms CPU time, 161.9M memory peak, 4K read from disk, 150.3M written to disk. Mar 20 21:14:44.010586 containerd[1474]: time="2025-03-20T21:14:44.010510873Z" level=info msg="received exit event container_id:\"ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1\" id:\"ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1\" pid:3317 exited_at:{seconds:1742505284 nanos:10328351}" Mar 20 21:14:44.011388 containerd[1474]: time="2025-03-20T21:14:44.011344682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1\" id:\"ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1\" pid:3317 exited_at:{seconds:1742505284 nanos:10328351}" Mar 20 21:14:44.031217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea7b40cf773808ad6e4bf3fed4d15c51199be9ce739358cbd03dbc83eec423a1-rootfs.mount: Deactivated successfully. 
Mar 20 21:14:44.068953 kubelet[2590]: I0320 21:14:44.068896 2590 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 20 21:14:44.106228 systemd[1]: Created slice kubepods-besteffort-pod852df1ab_2ff7_4b02_8296_73136073fcdf.slice - libcontainer container kubepods-besteffort-pod852df1ab_2ff7_4b02_8296_73136073fcdf.slice. Mar 20 21:14:44.115006 systemd[1]: Created slice kubepods-burstable-poda699dba3_4b2d_4104_9920_964a9f5304db.slice - libcontainer container kubepods-burstable-poda699dba3_4b2d_4104_9920_964a9f5304db.slice. Mar 20 21:14:44.121887 systemd[1]: Created slice kubepods-burstable-pod7a9e66cb_1550_4150_8ab9_6b7e2aed0f11.slice - libcontainer container kubepods-burstable-pod7a9e66cb_1550_4150_8ab9_6b7e2aed0f11.slice. Mar 20 21:14:44.129609 systemd[1]: Created slice kubepods-besteffort-pod15b04dc4_a0e7_4342_99a6_1a37e831dc89.slice - libcontainer container kubepods-besteffort-pod15b04dc4_a0e7_4342_99a6_1a37e831dc89.slice. Mar 20 21:14:44.135141 systemd[1]: Created slice kubepods-besteffort-pod30a576f2_3c8d_4b57_b79a_05c94b6d990a.slice - libcontainer container kubepods-besteffort-pod30a576f2_3c8d_4b57_b79a_05c94b6d990a.slice. Mar 20 21:14:44.174253 kubelet[2590]: I0320 21:14:44.174201 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8r7p\" (UniqueName: \"kubernetes.io/projected/7a9e66cb-1550-4150-8ab9-6b7e2aed0f11-kube-api-access-t8r7p\") pod \"coredns-668d6bf9bc-vk4r7\" (UID: \"7a9e66cb-1550-4150-8ab9-6b7e2aed0f11\") " pod="kube-system/coredns-668d6bf9bc-vk4r7" Mar 20 21:14:44.174253 kubelet[2590]: I0320 21:14:44.174249 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdzrj\" (UniqueName: \"kubernetes.io/projected/30a576f2-3c8d-4b57-b79a-05c94b6d990a-kube-api-access-hdzrj\") pod \"calico-apiserver-6db8fc9dd5-666rh\" (UID: \"30a576f2-3c8d-4b57-b79a-05c94b6d990a\") " pod="calico-apiserver/calico-apiserver-6db8fc9dd5-666rh" Mar 20 21:14:44.174417 kubelet[2590]: I0320 21:14:44.174268 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tg5p\" (UniqueName: \"kubernetes.io/projected/852df1ab-2ff7-4b02-8296-73136073fcdf-kube-api-access-9tg5p\") pod \"calico-kube-controllers-5fcdc99dbb-gwg2n\" (UID: \"852df1ab-2ff7-4b02-8296-73136073fcdf\") " pod="calico-system/calico-kube-controllers-5fcdc99dbb-gwg2n" Mar 20 21:14:44.174417 kubelet[2590]: I0320 21:14:44.174347 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a9e66cb-1550-4150-8ab9-6b7e2aed0f11-config-volume\") pod \"coredns-668d6bf9bc-vk4r7\" (UID: \"7a9e66cb-1550-4150-8ab9-6b7e2aed0f11\") " pod="kube-system/coredns-668d6bf9bc-vk4r7" Mar 20 21:14:44.174417 kubelet[2590]: I0320 21:14:44.174381 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvlmw\" (UniqueName: \"kubernetes.io/projected/a699dba3-4b2d-4104-9920-964a9f5304db-kube-api-access-hvlmw\") pod \"coredns-668d6bf9bc-2kx7z\" (UID: \"a699dba3-4b2d-4104-9920-964a9f5304db\") " pod="kube-system/coredns-668d6bf9bc-2kx7z" Mar 20 21:14:44.174417 kubelet[2590]: I0320 21:14:44.174404 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/30a576f2-3c8d-4b57-b79a-05c94b6d990a-calico-apiserver-certs\") pod \"calico-apiserver-6db8fc9dd5-666rh\" (UID: \"30a576f2-3c8d-4b57-b79a-05c94b6d990a\") " pod="calico-apiserver/calico-apiserver-6db8fc9dd5-666rh" Mar 20 21:14:44.174509 kubelet[2590]: I0320 21:14:44.174422 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15b04dc4-a0e7-4342-99a6-1a37e831dc89-calico-apiserver-certs\") pod \"calico-apiserver-6db8fc9dd5-xtd55\" (UID: \"15b04dc4-a0e7-4342-99a6-1a37e831dc89\") " pod="calico-apiserver/calico-apiserver-6db8fc9dd5-xtd55" Mar 20 21:14:44.174509 kubelet[2590]: I0320 21:14:44.174446 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/852df1ab-2ff7-4b02-8296-73136073fcdf-tigera-ca-bundle\") pod \"calico-kube-controllers-5fcdc99dbb-gwg2n\" (UID: \"852df1ab-2ff7-4b02-8296-73136073fcdf\") " pod="calico-system/calico-kube-controllers-5fcdc99dbb-gwg2n" Mar 20 21:14:44.174509 kubelet[2590]: I0320 21:14:44.174466 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a699dba3-4b2d-4104-9920-964a9f5304db-config-volume\") pod \"coredns-668d6bf9bc-2kx7z\" (UID: \"a699dba3-4b2d-4104-9920-964a9f5304db\") " pod="kube-system/coredns-668d6bf9bc-2kx7z" Mar 20 21:14:44.174509 kubelet[2590]: I0320 21:14:44.174483 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84k4x\" (UniqueName: \"kubernetes.io/projected/15b04dc4-a0e7-4342-99a6-1a37e831dc89-kube-api-access-84k4x\") pod \"calico-apiserver-6db8fc9dd5-xtd55\" (UID: \"15b04dc4-a0e7-4342-99a6-1a37e831dc89\") " pod="calico-apiserver/calico-apiserver-6db8fc9dd5-xtd55" Mar 20 21:14:44.242166 systemd[1]: Created slice kubepods-besteffort-pod26fc4c15_2d42_473b_98f6_70fe4b1ea3e3.slice - libcontainer container kubepods-besteffort-pod26fc4c15_2d42_473b_98f6_70fe4b1ea3e3.slice. 
Mar 20 21:14:44.244138 containerd[1474]: time="2025-03-20T21:14:44.244109133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppnwc,Uid:26fc4c15-2d42-473b-98f6-70fe4b1ea3e3,Namespace:calico-system,Attempt:0,}" Mar 20 21:14:44.316278 containerd[1474]: time="2025-03-20T21:14:44.314096642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 20 21:14:44.413277 containerd[1474]: time="2025-03-20T21:14:44.413238303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcdc99dbb-gwg2n,Uid:852df1ab-2ff7-4b02-8296-73136073fcdf,Namespace:calico-system,Attempt:0,}" Mar 20 21:14:44.420726 containerd[1474]: time="2025-03-20T21:14:44.420684942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2kx7z,Uid:a699dba3-4b2d-4104-9920-964a9f5304db,Namespace:kube-system,Attempt:0,}" Mar 20 21:14:44.425996 containerd[1474]: time="2025-03-20T21:14:44.425964999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vk4r7,Uid:7a9e66cb-1550-4150-8ab9-6b7e2aed0f11,Namespace:kube-system,Attempt:0,}" Mar 20 21:14:44.435022 containerd[1474]: time="2025-03-20T21:14:44.434885774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fc9dd5-xtd55,Uid:15b04dc4-a0e7-4342-99a6-1a37e831dc89,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:14:44.438252 containerd[1474]: time="2025-03-20T21:14:44.438220370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fc9dd5-666rh,Uid:30a576f2-3c8d-4b57-b79a-05c94b6d990a,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:14:44.443840 containerd[1474]: time="2025-03-20T21:14:44.443735349Z" level=error msg="Failed to destroy network for sandbox \"264401ffebda0ab1efeab384f1fec73179299e21e0951c338cd841a926bc8232\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.447188 containerd[1474]: time="2025-03-20T21:14:44.447132225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppnwc,Uid:26fc4c15-2d42-473b-98f6-70fe4b1ea3e3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"264401ffebda0ab1efeab384f1fec73179299e21e0951c338cd841a926bc8232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.450229 kubelet[2590]: E0320 21:14:44.450170 2590 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"264401ffebda0ab1efeab384f1fec73179299e21e0951c338cd841a926bc8232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.452948 kubelet[2590]: E0320 21:14:44.452907 2590 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"264401ffebda0ab1efeab384f1fec73179299e21e0951c338cd841a926bc8232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppnwc" Mar 20 21:14:44.452948 
kubelet[2590]: E0320 21:14:44.452951 2590 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"264401ffebda0ab1efeab384f1fec73179299e21e0951c338cd841a926bc8232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppnwc" Mar 20 21:14:44.453074 kubelet[2590]: E0320 21:14:44.453011 2590 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ppnwc_calico-system(26fc4c15-2d42-473b-98f6-70fe4b1ea3e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ppnwc_calico-system(26fc4c15-2d42-473b-98f6-70fe4b1ea3e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"264401ffebda0ab1efeab384f1fec73179299e21e0951c338cd841a926bc8232\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ppnwc" podUID="26fc4c15-2d42-473b-98f6-70fe4b1ea3e3" Mar 20 21:14:44.499233 containerd[1474]: time="2025-03-20T21:14:44.499176742Z" level=error msg="Failed to destroy network for sandbox \"82de7398c30c203ebfd43b9208e40b76ebc0f574e56bcb5c9a70e5fef9e5bb82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.503932 containerd[1474]: time="2025-03-20T21:14:44.503883473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcdc99dbb-gwg2n,Uid:852df1ab-2ff7-4b02-8296-73136073fcdf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82de7398c30c203ebfd43b9208e40b76ebc0f574e56bcb5c9a70e5fef9e5bb82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.504385 kubelet[2590]: E0320 21:14:44.504333 2590 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82de7398c30c203ebfd43b9208e40b76ebc0f574e56bcb5c9a70e5fef9e5bb82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.504457 kubelet[2590]: E0320 21:14:44.504404 2590 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82de7398c30c203ebfd43b9208e40b76ebc0f574e56bcb5c9a70e5fef9e5bb82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fcdc99dbb-gwg2n" Mar 20 21:14:44.504457 kubelet[2590]: E0320 21:14:44.504425 2590 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82de7398c30c203ebfd43b9208e40b76ebc0f574e56bcb5c9a70e5fef9e5bb82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fcdc99dbb-gwg2n" Mar 20 21:14:44.504511 kubelet[2590]: E0320 21:14:44.504468 2590 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fcdc99dbb-gwg2n_calico-system(852df1ab-2ff7-4b02-8296-73136073fcdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fcdc99dbb-gwg2n_calico-system(852df1ab-2ff7-4b02-8296-73136073fcdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82de7398c30c203ebfd43b9208e40b76ebc0f574e56bcb5c9a70e5fef9e5bb82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fcdc99dbb-gwg2n" podUID="852df1ab-2ff7-4b02-8296-73136073fcdf" Mar 20 21:14:44.511134 containerd[1474]: time="2025-03-20T21:14:44.511090750Z" level=error msg="Failed to destroy network for sandbox \"54be1cfb226f34bbe6d042af89975415ca71abcb94b61914caee64444b7a5193\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.511979 containerd[1474]: time="2025-03-20T21:14:44.511943839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fc9dd5-xtd55,Uid:15b04dc4-a0e7-4342-99a6-1a37e831dc89,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54be1cfb226f34bbe6d042af89975415ca71abcb94b61914caee64444b7a5193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.512253 kubelet[2590]: E0320 21:14:44.512217 2590 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54be1cfb226f34bbe6d042af89975415ca71abcb94b61914caee64444b7a5193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.512339 kubelet[2590]: E0320 21:14:44.512269 2590 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54be1cfb226f34bbe6d042af89975415ca71abcb94b61914caee64444b7a5193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db8fc9dd5-xtd55" Mar 20 21:14:44.512339 kubelet[2590]: E0320 21:14:44.512291 2590 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54be1cfb226f34bbe6d042af89975415ca71abcb94b61914caee64444b7a5193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db8fc9dd5-xtd55" Mar 20 21:14:44.512410 kubelet[2590]: E0320 21:14:44.512329 2590 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6db8fc9dd5-xtd55_calico-apiserver(15b04dc4-a0e7-4342-99a6-1a37e831dc89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6db8fc9dd5-xtd55_calico-apiserver(15b04dc4-a0e7-4342-99a6-1a37e831dc89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54be1cfb226f34bbe6d042af89975415ca71abcb94b61914caee64444b7a5193\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6db8fc9dd5-xtd55" podUID="15b04dc4-a0e7-4342-99a6-1a37e831dc89" Mar 20 21:14:44.516739 containerd[1474]: time="2025-03-20T21:14:44.516695290Z" level=error msg="Failed to destroy network for sandbox \"fed3d39afd0e4c1c4e30ee274a2d14ed18e036ce7eabd66709a1ec28ebeb1094\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.517422 containerd[1474]: time="2025-03-20T21:14:44.517390857Z" level=error msg="Failed to destroy network for sandbox \"e3e5685daffeeaf65844341ba08da4a96f0097cacf73095e9d7687f93797f59f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.522917 containerd[1474]: time="2025-03-20T21:14:44.522888316Z" level=error msg="Failed to destroy network for sandbox \"c2f78fab98f25a594de9ab47c6fa12a4d5634fa17cfe0635aed9ddc1e2a5dcac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.527009 containerd[1474]: time="2025-03-20T21:14:44.526924879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2kx7z,Uid:a699dba3-4b2d-4104-9920-964a9f5304db,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed3d39afd0e4c1c4e30ee274a2d14ed18e036ce7eabd66709a1ec28ebeb1094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.527210 kubelet[2590]: E0320 21:14:44.527146 2590 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed3d39afd0e4c1c4e30ee274a2d14ed18e036ce7eabd66709a1ec28ebeb1094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.527263 kubelet[2590]: E0320 21:14:44.527224 2590 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed3d39afd0e4c1c4e30ee274a2d14ed18e036ce7eabd66709a1ec28ebeb1094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2kx7z" Mar 20 21:14:44.527289 kubelet[2590]: E0320 21:14:44.527253 2590 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"fed3d39afd0e4c1c4e30ee274a2d14ed18e036ce7eabd66709a1ec28ebeb1094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2kx7z" Mar 20 21:14:44.527412 kubelet[2590]: E0320 21:14:44.527330 2590 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2kx7z_kube-system(a699dba3-4b2d-4104-9920-964a9f5304db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2kx7z_kube-system(a699dba3-4b2d-4104-9920-964a9f5304db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fed3d39afd0e4c1c4e30ee274a2d14ed18e036ce7eabd66709a1ec28ebeb1094\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2kx7z" podUID="a699dba3-4b2d-4104-9920-964a9f5304db" Mar 20 21:14:44.527764 containerd[1474]: time="2025-03-20T21:14:44.527727928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vk4r7,Uid:7a9e66cb-1550-4150-8ab9-6b7e2aed0f11,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e5685daffeeaf65844341ba08da4a96f0097cacf73095e9d7687f93797f59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.528120 kubelet[2590]: E0320 21:14:44.527941 2590 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e5685daffeeaf65844341ba08da4a96f0097cacf73095e9d7687f93797f59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.528120 kubelet[2590]: E0320 21:14:44.527987 2590 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e5685daffeeaf65844341ba08da4a96f0097cacf73095e9d7687f93797f59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vk4r7" Mar 20 21:14:44.528120 kubelet[2590]: E0320 21:14:44.528003 2590 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e5685daffeeaf65844341ba08da4a96f0097cacf73095e9d7687f93797f59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vk4r7" Mar 20 21:14:44.528272 kubelet[2590]: E0320 21:14:44.528057 2590 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vk4r7_kube-system(7a9e66cb-1550-4150-8ab9-6b7e2aed0f11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vk4r7_kube-system(7a9e66cb-1550-4150-8ab9-6b7e2aed0f11)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"e3e5685daffeeaf65844341ba08da4a96f0097cacf73095e9d7687f93797f59f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vk4r7" podUID="7a9e66cb-1550-4150-8ab9-6b7e2aed0f11" Mar 20 21:14:44.528442 containerd[1474]: time="2025-03-20T21:14:44.528404535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fc9dd5-666rh,Uid:30a576f2-3c8d-4b57-b79a-05c94b6d990a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f78fab98f25a594de9ab47c6fa12a4d5634fa17cfe0635aed9ddc1e2a5dcac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.528611 kubelet[2590]: E0320 21:14:44.528584 2590 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f78fab98f25a594de9ab47c6fa12a4d5634fa17cfe0635aed9ddc1e2a5dcac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:14:44.528648 kubelet[2590]: E0320 21:14:44.528623 2590 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f78fab98f25a594de9ab47c6fa12a4d5634fa17cfe0635aed9ddc1e2a5dcac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db8fc9dd5-666rh" Mar 20 21:14:44.528677 kubelet[2590]: E0320 21:14:44.528663 2590 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f78fab98f25a594de9ab47c6fa12a4d5634fa17cfe0635aed9ddc1e2a5dcac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db8fc9dd5-666rh" Mar 20 21:14:44.528766 kubelet[2590]: E0320 21:14:44.528700 2590 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6db8fc9dd5-666rh_calico-apiserver(30a576f2-3c8d-4b57-b79a-05c94b6d990a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6db8fc9dd5-666rh_calico-apiserver(30a576f2-3c8d-4b57-b79a-05c94b6d990a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2f78fab98f25a594de9ab47c6fa12a4d5634fa17cfe0635aed9ddc1e2a5dcac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6db8fc9dd5-666rh" podUID="30a576f2-3c8d-4b57-b79a-05c94b6d990a" Mar 20 21:14:45.361421 systemd[1]: run-netns-cni\x2dfcdbff80\x2d411c\x2d6e20\x2d54ff\x2d7c876647ac95.mount: Deactivated successfully. Mar 20 21:14:45.361523 systemd[1]: run-netns-cni\x2d222655db\x2dca76\x2d26b3\x2d6f76\x2d9b6c14c056f5.mount: Deactivated successfully. 
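Every RunPodSandbox failure above has the same root cause: the Calico CNI plugin needs /var/lib/calico/nodename, which calico/node writes once it is up, and the calico-node container has not started yet. Below is a minimal sketch of that precondition check, based only on what the error text states; it is illustrative and not Calico's actual plugin code.

// Illustrative check for the file the CNI errors above are complaining about.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodeName(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		// Mirrors the hint in the log: the file only exists once calico/node
		// is running and has mounted /var/lib/calico/.
		return "", fmt.Errorf("read %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", path, err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodeName("/var/lib/calico/nodename")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}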
Mar 20 21:14:45.361570 systemd[1]: run-netns-cni\x2d573c9dbd\x2db67b\x2d55cf\x2d4e04\x2dc49ce225df7b.mount: Deactivated successfully. Mar 20 21:14:45.361615 systemd[1]: run-netns-cni\x2dc4023385\x2dec2b\x2dd51f\x2d2969\x2d893977e023e3.mount: Deactivated successfully. Mar 20 21:14:45.361672 systemd[1]: run-netns-cni\x2dcbbb8877\x2dcf22\x2d95a9\x2d89cf\x2d823b46688c06.mount: Deactivated successfully. Mar 20 21:14:46.944915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393466318.mount: Deactivated successfully. Mar 20 21:14:47.174031 containerd[1474]: time="2025-03-20T21:14:47.173978354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:47.174810 containerd[1474]: time="2025-03-20T21:14:47.174766161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 20 21:14:47.175701 containerd[1474]: time="2025-03-20T21:14:47.175679209Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:47.177808 containerd[1474]: time="2025-03-20T21:14:47.177773468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:47.178303 containerd[1474]: time="2025-03-20T21:14:47.178265272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 2.86413291s" Mar 20 21:14:47.178303 containerd[1474]: time="2025-03-20T21:14:47.178299753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 20 21:14:47.185837 containerd[1474]: time="2025-03-20T21:14:47.185797019Z" level=info msg="CreateContainer within sandbox \"f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 20 21:14:47.211251 containerd[1474]: time="2025-03-20T21:14:47.209941992Z" level=info msg="Container 84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:47.223181 containerd[1474]: time="2025-03-20T21:14:47.223130268Z" level=info msg="CreateContainer within sandbox \"f235db0c0c10b47f31c86395b7cf6f84c07d2b022a151848b0a94f033e29fede\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43\"" Mar 20 21:14:47.223776 containerd[1474]: time="2025-03-20T21:14:47.223691113Z" level=info msg="StartContainer for \"84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43\"" Mar 20 21:14:47.225972 containerd[1474]: time="2025-03-20T21:14:47.225940173Z" level=info msg="connecting to shim 84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43" address="unix:///run/containerd/s/8f765b2d92214d6645c94e7a09b387cfb657f3044ffdfea28bf05a4eb71bf9e7" protocol=ttrpc version=3 Mar 20 21:14:47.249284 systemd[1]: Started 
cri-containerd-84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43.scope - libcontainer container 84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43. Mar 20 21:14:47.283592 containerd[1474]: time="2025-03-20T21:14:47.283495280Z" level=info msg="StartContainer for \"84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43\" returns successfully" Mar 20 21:14:47.437425 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 20 21:14:47.437533 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 20 21:14:48.324217 kubelet[2590]: I0320 21:14:48.324186 2590 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:14:50.822790 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:60938.service - OpenSSH per-connection server daemon (10.0.0.1:60938). Mar 20 21:14:50.888105 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 60938 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:14:50.890977 sshd-session[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:14:50.894998 systemd-logind[1460]: New session 8 of user core. Mar 20 21:14:50.905247 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 20 21:14:51.023419 sshd[3799]: Connection closed by 10.0.0.1 port 60938 Mar 20 21:14:51.023934 sshd-session[3780]: pam_unix(sshd:session): session closed for user core Mar 20 21:14:51.027469 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:60938.service: Deactivated successfully. Mar 20 21:14:51.030626 systemd[1]: session-8.scope: Deactivated successfully. Mar 20 21:14:51.032018 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Mar 20 21:14:51.034454 systemd-logind[1460]: Removed session 8. 
Mar 20 21:14:55.237407 containerd[1474]: time="2025-03-20T21:14:55.237361890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fc9dd5-xtd55,Uid:15b04dc4-a0e7-4342-99a6-1a37e831dc89,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:14:55.466088 systemd-networkd[1394]: cali8f5c00f5f16: Link UP Mar 20 21:14:55.466793 systemd-networkd[1394]: cali8f5c00f5f16: Gained carrier Mar 20 21:14:55.479094 kubelet[2590]: I0320 21:14:55.478784 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j8zj4" podStartSLOduration=9.574885617 podStartE2EDuration="20.47876504s" podCreationTimestamp="2025-03-20 21:14:35 +0000 UTC" firstStartedPulling="2025-03-20 21:14:36.275055615 +0000 UTC m=+14.121145754" lastFinishedPulling="2025-03-20 21:14:47.178935118 +0000 UTC m=+25.025025177" observedRunningTime="2025-03-20 21:14:47.355522875 +0000 UTC m=+25.201612974" watchObservedRunningTime="2025-03-20 21:14:55.47876504 +0000 UTC m=+33.324855139" Mar 20 21:14:55.480386 containerd[1474]: 2025-03-20 21:14:55.264 [INFO][3917] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 20 21:14:55.480386 containerd[1474]: 2025-03-20 21:14:55.306 [INFO][3917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0 calico-apiserver-6db8fc9dd5- calico-apiserver 15b04dc4-a0e7-4342-99a6-1a37e831dc89 699 0 2025-03-20 21:14:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6db8fc9dd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6db8fc9dd5-xtd55 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f5c00f5f16 [] []}} ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-xtd55" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-" Mar 20 21:14:55.480386 containerd[1474]: 2025-03-20 21:14:55.306 [INFO][3917] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-xtd55" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" Mar 20 21:14:55.480386 containerd[1474]: 2025-03-20 21:14:55.416 [INFO][3931] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" HandleID="k8s-pod-network.151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Workload="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.427 [INFO][3931] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" HandleID="k8s-pod-network.151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Workload="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000286660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6db8fc9dd5-xtd55", "timestamp":"2025-03-20 21:14:55.41611259 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.428 [INFO][3931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.428 [INFO][3931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.429 [INFO][3931] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.431 [INFO][3931] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" host="localhost" Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.436 [INFO][3931] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.439 [INFO][3931] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.441 [INFO][3931] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.443 [INFO][3931] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:55.480694 containerd[1474]: 2025-03-20 21:14:55.443 [INFO][3931] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" host="localhost" Mar 20 21:14:55.481007 containerd[1474]: 2025-03-20 21:14:55.444 [INFO][3931] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814 Mar 20 21:14:55.481007 containerd[1474]: 2025-03-20 21:14:55.448 [INFO][3931] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" host="localhost" Mar 20 21:14:55.481007 containerd[1474]: 2025-03-20 21:14:55.452 [INFO][3931] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" host="localhost" Mar 20 21:14:55.481007 containerd[1474]: 2025-03-20 21:14:55.452 [INFO][3931] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" host="localhost" Mar 20 21:14:55.481007 containerd[1474]: 2025-03-20 21:14:55.452 [INFO][3931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 20 21:14:55.481007 containerd[1474]: 2025-03-20 21:14:55.452 [INFO][3931] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" HandleID="k8s-pod-network.151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Workload="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" Mar 20 21:14:55.481226 containerd[1474]: 2025-03-20 21:14:55.454 [INFO][3917] cni-plugin/k8s.go 386: Populated endpoint ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-xtd55" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0", GenerateName:"calico-apiserver-6db8fc9dd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"15b04dc4-a0e7-4342-99a6-1a37e831dc89", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db8fc9dd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6db8fc9dd5-xtd55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f5c00f5f16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:55.481528 containerd[1474]: 2025-03-20 21:14:55.454 [INFO][3917] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-xtd55" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" Mar 20 21:14:55.481528 containerd[1474]: 2025-03-20 21:14:55.454 [INFO][3917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f5c00f5f16 ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-xtd55" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" Mar 20 21:14:55.481528 containerd[1474]: 2025-03-20 21:14:55.466 [INFO][3917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-xtd55" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" Mar 20 21:14:55.481621 containerd[1474]: 2025-03-20 21:14:55.467 [INFO][3917] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-xtd55" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0", GenerateName:"calico-apiserver-6db8fc9dd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"15b04dc4-a0e7-4342-99a6-1a37e831dc89", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db8fc9dd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814", Pod:"calico-apiserver-6db8fc9dd5-xtd55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f5c00f5f16", MAC:"e2:0e:bc:ec:cc:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:55.481692 containerd[1474]: 2025-03-20 21:14:55.477 [INFO][3917] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-xtd55" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--xtd55-eth0" Mar 20 21:14:55.510537 containerd[1474]: time="2025-03-20T21:14:55.509999524Z" level=info msg="connecting to shim 151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814" address="unix:///run/containerd/s/87a3fe78846bc7cf64913bc5f31a06d8405aec18e02b43aada5523300f28dd4a" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:55.537194 systemd[1]: Started cri-containerd-151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814.scope - libcontainer container 151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814. Mar 20 21:14:55.551190 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:14:55.569481 containerd[1474]: time="2025-03-20T21:14:55.569437717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fc9dd5-xtd55,Uid:15b04dc4-a0e7-4342-99a6-1a37e831dc89,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814\"" Mar 20 21:14:55.576845 containerd[1474]: time="2025-03-20T21:14:55.576807795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 20 21:14:56.040764 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:37580.service - OpenSSH per-connection server daemon (10.0.0.1:37580). 
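The IPAM lines above show the allocator confirming the host's affinity for block 192.168.88.128/26 and handing out 192.168.88.129 to the first calico-apiserver pod. The sketch below is only a back-of-the-envelope check of that /26 arithmetic, not Calico's allocator (which tracks handles, affinities, and the host-wide lock in its datastore); it confirms the block holds 64 addresses and that the pods in this excerpt receive .129 through .133 in order.

    # Back-of-the-envelope check of the /26 block math seen in the IPAM log lines.
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    print(block.num_addresses)                    # 64
    print(block[0], block[-1])                    # 192.168.88.128 192.168.88.191
    print([str(block[i]) for i in range(1, 6)])   # .129 .130 .131 .132 .133, as assigned below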
Mar 20 21:14:56.105503 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 37580 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:14:56.106352 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:14:56.115295 systemd-logind[1460]: New session 9 of user core. Mar 20 21:14:56.124229 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 20 21:14:56.256475 sshd[4022]: Connection closed by 10.0.0.1 port 37580 Mar 20 21:14:56.256876 sshd-session[4004]: pam_unix(sshd:session): session closed for user core Mar 20 21:14:56.261162 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:37580.service: Deactivated successfully. Mar 20 21:14:56.262776 systemd[1]: session-9.scope: Deactivated successfully. Mar 20 21:14:56.264050 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Mar 20 21:14:56.265185 systemd-logind[1460]: Removed session 9. Mar 20 21:14:56.850770 containerd[1474]: time="2025-03-20T21:14:56.850305378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:56.850770 containerd[1474]: time="2025-03-20T21:14:56.850708700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Mar 20 21:14:56.851606 containerd[1474]: time="2025-03-20T21:14:56.851561184Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:56.853555 containerd[1474]: time="2025-03-20T21:14:56.853504274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:56.854221 containerd[1474]: time="2025-03-20T21:14:56.854099477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 1.277255121s" Mar 20 21:14:56.854221 containerd[1474]: time="2025-03-20T21:14:56.854131717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 20 21:14:56.855997 containerd[1474]: time="2025-03-20T21:14:56.855956366Z" level=info msg="CreateContainer within sandbox \"151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 20 21:14:56.862073 containerd[1474]: time="2025-03-20T21:14:56.861698394Z" level=info msg="Container 90433381bc34ec1ae15fa2bd574b9f78233df7af51bb46b8dff13591858e0eda: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:56.867738 containerd[1474]: time="2025-03-20T21:14:56.867691424Z" level=info msg="CreateContainer within sandbox \"151097dd223724a40673c2e31efc5a5cc26a53099cff296d4b04fc32026ae814\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"90433381bc34ec1ae15fa2bd574b9f78233df7af51bb46b8dff13591858e0eda\"" Mar 20 21:14:56.868258 containerd[1474]: time="2025-03-20T21:14:56.868210266Z" level=info msg="StartContainer for 
\"90433381bc34ec1ae15fa2bd574b9f78233df7af51bb46b8dff13591858e0eda\"" Mar 20 21:14:56.869282 containerd[1474]: time="2025-03-20T21:14:56.869253472Z" level=info msg="connecting to shim 90433381bc34ec1ae15fa2bd574b9f78233df7af51bb46b8dff13591858e0eda" address="unix:///run/containerd/s/87a3fe78846bc7cf64913bc5f31a06d8405aec18e02b43aada5523300f28dd4a" protocol=ttrpc version=3 Mar 20 21:14:56.889245 systemd[1]: Started cri-containerd-90433381bc34ec1ae15fa2bd574b9f78233df7af51bb46b8dff13591858e0eda.scope - libcontainer container 90433381bc34ec1ae15fa2bd574b9f78233df7af51bb46b8dff13591858e0eda. Mar 20 21:14:57.081156 containerd[1474]: time="2025-03-20T21:14:57.081047251Z" level=info msg="StartContainer for \"90433381bc34ec1ae15fa2bd574b9f78233df7af51bb46b8dff13591858e0eda\" returns successfully" Mar 20 21:14:57.237030 containerd[1474]: time="2025-03-20T21:14:57.236980652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fc9dd5-666rh,Uid:30a576f2-3c8d-4b57-b79a-05c94b6d990a,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:14:57.372154 systemd-networkd[1394]: calibeb12ed3ea9: Link UP Mar 20 21:14:57.372350 systemd-networkd[1394]: calibeb12ed3ea9: Gained carrier Mar 20 21:14:57.382991 kubelet[2590]: I0320 21:14:57.382922 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6db8fc9dd5-xtd55" podStartSLOduration=22.10433188 podStartE2EDuration="23.382902487s" podCreationTimestamp="2025-03-20 21:14:34 +0000 UTC" firstStartedPulling="2025-03-20 21:14:55.576268313 +0000 UTC m=+33.422358412" lastFinishedPulling="2025-03-20 21:14:56.85483892 +0000 UTC m=+34.700929019" observedRunningTime="2025-03-20 21:14:57.361436068 +0000 UTC m=+35.207526167" watchObservedRunningTime="2025-03-20 21:14:57.382902487 +0000 UTC m=+35.228992586" Mar 20 21:14:57.385395 containerd[1474]: 2025-03-20 21:14:57.270 [INFO][4102] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 20 21:14:57.385395 containerd[1474]: 2025-03-20 21:14:57.283 [INFO][4102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0 calico-apiserver-6db8fc9dd5- calico-apiserver 30a576f2-3c8d-4b57-b79a-05c94b6d990a 698 0 2025-03-20 21:14:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6db8fc9dd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6db8fc9dd5-666rh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibeb12ed3ea9 [] []}} ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-666rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-" Mar 20 21:14:57.385395 containerd[1474]: 2025-03-20 21:14:57.284 [INFO][4102] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-666rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" Mar 20 21:14:57.385395 containerd[1474]: 2025-03-20 21:14:57.311 [INFO][4116] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" HandleID="k8s-pod-network.006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Workload="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.339 [INFO][4116] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" HandleID="k8s-pod-network.006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Workload="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400061fd60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6db8fc9dd5-666rh", "timestamp":"2025-03-20 21:14:57.311701838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.339 [INFO][4116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.339 [INFO][4116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.339 [INFO][4116] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.341 [INFO][4116] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" host="localhost" Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.344 [INFO][4116] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.349 [INFO][4116] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.350 [INFO][4116] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.353 [INFO][4116] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:57.385593 containerd[1474]: 2025-03-20 21:14:57.353 [INFO][4116] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" host="localhost" Mar 20 21:14:57.385793 containerd[1474]: 2025-03-20 21:14:57.354 [INFO][4116] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a Mar 20 21:14:57.385793 containerd[1474]: 2025-03-20 21:14:57.357 [INFO][4116] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" host="localhost" Mar 20 21:14:57.385793 containerd[1474]: 2025-03-20 21:14:57.364 [INFO][4116] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" host="localhost" Mar 20 21:14:57.385793 containerd[1474]: 2025-03-20 21:14:57.364 [INFO][4116] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" host="localhost" Mar 20 21:14:57.385793 containerd[1474]: 2025-03-20 21:14:57.364 [INFO][4116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 21:14:57.385793 containerd[1474]: 2025-03-20 21:14:57.364 [INFO][4116] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" HandleID="k8s-pod-network.006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Workload="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" Mar 20 21:14:57.385994 containerd[1474]: 2025-03-20 21:14:57.368 [INFO][4102] cni-plugin/k8s.go 386: Populated endpoint ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-666rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0", GenerateName:"calico-apiserver-6db8fc9dd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"30a576f2-3c8d-4b57-b79a-05c94b6d990a", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db8fc9dd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6db8fc9dd5-666rh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeb12ed3ea9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:57.386081 containerd[1474]: 2025-03-20 21:14:57.368 [INFO][4102] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-666rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" Mar 20 21:14:57.386081 containerd[1474]: 2025-03-20 21:14:57.368 [INFO][4102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeb12ed3ea9 ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-666rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" Mar 20 21:14:57.386081 containerd[1474]: 2025-03-20 21:14:57.373 [INFO][4102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-666rh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" Mar 20 21:14:57.386154 containerd[1474]: 2025-03-20 21:14:57.373 [INFO][4102] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-666rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0", GenerateName:"calico-apiserver-6db8fc9dd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"30a576f2-3c8d-4b57-b79a-05c94b6d990a", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db8fc9dd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a", Pod:"calico-apiserver-6db8fc9dd5-666rh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeb12ed3ea9", MAC:"7a:28:59:a2:6e:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:57.386205 containerd[1474]: 2025-03-20 21:14:57.382 [INFO][4102] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fc9dd5-666rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6db8fc9dd5--666rh-eth0" Mar 20 21:14:57.408237 systemd-networkd[1394]: cali8f5c00f5f16: Gained IPv6LL Mar 20 21:14:57.418659 containerd[1474]: time="2025-03-20T21:14:57.418616492Z" level=info msg="connecting to shim 006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a" address="unix:///run/containerd/s/acccfb985ffadeb87acce199af6f8d7959683ae448b0b9e3635aa52560ce6caa" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:57.446237 systemd[1]: Started cri-containerd-006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a.scope - libcontainer container 006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a. 
Mar 20 21:14:57.458116 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:14:57.505477 containerd[1474]: time="2025-03-20T21:14:57.505351173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fc9dd5-666rh,Uid:30a576f2-3c8d-4b57-b79a-05c94b6d990a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a\"" Mar 20 21:14:57.510899 containerd[1474]: time="2025-03-20T21:14:57.510856559Z" level=info msg="CreateContainer within sandbox \"006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 20 21:14:57.520852 containerd[1474]: time="2025-03-20T21:14:57.520791845Z" level=info msg="Container 8f5191798537f192af51f1c7aed9fa14f82d1dc01e41aa2cdae544186715ff39: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:57.535217 containerd[1474]: time="2025-03-20T21:14:57.535160871Z" level=info msg="CreateContainer within sandbox \"006001a4bf9fc83509b1c3f5d65774da28bfd1adbf429fb43c646fca64cac32a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8f5191798537f192af51f1c7aed9fa14f82d1dc01e41aa2cdae544186715ff39\"" Mar 20 21:14:57.536011 containerd[1474]: time="2025-03-20T21:14:57.535963155Z" level=info msg="StartContainer for \"8f5191798537f192af51f1c7aed9fa14f82d1dc01e41aa2cdae544186715ff39\"" Mar 20 21:14:57.537537 containerd[1474]: time="2025-03-20T21:14:57.537490322Z" level=info msg="connecting to shim 8f5191798537f192af51f1c7aed9fa14f82d1dc01e41aa2cdae544186715ff39" address="unix:///run/containerd/s/acccfb985ffadeb87acce199af6f8d7959683ae448b0b9e3635aa52560ce6caa" protocol=ttrpc version=3 Mar 20 21:14:57.558255 systemd[1]: Started cri-containerd-8f5191798537f192af51f1c7aed9fa14f82d1dc01e41aa2cdae544186715ff39.scope - libcontainer container 8f5191798537f192af51f1c7aed9fa14f82d1dc01e41aa2cdae544186715ff39. 
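The pod_startup_latency_tracker entries above are self-consistent: for calico-apiserver-6db8fc9dd5-xtd55, podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling), which is also roughly the "in 1.277255121s" reported for the apiserver image pull. This is an inference from the logged numbers alone, not a claim about kubelet internals; the sketch truncates the nanosecond timestamps to microseconds for strptime.

    # Reproduce the startup-latency arithmetic from the logged timestamps (values truncated
    # to microseconds, so results match the log to ~1e-6 s).
    from datetime import datetime, timezone

    def ts(s: str) -> datetime:
        # kubelet prints nanoseconds; trim to microseconds for %f
        return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created  = datetime(2025, 3, 20, 21, 14, 34, tzinfo=timezone.utc)   # podCreationTimestamp
    running  = ts("2025-03-20 21:14:57.382902487")                      # watchObservedRunningTime
    pull_beg = ts("2025-03-20 21:14:55.576268313")                      # firstStartedPulling
    pull_end = ts("2025-03-20 21:14:56.85483892")                       # lastFinishedPulling

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_end - pull_beg).total_seconds()
    print(round(e2e, 6), round(slo, 6))   # ~23.382902 and ~22.104332, matching the log values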
Mar 20 21:14:57.616988 containerd[1474]: time="2025-03-20T21:14:57.616944649Z" level=info msg="StartContainer for \"8f5191798537f192af51f1c7aed9fa14f82d1dc01e41aa2cdae544186715ff39\" returns successfully" Mar 20 21:14:58.238389 containerd[1474]: time="2025-03-20T21:14:58.238335934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcdc99dbb-gwg2n,Uid:852df1ab-2ff7-4b02-8296-73136073fcdf,Namespace:calico-system,Attempt:0,}" Mar 20 21:14:58.238769 containerd[1474]: time="2025-03-20T21:14:58.238503895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppnwc,Uid:26fc4c15-2d42-473b-98f6-70fe4b1ea3e3,Namespace:calico-system,Attempt:0,}" Mar 20 21:14:58.238769 containerd[1474]: time="2025-03-20T21:14:58.238687536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2kx7z,Uid:a699dba3-4b2d-4104-9920-964a9f5304db,Namespace:kube-system,Attempt:0,}" Mar 20 21:14:58.366054 kubelet[2590]: I0320 21:14:58.363672 2590 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:14:58.383451 kubelet[2590]: I0320 21:14:58.383376 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6db8fc9dd5-666rh" podStartSLOduration=24.383239522 podStartE2EDuration="24.383239522s" podCreationTimestamp="2025-03-20 21:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:14:58.381009953 +0000 UTC m=+36.227100052" watchObservedRunningTime="2025-03-20 21:14:58.383239522 +0000 UTC m=+36.229329621" Mar 20 21:14:58.490707 systemd-networkd[1394]: cali73c9c3e940c: Link UP Mar 20 21:14:58.491440 systemd-networkd[1394]: cali73c9c3e940c: Gained carrier Mar 20 21:14:58.505138 containerd[1474]: 2025-03-20 21:14:58.280 [INFO][4239] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 20 21:14:58.505138 containerd[1474]: 2025-03-20 21:14:58.309 [INFO][4239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0 calico-kube-controllers-5fcdc99dbb- calico-system 852df1ab-2ff7-4b02-8296-73136073fcdf 696 0 2025-03-20 21:14:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fcdc99dbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5fcdc99dbb-gwg2n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali73c9c3e940c [] []}} ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Namespace="calico-system" Pod="calico-kube-controllers-5fcdc99dbb-gwg2n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-" Mar 20 21:14:58.505138 containerd[1474]: 2025-03-20 21:14:58.309 [INFO][4239] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Namespace="calico-system" Pod="calico-kube-controllers-5fcdc99dbb-gwg2n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" Mar 20 21:14:58.505138 containerd[1474]: 2025-03-20 21:14:58.348 [INFO][4282] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" HandleID="k8s-pod-network.5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Workload="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.372 [INFO][4282] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" HandleID="k8s-pod-network.5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Workload="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003656c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5fcdc99dbb-gwg2n", "timestamp":"2025-03-20 21:14:58.348205891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.372 [INFO][4282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.372 [INFO][4282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.372 [INFO][4282] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.375 [INFO][4282] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" host="localhost" Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.466 [INFO][4282] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.471 [INFO][4282] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.472 [INFO][4282] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.475 [INFO][4282] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:58.505342 containerd[1474]: 2025-03-20 21:14:58.475 [INFO][4282] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" host="localhost" Mar 20 21:14:58.505607 containerd[1474]: 2025-03-20 21:14:58.478 [INFO][4282] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438 Mar 20 21:14:58.505607 containerd[1474]: 2025-03-20 21:14:58.481 [INFO][4282] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" host="localhost" Mar 20 21:14:58.505607 containerd[1474]: 2025-03-20 21:14:58.486 [INFO][4282] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" host="localhost" Mar 20 21:14:58.505607 containerd[1474]: 2025-03-20 21:14:58.486 [INFO][4282] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.131/26] handle="k8s-pod-network.5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" host="localhost" Mar 20 21:14:58.505607 containerd[1474]: 2025-03-20 21:14:58.486 [INFO][4282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 21:14:58.505607 containerd[1474]: 2025-03-20 21:14:58.486 [INFO][4282] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" HandleID="k8s-pod-network.5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Workload="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" Mar 20 21:14:58.505735 containerd[1474]: 2025-03-20 21:14:58.488 [INFO][4239] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Namespace="calico-system" Pod="calico-kube-controllers-5fcdc99dbb-gwg2n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0", GenerateName:"calico-kube-controllers-5fcdc99dbb-", Namespace:"calico-system", SelfLink:"", UID:"852df1ab-2ff7-4b02-8296-73136073fcdf", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fcdc99dbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5fcdc99dbb-gwg2n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73c9c3e940c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:58.505798 containerd[1474]: 2025-03-20 21:14:58.488 [INFO][4239] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Namespace="calico-system" Pod="calico-kube-controllers-5fcdc99dbb-gwg2n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" Mar 20 21:14:58.505798 containerd[1474]: 2025-03-20 21:14:58.488 [INFO][4239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73c9c3e940c ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Namespace="calico-system" Pod="calico-kube-controllers-5fcdc99dbb-gwg2n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" Mar 20 21:14:58.505798 containerd[1474]: 2025-03-20 21:14:58.491 [INFO][4239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Namespace="calico-system" 
Pod="calico-kube-controllers-5fcdc99dbb-gwg2n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" Mar 20 21:14:58.505865 containerd[1474]: 2025-03-20 21:14:58.491 [INFO][4239] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Namespace="calico-system" Pod="calico-kube-controllers-5fcdc99dbb-gwg2n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0", GenerateName:"calico-kube-controllers-5fcdc99dbb-", Namespace:"calico-system", SelfLink:"", UID:"852df1ab-2ff7-4b02-8296-73136073fcdf", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fcdc99dbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438", Pod:"calico-kube-controllers-5fcdc99dbb-gwg2n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73c9c3e940c", MAC:"fe:db:79:73:45:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:58.505915 containerd[1474]: 2025-03-20 21:14:58.501 [INFO][4239] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" Namespace="calico-system" Pod="calico-kube-controllers-5fcdc99dbb-gwg2n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcdc99dbb--gwg2n-eth0" Mar 20 21:14:58.533009 containerd[1474]: time="2025-03-20T21:14:58.532961971Z" level=info msg="connecting to shim 5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438" address="unix:///run/containerd/s/e98d8fa2bde278c9089fafa47a40d279267a8adbbc24016777871b3dec600ef9" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:58.558204 systemd[1]: Started cri-containerd-5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438.scope - libcontainer container 5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438. 
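The repeated "connecting to shim ... address=unix:///run/containerd/s/<hash>" messages describe containerd dialing a per-sandbox shim over a Unix domain socket and speaking ttrpc on it. The snippet below only illustrates the socket half of that, under the assumption that the address string maps directly to a filesystem path; the ttrpc framing is out of scope, the path is copied verbatim from the log purely as an example, and a real connection would require the shim to be running and sufficient privileges.

    # Sketch of what a unix:// shim address resolves to; not containerd's client code.
    import socket

    SHIM_ADDR = "unix:///run/containerd/s/e98d8fa2bde278c9089fafa47a40d279267a8adbbc24016777871b3dec600ef9"

    def connect(addr: str) -> socket.socket:
        path = addr.removeprefix("unix://")                  # strip the scheme, keep the socket path
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(path)                                      # succeeds only with a live shim and permissions
        return s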
Mar 20 21:14:58.572743 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:14:58.607633 systemd-networkd[1394]: cali4927b13ba84: Link UP Mar 20 21:14:58.607814 systemd-networkd[1394]: cali4927b13ba84: Gained carrier Mar 20 21:14:58.613897 containerd[1474]: time="2025-03-20T21:14:58.613855242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcdc99dbb-gwg2n,Uid:852df1ab-2ff7-4b02-8296-73136073fcdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438\"" Mar 20 21:14:58.620459 containerd[1474]: time="2025-03-20T21:14:58.620411311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 20 21:14:58.622146 containerd[1474]: 2025-03-20 21:14:58.301 [INFO][4265] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 20 21:14:58.622146 containerd[1474]: 2025-03-20 21:14:58.320 [INFO][4265] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0 coredns-668d6bf9bc- kube-system a699dba3-4b2d-4104-9920-964a9f5304db 693 0 2025-03-20 21:14:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2kx7z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4927b13ba84 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2kx7z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2kx7z-" Mar 20 21:14:58.622146 containerd[1474]: 2025-03-20 21:14:58.320 [INFO][4265] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2kx7z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" Mar 20 21:14:58.622146 containerd[1474]: 2025-03-20 21:14:58.375 [INFO][4295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" HandleID="k8s-pod-network.40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Workload="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.469 [INFO][4295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" HandleID="k8s-pod-network.40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Workload="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c33a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2kx7z", "timestamp":"2025-03-20 21:14:58.375383888 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.469 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.486 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.486 [INFO][4295] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.490 [INFO][4295] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" host="localhost" Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.567 [INFO][4295] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.576 [INFO][4295] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.580 [INFO][4295] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.583 [INFO][4295] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:58.622317 containerd[1474]: 2025-03-20 21:14:58.583 [INFO][4295] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" host="localhost" Mar 20 21:14:58.622508 containerd[1474]: 2025-03-20 21:14:58.585 [INFO][4295] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d Mar 20 21:14:58.622508 containerd[1474]: 2025-03-20 21:14:58.591 [INFO][4295] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" host="localhost" Mar 20 21:14:58.622508 containerd[1474]: 2025-03-20 21:14:58.598 [INFO][4295] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" host="localhost" Mar 20 21:14:58.622508 containerd[1474]: 2025-03-20 21:14:58.598 [INFO][4295] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" host="localhost" Mar 20 21:14:58.622508 containerd[1474]: 2025-03-20 21:14:58.598 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 20 21:14:58.622508 containerd[1474]: 2025-03-20 21:14:58.598 [INFO][4295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" HandleID="k8s-pod-network.40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Workload="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" Mar 20 21:14:58.622618 containerd[1474]: 2025-03-20 21:14:58.603 [INFO][4265] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2kx7z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a699dba3-4b2d-4104-9920-964a9f5304db", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2kx7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4927b13ba84", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:58.622667 containerd[1474]: 2025-03-20 21:14:58.603 [INFO][4265] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2kx7z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" Mar 20 21:14:58.622667 containerd[1474]: 2025-03-20 21:14:58.603 [INFO][4265] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4927b13ba84 ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2kx7z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" Mar 20 21:14:58.622667 containerd[1474]: 2025-03-20 21:14:58.608 [INFO][4265] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2kx7z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" Mar 20 21:14:58.622729 containerd[1474]: 2025-03-20 21:14:58.608 
[INFO][4265] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2kx7z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a699dba3-4b2d-4104-9920-964a9f5304db", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d", Pod:"coredns-668d6bf9bc-2kx7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4927b13ba84", MAC:"7a:15:4a:63:f5:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:58.622729 containerd[1474]: 2025-03-20 21:14:58.618 [INFO][4265] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2kx7z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2kx7z-eth0" Mar 20 21:14:58.646762 containerd[1474]: time="2025-03-20T21:14:58.646713625Z" level=info msg="connecting to shim 40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d" address="unix:///run/containerd/s/f4b6e08577f424fbc418a577e366dce07fac01f8c8054465827c6d48e5fadb8f" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:58.687203 systemd[1]: Started cri-containerd-40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d.scope - libcontainer container 40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d. 
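The coredns WorkloadEndpoint dump above prints its ports in hex (Port:0x35, Port:0x23c1); these are the same dns/dns-tcp and metrics ports listed in decimal earlier in the block, as a quick conversion confirms.

    # Hex port values from the WorkloadEndpoint struct vs. the decimal ports in the CNI line.
    assert 0x35 == 53        # dns (UDP) and dns-tcp (TCP)
    assert 0x23c1 == 9153    # metrics (TCP)
    print(0x35, 0x23c1)      # 53 9153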
Mar 20 21:14:58.696052 systemd-networkd[1394]: cali955fd6e4fd1: Link UP Mar 20 21:14:58.698799 systemd-networkd[1394]: cali955fd6e4fd1: Gained carrier Mar 20 21:14:58.708539 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.297 [INFO][4246] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.319 [INFO][4246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ppnwc-eth0 csi-node-driver- calico-system 26fc4c15-2d42-473b-98f6-70fe4b1ea3e3 619 0 2025-03-20 21:14:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:54877d75d5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ppnwc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali955fd6e4fd1 [] []}} ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Namespace="calico-system" Pod="csi-node-driver-ppnwc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppnwc-" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.319 [INFO][4246] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Namespace="calico-system" Pod="csi-node-driver-ppnwc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppnwc-eth0" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.367 [INFO][4289] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" HandleID="k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Workload="localhost-k8s-csi--node--driver--ppnwc-eth0" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.469 [INFO][4289] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" HandleID="k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Workload="localhost-k8s-csi--node--driver--ppnwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d6a50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ppnwc", "timestamp":"2025-03-20 21:14:58.367797495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.469 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.598 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.598 [INFO][4289] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.601 [INFO][4289] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" host="localhost" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.667 [INFO][4289] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.672 [INFO][4289] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.674 [INFO][4289] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.677 [INFO][4289] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.677 [INFO][4289] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" host="localhost" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.679 [INFO][4289] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.684 [INFO][4289] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" host="localhost" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.691 [INFO][4289] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" host="localhost" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.691 [INFO][4289] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" host="localhost" Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.691 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
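Aside: the ipam/ipam.go messages above trace the block-affinity flow for this host: confirm the affinity for 192.168.88.128/26, load the block, claim one address, write the block back, release the host-wide lock. The sketch below is not Calico's implementation, only a minimal illustration of handing out the next free address from an affine block; the pre-allocated .129-.132 entries are an assumption (the coredns endpoint above holds .132, endpoints earlier in the log presumably hold the rest), chosen so the result matches the .133 claimed here.

// ipam_sketch.go: illustrative only, not Calico's ipam/ipam.go.
// Hands out the next free address from the affine block
// 192.168.88.128/26, reproducing the ".133" claimed in the log.
package main

import (
	"fmt"
	"net"
)

type block struct {
	cidr      *net.IPNet
	allocated map[string]bool // addresses already handed out
}

// assign returns the next unused address in the block, skipping the
// network address itself.
func (b *block) assign() (net.IP, error) {
	for ip := nextIP(b.cidr.IP); b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.allocated[ip.String()] {
			b.allocated[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// nextIP returns ip+1 without modifying its argument.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, allocated: map[string]bool{}}
	// Assumption: .129-.132 are already taken (the coredns endpoint
	// above holds .132; earlier endpoints hold the rest).
	for _, used := range []string{"192.168.88.129", "192.168.88.130", "192.168.88.131", "192.168.88.132"} {
		b.allocated[used] = true
	}
	ip, _ := b.assign()
	fmt.Println(ip) // 192.168.88.133
}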
Mar 20 21:14:58.717977 containerd[1474]: 2025-03-20 21:14:58.691 [INFO][4289] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" HandleID="k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Workload="localhost-k8s-csi--node--driver--ppnwc-eth0" Mar 20 21:14:58.718699 containerd[1474]: 2025-03-20 21:14:58.693 [INFO][4246] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Namespace="calico-system" Pod="csi-node-driver-ppnwc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppnwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ppnwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26fc4c15-2d42-473b-98f6-70fe4b1ea3e3", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ppnwc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali955fd6e4fd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:58.718699 containerd[1474]: 2025-03-20 21:14:58.694 [INFO][4246] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Namespace="calico-system" Pod="csi-node-driver-ppnwc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppnwc-eth0" Mar 20 21:14:58.718699 containerd[1474]: 2025-03-20 21:14:58.694 [INFO][4246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali955fd6e4fd1 ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Namespace="calico-system" Pod="csi-node-driver-ppnwc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppnwc-eth0" Mar 20 21:14:58.718699 containerd[1474]: 2025-03-20 21:14:58.698 [INFO][4246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Namespace="calico-system" Pod="csi-node-driver-ppnwc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppnwc-eth0" Mar 20 21:14:58.718699 containerd[1474]: 2025-03-20 21:14:58.699 [INFO][4246] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Namespace="calico-system" Pod="csi-node-driver-ppnwc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppnwc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ppnwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26fc4c15-2d42-473b-98f6-70fe4b1ea3e3", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e", Pod:"csi-node-driver-ppnwc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali955fd6e4fd1", MAC:"f2:a1:f2:99:e3:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:58.718699 containerd[1474]: 2025-03-20 21:14:58.714 [INFO][4246] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" Namespace="calico-system" Pod="csi-node-driver-ppnwc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppnwc-eth0" Mar 20 21:14:58.735931 containerd[1474]: time="2025-03-20T21:14:58.735880211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2kx7z,Uid:a699dba3-4b2d-4104-9920-964a9f5304db,Namespace:kube-system,Attempt:0,} returns sandbox id \"40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d\"" Mar 20 21:14:58.739689 containerd[1474]: time="2025-03-20T21:14:58.739614627Z" level=info msg="CreateContainer within sandbox \"40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 21:14:58.743611 containerd[1474]: time="2025-03-20T21:14:58.743523324Z" level=info msg="connecting to shim d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" address="unix:///run/containerd/s/52e41b0759b5846476c81f96fe6ec79c39a978a6b7cfb21bb203dc527d0ba43a" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:58.748864 containerd[1474]: time="2025-03-20T21:14:58.748824827Z" level=info msg="Container 1088761d89eeee7fae99cf12fb214df2a9677e6e89af76806af010faa1862f56: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:58.778178 systemd[1]: Started cri-containerd-d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e.scope - libcontainer container d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e. 
Mar 20 21:14:58.779551 containerd[1474]: time="2025-03-20T21:14:58.779507080Z" level=info msg="CreateContainer within sandbox \"40ff23cd431fdff75d740edc5dccba3a49381028269039faba0c7eb6851eda8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1088761d89eeee7fae99cf12fb214df2a9677e6e89af76806af010faa1862f56\"" Mar 20 21:14:58.779933 containerd[1474]: time="2025-03-20T21:14:58.779909522Z" level=info msg="StartContainer for \"1088761d89eeee7fae99cf12fb214df2a9677e6e89af76806af010faa1862f56\"" Mar 20 21:14:58.780733 containerd[1474]: time="2025-03-20T21:14:58.780700445Z" level=info msg="connecting to shim 1088761d89eeee7fae99cf12fb214df2a9677e6e89af76806af010faa1862f56" address="unix:///run/containerd/s/f4b6e08577f424fbc418a577e366dce07fac01f8c8054465827c6d48e5fadb8f" protocol=ttrpc version=3 Mar 20 21:14:58.791115 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:14:58.808182 systemd[1]: Started cri-containerd-1088761d89eeee7fae99cf12fb214df2a9677e6e89af76806af010faa1862f56.scope - libcontainer container 1088761d89eeee7fae99cf12fb214df2a9677e6e89af76806af010faa1862f56. Mar 20 21:14:58.814287 containerd[1474]: time="2025-03-20T21:14:58.814240191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppnwc,Uid:26fc4c15-2d42-473b-98f6-70fe4b1ea3e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e\"" Mar 20 21:14:58.856075 containerd[1474]: time="2025-03-20T21:14:58.853825242Z" level=info msg="StartContainer for \"1088761d89eeee7fae99cf12fb214df2a9677e6e89af76806af010faa1862f56\" returns successfully" Mar 20 21:14:59.200452 systemd-networkd[1394]: calibeb12ed3ea9: Gained IPv6LL Mar 20 21:14:59.237961 containerd[1474]: time="2025-03-20T21:14:59.237363081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vk4r7,Uid:7a9e66cb-1550-4150-8ab9-6b7e2aed0f11,Namespace:kube-system,Attempt:0,}" Mar 20 21:14:59.373293 kubelet[2590]: I0320 21:14:59.372807 2590 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:14:59.386424 kubelet[2590]: I0320 21:14:59.386237 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2kx7z" podStartSLOduration=30.386218526 podStartE2EDuration="30.386218526s" podCreationTimestamp="2025-03-20 21:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:14:59.384148518 +0000 UTC m=+37.230238617" watchObservedRunningTime="2025-03-20 21:14:59.386218526 +0000 UTC m=+37.232308625" Mar 20 21:14:59.430905 systemd-networkd[1394]: calif010aa9cf7b: Link UP Mar 20 21:14:59.433135 systemd-networkd[1394]: calif010aa9cf7b: Gained carrier Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.289 [INFO][4525] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.315 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0 coredns-668d6bf9bc- kube-system 7a9e66cb-1550-4150-8ab9-6b7e2aed0f11 697 0 2025-03-20 21:14:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
localhost coredns-668d6bf9bc-vk4r7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif010aa9cf7b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk4r7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk4r7-" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.315 [INFO][4525] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk4r7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.353 [INFO][4555] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" HandleID="k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Workload="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.366 [INFO][4555] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" HandleID="k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Workload="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001f8710), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-vk4r7", "timestamp":"2025-03-20 21:14:59.353787154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.366 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.366 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.366 [INFO][4555] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.369 [INFO][4555] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" host="localhost" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.375 [INFO][4555] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.384 [INFO][4555] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.391 [INFO][4555] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.395 [INFO][4555] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.395 [INFO][4555] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" host="localhost" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.398 [INFO][4555] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8 Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.410 [INFO][4555] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" host="localhost" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.423 [INFO][4555] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" host="localhost" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.423 [INFO][4555] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" host="localhost" Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.423 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
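Aside: to pull the assignments back out of a journal like this one, a rough parser is enough. The regex below is inferred only from the "Successfully claimed IPs" lines visible above, not from any documented Calico log format, and the sample lines are abridged copies of those records.

// claimed_ips.go: rough parser for the "Successfully claimed IPs" records;
// the pattern is an assumption based on the lines in this journal.
package main

import (
	"fmt"
	"regexp"
)

var claimRe = regexp.MustCompile(`Successfully claimed IPs: \[([0-9./]+)\] block=([0-9./]+) handle="k8s-pod-network\.([0-9a-f]+)"`)

func main() {
	// Two records copied (abridged) from the journal above.
	lines := []string{
		`ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e" host="localhost"`,
		`ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" host="localhost"`,
	}
	for _, l := range lines {
		if m := claimRe.FindStringSubmatch(l); m != nil {
			fmt.Printf("ip=%s block=%s sandbox=%s\n", m[1], m[2], m[3])
		}
	}
}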
Mar 20 21:14:59.455560 containerd[1474]: 2025-03-20 21:14:59.423 [INFO][4555] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" HandleID="k8s-pod-network.52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Workload="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" Mar 20 21:14:59.456810 containerd[1474]: 2025-03-20 21:14:59.426 [INFO][4525] cni-plugin/k8s.go 386: Populated endpoint ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk4r7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7a9e66cb-1550-4150-8ab9-6b7e2aed0f11", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-vk4r7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif010aa9cf7b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:59.456810 containerd[1474]: 2025-03-20 21:14:59.426 [INFO][4525] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk4r7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" Mar 20 21:14:59.456810 containerd[1474]: 2025-03-20 21:14:59.426 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif010aa9cf7b ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk4r7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" Mar 20 21:14:59.456810 containerd[1474]: 2025-03-20 21:14:59.433 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk4r7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" Mar 20 21:14:59.456810 containerd[1474]: 2025-03-20 21:14:59.436 
[INFO][4525] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk4r7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7a9e66cb-1550-4150-8ab9-6b7e2aed0f11", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8", Pod:"coredns-668d6bf9bc-vk4r7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif010aa9cf7b", MAC:"36:e7:79:eb:1f:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:14:59.456810 containerd[1474]: 2025-03-20 21:14:59.450 [INFO][4525] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk4r7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk4r7-eth0" Mar 20 21:14:59.502566 containerd[1474]: time="2025-03-20T21:14:59.502502799Z" level=info msg="connecting to shim 52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8" address="unix:///run/containerd/s/2d344ce4c1d6c4a73a5d1d84ce8bb7e49df4b0372f66cf40e7f2e2443c818ca3" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:14:59.540361 systemd[1]: Started cri-containerd-52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8.scope - libcontainer container 52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8. 
Mar 20 21:14:59.555688 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:14:59.580437 containerd[1474]: time="2025-03-20T21:14:59.580273475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vk4r7,Uid:7a9e66cb-1550-4150-8ab9-6b7e2aed0f11,Namespace:kube-system,Attempt:0,} returns sandbox id \"52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8\"" Mar 20 21:14:59.583500 containerd[1474]: time="2025-03-20T21:14:59.583467088Z" level=info msg="CreateContainer within sandbox \"52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 21:14:59.596085 containerd[1474]: time="2025-03-20T21:14:59.595914778Z" level=info msg="Container 2776a07672121f6113c4ec0fbee1a7ec030a25f6f9546554f68323bdfcf0be08: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:59.642152 containerd[1474]: time="2025-03-20T21:14:59.642111486Z" level=info msg="CreateContainer within sandbox \"52b48e156857bf3d44b94afa69c7f94c591c9446f14347509eb46c70f2443db8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2776a07672121f6113c4ec0fbee1a7ec030a25f6f9546554f68323bdfcf0be08\"" Mar 20 21:14:59.643531 containerd[1474]: time="2025-03-20T21:14:59.643481972Z" level=info msg="StartContainer for \"2776a07672121f6113c4ec0fbee1a7ec030a25f6f9546554f68323bdfcf0be08\"" Mar 20 21:14:59.645862 containerd[1474]: time="2025-03-20T21:14:59.644706057Z" level=info msg="connecting to shim 2776a07672121f6113c4ec0fbee1a7ec030a25f6f9546554f68323bdfcf0be08" address="unix:///run/containerd/s/2d344ce4c1d6c4a73a5d1d84ce8bb7e49df4b0372f66cf40e7f2e2443c818ca3" protocol=ttrpc version=3 Mar 20 21:14:59.669211 systemd[1]: Started cri-containerd-2776a07672121f6113c4ec0fbee1a7ec030a25f6f9546554f68323bdfcf0be08.scope - libcontainer container 2776a07672121f6113c4ec0fbee1a7ec030a25f6f9546554f68323bdfcf0be08. 
Mar 20 21:14:59.722580 containerd[1474]: time="2025-03-20T21:14:59.721947451Z" level=info msg="StartContainer for \"2776a07672121f6113c4ec0fbee1a7ec030a25f6f9546554f68323bdfcf0be08\" returns successfully" Mar 20 21:14:59.776220 systemd-networkd[1394]: cali955fd6e4fd1: Gained IPv6LL Mar 20 21:14:59.935116 containerd[1474]: time="2025-03-20T21:14:59.935065357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:59.935808 containerd[1474]: time="2025-03-20T21:14:59.935762200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Mar 20 21:14:59.936614 containerd[1474]: time="2025-03-20T21:14:59.936543723Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:59.939013 containerd[1474]: time="2025-03-20T21:14:59.938798572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:14:59.939437 containerd[1474]: time="2025-03-20T21:14:59.939409254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 1.318957383s" Mar 20 21:14:59.939501 containerd[1474]: time="2025-03-20T21:14:59.939439694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Mar 20 21:14:59.940652 containerd[1474]: time="2025-03-20T21:14:59.940623899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 20 21:14:59.948739 containerd[1474]: time="2025-03-20T21:14:59.948637612Z" level=info msg="CreateContainer within sandbox \"5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 20 21:14:59.954141 containerd[1474]: time="2025-03-20T21:14:59.954107834Z" level=info msg="Container 1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:14:59.962431 containerd[1474]: time="2025-03-20T21:14:59.962380508Z" level=info msg="CreateContainer within sandbox \"5da578b337a4771785bfb47af59e35efe6db227a7d4f639f49d9aa0a65f14438\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d\"" Mar 20 21:14:59.962887 containerd[1474]: time="2025-03-20T21:14:59.962858030Z" level=info msg="StartContainer for \"1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d\"" Mar 20 21:14:59.964094 containerd[1474]: time="2025-03-20T21:14:59.964028674Z" level=info msg="connecting to shim 1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d" address="unix:///run/containerd/s/e98d8fa2bde278c9089fafa47a40d279267a8adbbc24016777871b3dec600ef9" protocol=ttrpc version=3 Mar 20 21:14:59.968172 systemd-networkd[1394]: cali73c9c3e940c: Gained IPv6LL Mar 20 
21:14:59.989216 systemd[1]: Started cri-containerd-1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d.scope - libcontainer container 1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d. Mar 20 21:15:00.030943 containerd[1474]: time="2025-03-20T21:15:00.030899899Z" level=info msg="StartContainer for \"1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d\" returns successfully" Mar 20 21:15:00.032924 systemd-networkd[1394]: cali4927b13ba84: Gained IPv6LL Mar 20 21:15:00.392802 kubelet[2590]: I0320 21:15:00.392648 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fcdc99dbb-gwg2n" podStartSLOduration=23.07262645 podStartE2EDuration="24.392630797s" podCreationTimestamp="2025-03-20 21:14:36 +0000 UTC" firstStartedPulling="2025-03-20 21:14:58.62016871 +0000 UTC m=+36.466258809" lastFinishedPulling="2025-03-20 21:14:59.940173097 +0000 UTC m=+37.786263156" observedRunningTime="2025-03-20 21:15:00.391737473 +0000 UTC m=+38.237827572" watchObservedRunningTime="2025-03-20 21:15:00.392630797 +0000 UTC m=+38.238720896" Mar 20 21:15:00.404078 kubelet[2590]: I0320 21:15:00.402808 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vk4r7" podStartSLOduration=31.402791676 podStartE2EDuration="31.402791676s" podCreationTimestamp="2025-03-20 21:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:15:00.402662555 +0000 UTC m=+38.248752654" watchObservedRunningTime="2025-03-20 21:15:00.402791676 +0000 UTC m=+38.248881775" Mar 20 21:15:00.472610 containerd[1474]: time="2025-03-20T21:15:00.472569661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d\" id:\"0ced4b9e696407cacfa3f6e82065751e12d24d5717f94794ae2f75019cca3345\" pid:4745 exited_at:{seconds:1742505300 nanos:471651658}" Mar 20 21:15:00.837515 kubelet[2590]: I0320 21:15:00.837334 2590 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:15:00.932374 containerd[1474]: time="2025-03-20T21:15:00.932317253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:15:00.932879 containerd[1474]: time="2025-03-20T21:15:00.932826935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 20 21:15:00.933963 containerd[1474]: time="2025-03-20T21:15:00.933927659Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:15:00.936564 containerd[1474]: time="2025-03-20T21:15:00.936524269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:15:00.937323 containerd[1474]: time="2025-03-20T21:15:00.937226472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 996.571413ms" Mar 
20 21:15:00.937323 containerd[1474]: time="2025-03-20T21:15:00.937260912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 20 21:15:00.940629 containerd[1474]: time="2025-03-20T21:15:00.940290004Z" level=info msg="CreateContainer within sandbox \"d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 20 21:15:00.953705 containerd[1474]: time="2025-03-20T21:15:00.953640774Z" level=info msg="Container b628b25905c516c18281a1ee5a00a84ca72bc3ad8fdd821363e19666b073466c: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:15:00.956559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1774530199.mount: Deactivated successfully. Mar 20 21:15:00.981942 containerd[1474]: time="2025-03-20T21:15:00.981894522Z" level=info msg="CreateContainer within sandbox \"d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b628b25905c516c18281a1ee5a00a84ca72bc3ad8fdd821363e19666b073466c\"" Mar 20 21:15:00.982860 containerd[1474]: time="2025-03-20T21:15:00.982824366Z" level=info msg="StartContainer for \"b628b25905c516c18281a1ee5a00a84ca72bc3ad8fdd821363e19666b073466c\"" Mar 20 21:15:00.984552 containerd[1474]: time="2025-03-20T21:15:00.984526012Z" level=info msg="connecting to shim b628b25905c516c18281a1ee5a00a84ca72bc3ad8fdd821363e19666b073466c" address="unix:///run/containerd/s/52e41b0759b5846476c81f96fe6ec79c39a978a6b7cfb21bb203dc527d0ba43a" protocol=ttrpc version=3 Mar 20 21:15:01.015210 systemd[1]: Started cri-containerd-b628b25905c516c18281a1ee5a00a84ca72bc3ad8fdd821363e19666b073466c.scope - libcontainer container b628b25905c516c18281a1ee5a00a84ca72bc3ad8fdd821363e19666b073466c. Mar 20 21:15:01.051154 containerd[1474]: time="2025-03-20T21:15:01.051018454Z" level=info msg="StartContainer for \"b628b25905c516c18281a1ee5a00a84ca72bc3ad8fdd821363e19666b073466c\" returns successfully" Mar 20 21:15:01.053409 containerd[1474]: time="2025-03-20T21:15:01.053372462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 20 21:15:01.248185 systemd-networkd[1394]: calif010aa9cf7b: Gained IPv6LL Mar 20 21:15:01.278703 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:37582.service - OpenSSH per-connection server daemon (10.0.0.1:37582). Mar 20 21:15:01.349226 sshd[4797]: Accepted publickey for core from 10.0.0.1 port 37582 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:01.350824 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:01.356093 systemd-logind[1460]: New session 10 of user core. Mar 20 21:15:01.368502 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 20 21:15:01.552309 sshd[4807]: Connection closed by 10.0.0.1 port 37582 Mar 20 21:15:01.552776 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:01.561905 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:37582.service: Deactivated successfully. Mar 20 21:15:01.563625 systemd[1]: session-10.scope: Deactivated successfully. Mar 20 21:15:01.564463 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Mar 20 21:15:01.566406 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:37592.service - OpenSSH per-connection server daemon (10.0.0.1:37592). 
Mar 20 21:15:01.567660 systemd-logind[1460]: Removed session 10. Mar 20 21:15:01.626979 sshd[4832]: Accepted publickey for core from 10.0.0.1 port 37592 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:01.628223 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:01.632141 systemd-logind[1460]: New session 11 of user core. Mar 20 21:15:01.642200 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 20 21:15:01.779754 kubelet[2590]: I0320 21:15:01.779708 2590 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:15:01.889704 containerd[1474]: time="2025-03-20T21:15:01.889346968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43\" id:\"23a55b7f14a6667ebed41ba7b2b1b40ab150c117820f753d2038083a1f9d66b4\" pid:4858 exit_status:1 exited_at:{seconds:1742505301 nanos:887510882}" Mar 20 21:15:01.957933 containerd[1474]: time="2025-03-20T21:15:01.957842933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43\" id:\"a1c24c8c5e25506b33cc8f5bbb575a24bb2e1963dc442ef686d5a42530931c22\" pid:4883 exit_status:1 exited_at:{seconds:1742505301 nanos:957482371}" Mar 20 21:15:01.963601 sshd[4835]: Connection closed by 10.0.0.1 port 37592 Mar 20 21:15:01.964207 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:01.978113 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:37592.service: Deactivated successfully. Mar 20 21:15:01.983300 systemd[1]: session-11.scope: Deactivated successfully. Mar 20 21:15:01.985201 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. Mar 20 21:15:01.989167 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:37596.service - OpenSSH per-connection server daemon (10.0.0.1:37596). Mar 20 21:15:01.990115 systemd-logind[1460]: Removed session 11. Mar 20 21:15:02.053270 sshd[4898]: Accepted publickey for core from 10.0.0.1 port 37596 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:02.054591 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:02.059427 systemd-logind[1460]: New session 12 of user core. Mar 20 21:15:02.075199 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 20 21:15:02.250642 sshd[4901]: Connection closed by 10.0.0.1 port 37596 Mar 20 21:15:02.251609 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:02.255259 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:37596.service: Deactivated successfully. Mar 20 21:15:02.257535 systemd[1]: session-12.scope: Deactivated successfully. Mar 20 21:15:02.258330 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Mar 20 21:15:02.259460 systemd-logind[1460]: Removed session 12. 
Mar 20 21:15:02.539786 containerd[1474]: time="2025-03-20T21:15:02.539682851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:15:02.541454 containerd[1474]: time="2025-03-20T21:15:02.541338776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 20 21:15:02.542261 containerd[1474]: time="2025-03-20T21:15:02.542230859Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:15:02.544053 containerd[1474]: time="2025-03-20T21:15:02.543998065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:15:02.544671 containerd[1474]: time="2025-03-20T21:15:02.544631387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.491220245s" Mar 20 21:15:02.544703 containerd[1474]: time="2025-03-20T21:15:02.544667228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 20 21:15:02.546555 containerd[1474]: time="2025-03-20T21:15:02.546519194Z" level=info msg="CreateContainer within sandbox \"d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 20 21:15:02.553080 kubelet[2590]: I0320 21:15:02.553051 2590 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:15:02.554639 containerd[1474]: time="2025-03-20T21:15:02.554603541Z" level=info msg="Container c4bd8bc137cb4372cb39756a2ddbb1a150d962f5aa094355544ad76048b24e88: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:15:02.566846 containerd[1474]: time="2025-03-20T21:15:02.566784982Z" level=info msg="CreateContainer within sandbox \"d34485d8992701bc9946d0301630dd8d0fb355d7b49c25c34c1ad19aa1ca110e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c4bd8bc137cb4372cb39756a2ddbb1a150d962f5aa094355544ad76048b24e88\"" Mar 20 21:15:02.567663 containerd[1474]: time="2025-03-20T21:15:02.567569344Z" level=info msg="StartContainer for \"c4bd8bc137cb4372cb39756a2ddbb1a150d962f5aa094355544ad76048b24e88\"" Mar 20 21:15:02.571270 containerd[1474]: time="2025-03-20T21:15:02.571239597Z" level=info msg="connecting to shim c4bd8bc137cb4372cb39756a2ddbb1a150d962f5aa094355544ad76048b24e88" address="unix:///run/containerd/s/52e41b0759b5846476c81f96fe6ec79c39a978a6b7cfb21bb203dc527d0ba43a" protocol=ttrpc version=3 Mar 20 21:15:02.625295 systemd[1]: Started cri-containerd-c4bd8bc137cb4372cb39756a2ddbb1a150d962f5aa094355544ad76048b24e88.scope - libcontainer container c4bd8bc137cb4372cb39756a2ddbb1a150d962f5aa094355544ad76048b24e88. 
Mar 20 21:15:02.684722 containerd[1474]: time="2025-03-20T21:15:02.683136331Z" level=info msg="StartContainer for \"c4bd8bc137cb4372cb39756a2ddbb1a150d962f5aa094355544ad76048b24e88\" returns successfully" Mar 20 21:15:03.128089 kernel: bpftool[4995]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 20 21:15:03.297032 systemd-networkd[1394]: vxlan.calico: Link UP Mar 20 21:15:03.297423 systemd-networkd[1394]: vxlan.calico: Gained carrier Mar 20 21:15:03.314057 kubelet[2590]: I0320 21:15:03.313986 2590 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 20 21:15:03.316249 kubelet[2590]: I0320 21:15:03.316196 2590 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 20 21:15:03.407772 kubelet[2590]: I0320 21:15:03.407452 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ppnwc" podStartSLOduration=23.682035297 podStartE2EDuration="27.407436712s" podCreationTimestamp="2025-03-20 21:14:36 +0000 UTC" firstStartedPulling="2025-03-20 21:14:58.819875855 +0000 UTC m=+36.665965954" lastFinishedPulling="2025-03-20 21:15:02.54527727 +0000 UTC m=+40.391367369" observedRunningTime="2025-03-20 21:15:03.406252948 +0000 UTC m=+41.252343047" watchObservedRunningTime="2025-03-20 21:15:03.407436712 +0000 UTC m=+41.253526811" Mar 20 21:15:04.769216 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Mar 20 21:15:07.267382 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:42576.service - OpenSSH per-connection server daemon (10.0.0.1:42576). Mar 20 21:15:07.336102 sshd[5119]: Accepted publickey for core from 10.0.0.1 port 42576 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:07.337518 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:07.344147 systemd-logind[1460]: New session 13 of user core. Mar 20 21:15:07.354188 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 20 21:15:07.517705 sshd[5121]: Connection closed by 10.0.0.1 port 42576 Mar 20 21:15:07.517954 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:07.533294 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:42576.service: Deactivated successfully. Mar 20 21:15:07.534878 systemd[1]: session-13.scope: Deactivated successfully. Mar 20 21:15:07.535605 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. Mar 20 21:15:07.537423 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:42588.service - OpenSSH per-connection server daemon (10.0.0.1:42588). Mar 20 21:15:07.538309 systemd-logind[1460]: Removed session 13. Mar 20 21:15:07.589896 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 42588 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:07.591031 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:07.594893 systemd-logind[1460]: New session 14 of user core. Mar 20 21:15:07.609200 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 20 21:15:07.822426 sshd[5138]: Connection closed by 10.0.0.1 port 42588 Mar 20 21:15:07.823058 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:07.835277 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:42588.service: Deactivated successfully. 
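Aside: the pod_startup_latency_tracker record above for csi-node-driver-ppnwc reports two numbers. podStartSLOduration appears to be the end-to-end duration minus the image-pull window (the upstream pod-startup SLI is defined to exclude image pulls); the arithmetic below, using only the timestamps printed in that record, is consistent with that reading.

// startup_latency.go: re-derive the csi-node-driver-ppnwc numbers from
// the timestamps printed by pod_startup_latency_tracker above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-03-20 21:14:36 +0000 UTC")            // podCreationTimestamp
	firstPull := mustParse("2025-03-20 21:14:58.819875855 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-03-20 21:15:02.54527727 +0000 UTC")   // lastFinishedPulling
	running := mustParse("2025-03-20 21:15:03.407436712 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 27.407436712s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 23.682035297s, matches podStartSLOduration
	fmt.Println(e2e, slo)
}

For the two coredns pods earlier in the log, firstStartedPulling and lastFinishedPulling are the zero time, which is why their podStartSLOduration and podStartE2EDuration coincide (30.386218526s and 31.402791676s respectively).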
Mar 20 21:15:07.838723 systemd[1]: session-14.scope: Deactivated successfully. Mar 20 21:15:07.839380 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. Mar 20 21:15:07.841155 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:42602.service - OpenSSH per-connection server daemon (10.0.0.1:42602). Mar 20 21:15:07.842407 systemd-logind[1460]: Removed session 14. Mar 20 21:15:07.902776 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 42602 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:07.904324 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:07.908749 systemd-logind[1460]: New session 15 of user core. Mar 20 21:15:07.922182 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 20 21:15:08.647898 sshd[5152]: Connection closed by 10.0.0.1 port 42602 Mar 20 21:15:08.649470 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:08.663779 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:42604.service - OpenSSH per-connection server daemon (10.0.0.1:42604). Mar 20 21:15:08.664743 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:42602.service: Deactivated successfully. Mar 20 21:15:08.667921 systemd[1]: session-15.scope: Deactivated successfully. Mar 20 21:15:08.669613 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. Mar 20 21:15:08.672946 systemd-logind[1460]: Removed session 15. Mar 20 21:15:08.721033 sshd[5177]: Accepted publickey for core from 10.0.0.1 port 42604 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:08.722176 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:08.726209 systemd-logind[1460]: New session 16 of user core. Mar 20 21:15:08.737221 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 20 21:15:09.046684 sshd[5182]: Connection closed by 10.0.0.1 port 42604 Mar 20 21:15:09.047601 sshd-session[5177]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:09.058670 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:42604.service: Deactivated successfully. Mar 20 21:15:09.060597 systemd[1]: session-16.scope: Deactivated successfully. Mar 20 21:15:09.061397 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. Mar 20 21:15:09.063970 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:42610.service - OpenSSH per-connection server daemon (10.0.0.1:42610). Mar 20 21:15:09.065575 systemd-logind[1460]: Removed session 16. Mar 20 21:15:09.115539 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 42610 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:09.117624 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:09.123230 systemd-logind[1460]: New session 17 of user core. Mar 20 21:15:09.131208 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 20 21:15:09.276974 sshd[5196]: Connection closed by 10.0.0.1 port 42610 Mar 20 21:15:09.276823 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:09.280126 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:42610.service: Deactivated successfully. Mar 20 21:15:09.282372 systemd[1]: session-17.scope: Deactivated successfully. Mar 20 21:15:09.283623 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. Mar 20 21:15:09.284595 systemd-logind[1460]: Removed session 17. 
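Aside: sessions 10 through 17 above are short-lived logins of user core that open and close within seconds. A sketch for pairing the systemd-logind "New session"/"Removed session" records to get per-session wall time; the two sample lines are copied from the journal above, and the year is supplied by hand because the journal prefix omits it.

// ssh_sessions.go: pair "New session N" with "Removed session N"
// journal records to compute per-session duration.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var (
	newRe     = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) .*New session (\d+) of user core`)
	removedRe = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

// parseTS appends the year 2025 (taken from the log body) because the
// journal prefix carries none.
func parseTS(s string) time.Time {
	t, _ := time.Parse("Jan 2 15:04:05.000000 2006", s+" 2025")
	return t
}

func main() {
	lines := []string{
		"Mar 20 21:15:07.344147 systemd-logind[1460]: New session 13 of user core.",
		"Mar 20 21:15:07.538309 systemd-logind[1460]: Removed session 13.",
	}
	opened := map[string]time.Time{}
	for _, l := range lines {
		if m := newRe.FindStringSubmatch(l); m != nil {
			opened[m[2]] = parseTS(m[1])
		} else if m := removedRe.FindStringSubmatch(l); m != nil {
			// session 13 lasted ~194ms
			fmt.Printf("session %s lasted %s\n", m[2], parseTS(m[1]).Sub(opened[m[2]]))
		}
	}
}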
Mar 20 21:15:14.288887 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:47742.service - OpenSSH per-connection server daemon (10.0.0.1:47742). Mar 20 21:15:14.344245 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 47742 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:14.345433 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:14.349550 systemd-logind[1460]: New session 18 of user core. Mar 20 21:15:14.361203 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 20 21:15:14.501692 sshd[5220]: Connection closed by 10.0.0.1 port 47742 Mar 20 21:15:14.502572 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:14.508842 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:47742.service: Deactivated successfully. Mar 20 21:15:14.510679 systemd[1]: session-18.scope: Deactivated successfully. Mar 20 21:15:14.511406 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. Mar 20 21:15:14.512389 systemd-logind[1460]: Removed session 18. Mar 20 21:15:19.517024 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:47746.service - OpenSSH per-connection server daemon (10.0.0.1:47746). Mar 20 21:15:19.588103 sshd[5237]: Accepted publickey for core from 10.0.0.1 port 47746 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:19.589473 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:19.593645 systemd-logind[1460]: New session 19 of user core. Mar 20 21:15:19.603203 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 20 21:15:19.759305 sshd[5239]: Connection closed by 10.0.0.1 port 47746 Mar 20 21:15:19.760008 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:19.763506 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:47746.service: Deactivated successfully. Mar 20 21:15:19.765273 systemd[1]: session-19.scope: Deactivated successfully. Mar 20 21:15:19.765917 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. Mar 20 21:15:19.766800 systemd-logind[1460]: Removed session 19. Mar 20 21:15:20.463591 containerd[1474]: time="2025-03-20T21:15:20.463554624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d\" id:\"8e6e5457ad6d334df1bfae9f5145294a3aa3964639e96e8c631cc841b8dd5380\" pid:5263 exited_at:{seconds:1742505320 nanos:463308224}" Mar 20 21:15:24.774859 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:43404.service - OpenSSH per-connection server daemon (10.0.0.1:43404). Mar 20 21:15:24.826126 sshd[5284]: Accepted publickey for core from 10.0.0.1 port 43404 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:24.827367 sshd-session[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:24.833103 systemd-logind[1460]: New session 20 of user core. Mar 20 21:15:24.841214 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 20 21:15:25.006602 sshd[5286]: Connection closed by 10.0.0.1 port 43404 Mar 20 21:15:25.007335 sshd-session[5284]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:25.012163 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:43404.service: Deactivated successfully. Mar 20 21:15:25.014801 systemd[1]: session-20.scope: Deactivated successfully. Mar 20 21:15:25.015835 systemd-logind[1460]: Session 20 logged out. 
Waiting for processes to exit. Mar 20 21:15:25.016873 systemd-logind[1460]: Removed session 20. Mar 20 21:15:30.019803 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:43418.service - OpenSSH per-connection server daemon (10.0.0.1:43418). Mar 20 21:15:30.085585 sshd[5302]: Accepted publickey for core from 10.0.0.1 port 43418 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:15:30.086852 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:15:30.090746 systemd-logind[1460]: New session 21 of user core. Mar 20 21:15:30.100274 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 20 21:15:30.245178 sshd[5304]: Connection closed by 10.0.0.1 port 43418 Mar 20 21:15:30.245639 sshd-session[5302]: pam_unix(sshd:session): session closed for user core Mar 20 21:15:30.249826 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:43418.service: Deactivated successfully. Mar 20 21:15:30.252383 systemd[1]: session-21.scope: Deactivated successfully. Mar 20 21:15:30.253186 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit. Mar 20 21:15:30.254035 systemd-logind[1460]: Removed session 21. Mar 20 21:15:30.415878 containerd[1474]: time="2025-03-20T21:15:30.415740947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d2656ff17438b62d1e960491190ed006243b36c7b300c13f318bb7aa8d8841d\" id:\"42dd0b1ea4ee2379095000ffd5399b4920741b89171024304a94f239d1e5c18e\" pid:5330 exited_at:{seconds:1742505330 nanos:415490008}" Mar 20 21:15:31.946679 containerd[1474]: time="2025-03-20T21:15:31.946632128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84519f09dfd91e0b75e94fb0fe8a46a914e0d13bd40c2be2ac19146dcc7e8f43\" id:\"21f5ea2c9c44c814b33e94adafc709585fbefc9cb39bf1225fd2b4fd3abd7dda\" pid:5351 exited_at:{seconds:1742505331 nanos:946364348}"
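Aside: the recurring TaskExit events at the end carry Unix-epoch exited_at timestamps. Converting the last one back to UTC shows it agrees (to within a fraction of a millisecond) with the 2025-03-20T21:15:31 journal prefix of the same record.

// exited_at.go: convert the exited_at timestamp from the final TaskExit
// event above back into a wall-clock time.
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Unix(1742505331, 946364348).UTC()
	fmt.Println(t) // 2025-03-20 21:15:31.946364348 +0000 UTC
}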