Feb 13 19:26:24.931211 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 19:26:24.931236 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025 Feb 13 19:26:24.931247 kernel: KASLR enabled Feb 13 19:26:24.931253 kernel: efi: EFI v2.7 by EDK II Feb 13 19:26:24.931260 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Feb 13 19:26:24.931266 kernel: random: crng init done Feb 13 19:26:24.931273 kernel: ACPI: Early table checksum verification disabled Feb 13 19:26:24.931280 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Feb 13 19:26:24.931287 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:26:24.931295 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:26:24.931301 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:26:24.931308 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:26:24.931314 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:26:24.931320 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:26:24.931328 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:26:24.931337 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:26:24.931344 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:26:24.931350 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:26:24.931357 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 19:26:24.931364 kernel: NUMA: Failed to initialise from firmware Feb 13 19:26:24.931370 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:26:24.931377 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Feb 13 19:26:24.931384 kernel: Zone ranges: Feb 13 19:26:24.931390 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:26:24.931397 kernel: DMA32 empty Feb 13 19:26:24.931404 kernel: Normal empty Feb 13 19:26:24.931411 kernel: Movable zone start for each node Feb 13 19:26:24.931417 kernel: Early memory node ranges Feb 13 19:26:24.931424 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Feb 13 19:26:24.931430 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 19:26:24.931437 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 19:26:24.931443 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 19:26:24.931449 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 19:26:24.931457 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 19:26:24.931464 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 19:26:24.931470 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:26:24.931477 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 19:26:24.931485 kernel: psci: probing for conduit method from ACPI. Feb 13 19:26:24.931491 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 13 19:26:24.931498 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:26:24.931508 kernel: psci: Trusted OS migration not required Feb 13 19:26:24.931515 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:26:24.931522 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 19:26:24.931531 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:26:24.931538 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:26:24.931545 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 19:26:24.931552 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:26:24.931559 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:26:24.931566 kernel: CPU features: detected: Hardware dirty bit management Feb 13 19:26:24.931573 kernel: CPU features: detected: Spectre-v4 Feb 13 19:26:24.931580 kernel: CPU features: detected: Spectre-BHB Feb 13 19:26:24.931587 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 19:26:24.931594 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 19:26:24.931603 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 19:26:24.931610 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 19:26:24.931617 kernel: alternatives: applying boot alternatives Feb 13 19:26:24.931624 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:26:24.931632 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:26:24.931639 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:26:24.931645 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:26:24.931660 kernel: Fallback order for Node 0: 0 Feb 13 19:26:24.931668 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 19:26:24.931675 kernel: Policy zone: DMA Feb 13 19:26:24.931681 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:26:24.931690 kernel: software IO TLB: area num 4. Feb 13 19:26:24.931697 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 19:26:24.931705 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved) Feb 13 19:26:24.931712 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:26:24.931719 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:26:24.931726 kernel: rcu: RCU event tracing is enabled. Feb 13 19:26:24.931733 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:26:24.931740 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:26:24.931747 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:26:24.931754 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 19:26:24.931761 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:26:24.931768 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:26:24.931777 kernel: GICv3: 256 SPIs implemented Feb 13 19:26:24.931784 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:26:24.931792 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:26:24.931799 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 19:26:24.931807 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 19:26:24.931814 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 19:26:24.931821 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:26:24.931829 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:26:24.931836 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 19:26:24.931843 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 19:26:24.931850 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:26:24.931859 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:26:24.931866 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 19:26:24.931873 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 19:26:24.931880 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 19:26:24.931887 kernel: arm-pv: using stolen time PV Feb 13 19:26:24.931894 kernel: Console: colour dummy device 80x25 Feb 13 19:26:24.931901 kernel: ACPI: Core revision 20230628 Feb 13 19:26:24.931909 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 19:26:24.931916 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:26:24.931923 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:26:24.931932 kernel: landlock: Up and running. Feb 13 19:26:24.931939 kernel: SELinux: Initializing. Feb 13 19:26:24.931946 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:26:24.931953 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:26:24.931961 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:26:24.931968 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:26:24.931984 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:26:24.931991 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:26:24.931998 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 19:26:24.932007 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 19:26:24.932014 kernel: Remapping and enabling EFI services. Feb 13 19:26:24.932021 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 19:26:24.932028 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:26:24.932035 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 19:26:24.932043 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 19:26:24.932050 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:26:24.932058 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 19:26:24.932065 kernel: Detected PIPT I-cache on CPU2 Feb 13 19:26:24.932072 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 19:26:24.932081 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 19:26:24.932088 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:26:24.932100 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 19:26:24.932109 kernel: Detected PIPT I-cache on CPU3 Feb 13 19:26:24.932117 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 19:26:24.932124 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 19:26:24.932132 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:26:24.932139 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 19:26:24.932147 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:26:24.932157 kernel: SMP: Total of 4 processors activated. Feb 13 19:26:24.932164 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:26:24.932172 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 19:26:24.932180 kernel: CPU features: detected: Common not Private translations Feb 13 19:26:24.932188 kernel: CPU features: detected: CRC32 instructions Feb 13 19:26:24.932195 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 19:26:24.932202 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 19:26:24.932210 kernel: CPU features: detected: LSE atomic instructions Feb 13 19:26:24.932219 kernel: CPU features: detected: Privileged Access Never Feb 13 19:26:24.932227 kernel: CPU features: detected: RAS Extension Support Feb 13 19:26:24.932234 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 19:26:24.932242 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:26:24.932249 kernel: alternatives: applying system-wide alternatives Feb 13 19:26:24.932257 kernel: devtmpfs: initialized Feb 13 19:26:24.932265 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:26:24.932272 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:26:24.932280 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:26:24.932289 kernel: SMBIOS 3.0.0 present. 
Feb 13 19:26:24.932296 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Feb 13 19:26:24.932304 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:26:24.932311 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:26:24.932319 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:26:24.932327 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:26:24.932335 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:26:24.932348 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1 Feb 13 19:26:24.932355 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:26:24.932365 kernel: cpuidle: using governor menu Feb 13 19:26:24.932373 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 19:26:24.932380 kernel: ASID allocator initialised with 32768 entries Feb 13 19:26:24.932388 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:26:24.932395 kernel: Serial: AMBA PL011 UART driver Feb 13 19:26:24.932402 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 19:26:24.932410 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 19:26:24.932417 kernel: Modules: 509040 pages in range for PLT usage Feb 13 19:26:24.932424 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:26:24.932433 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:26:24.932440 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:26:24.932448 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:26:24.932455 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:26:24.932463 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:26:24.932470 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 19:26:24.932477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:26:24.932485 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:26:24.932492 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:26:24.932502 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:26:24.932509 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:26:24.932517 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:26:24.932524 kernel: ACPI: Interpreter enabled Feb 13 19:26:24.932532 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:26:24.932539 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:26:24.932547 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 19:26:24.932555 kernel: printk: console [ttyAMA0] enabled Feb 13 19:26:24.932562 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:26:24.932710 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:26:24.932785 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:26:24.932852 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:26:24.932921 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 19:26:24.933007 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 19:26:24.933019 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:26:24.933026 kernel: PCI host bridge to bus 0000:00 Feb 13 19:26:24.933107 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 19:26:24.933169 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:26:24.933230 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 19:26:24.933289 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:26:24.933377 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 19:26:24.933492 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:26:24.933565 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 19:26:24.933636 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 19:26:24.933752 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:26:24.933825 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:26:24.933892 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 19:26:24.933960 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 19:26:24.934055 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 19:26:24.934125 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:26:24.934187 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 19:26:24.934197 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:26:24.934205 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:26:24.934213 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:26:24.934221 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:26:24.934229 kernel: iommu: Default domain type: Translated Feb 13 19:26:24.934237 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:26:24.934244 kernel: efivars: Registered efivars operations Feb 13 19:26:24.934254 kernel: vgaarb: loaded Feb 13 19:26:24.934262 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:26:24.934270 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:26:24.934278 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:26:24.934290 kernel: pnp: PnP ACPI init Feb 13 19:26:24.934373 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 19:26:24.934385 kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:26:24.934395 kernel: NET: Registered PF_INET protocol family Feb 13 19:26:24.934408 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:26:24.934416 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:26:24.934425 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:26:24.934433 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:26:24.934441 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:26:24.934449 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:26:24.934457 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:26:24.934468 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:26:24.934476 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:26:24.934485 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:26:24.934493 kernel: kvm [1]: HYP mode not available Feb 13 19:26:24.934501 kernel: Initialise system trusted keyrings Feb 13 19:26:24.934508 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:26:24.934516 kernel: Key type asymmetric registered Feb 13 19:26:24.934524 kernel: Asymmetric key parser 'x509' registered Feb 13 19:26:24.934532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:26:24.934540 kernel: io scheduler mq-deadline registered Feb 13 19:26:24.934551 kernel: io scheduler kyber registered Feb 13 19:26:24.934565 kernel: io scheduler bfq registered Feb 13 19:26:24.934574 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:26:24.934582 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:26:24.934591 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:26:24.934673 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 13 19:26:24.934685 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:26:24.934692 kernel: thunder_xcv, ver 1.0 Feb 13 19:26:24.934700 kernel: thunder_bgx, ver 1.0 Feb 13 19:26:24.934707 kernel: nicpf, ver 1.0 Feb 13 19:26:24.934717 kernel: nicvf, ver 1.0 Feb 13 19:26:24.934800 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:26:24.934868 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:26:24 UTC (1739474784) Feb 13 19:26:24.934879 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:26:24.934886 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:26:24.934894 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:26:24.934902 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:26:24.934910 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:26:24.934920 kernel: Segment Routing with IPv6 Feb 13 19:26:24.934928 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:26:24.934936 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:26:24.934943 kernel: Key type dns_resolver registered Feb 13 19:26:24.934951 kernel: registered taskstats version 1 Feb 13 19:26:24.934959 kernel: Loading compiled-in X.509 certificates Feb 13 19:26:24.934967 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 13 19:26:24.934992 kernel: Key type .fscrypt registered Feb 13 19:26:24.935000 kernel: Key type fscrypt-provisioning registered Feb 13 19:26:24.935011 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:26:24.935019 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:26:24.935026 kernel: ima: No architecture policies found Feb 13 19:26:24.935034 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:26:24.935042 kernel: clk: Disabling unused clocks Feb 13 19:26:24.935049 kernel: Freeing unused kernel memory: 39360K Feb 13 19:26:24.935058 kernel: Run /init as init process Feb 13 19:26:24.935065 kernel: with arguments: Feb 13 19:26:24.935073 kernel: /init Feb 13 19:26:24.935082 kernel: with environment: Feb 13 19:26:24.935089 kernel: HOME=/ Feb 13 19:26:24.935097 kernel: TERM=linux Feb 13 19:26:24.935105 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:26:24.935114 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:26:24.935125 systemd[1]: Detected virtualization kvm. Feb 13 19:26:24.935133 systemd[1]: Detected architecture arm64. Feb 13 19:26:24.935142 systemd[1]: Running in initrd. Feb 13 19:26:24.935152 systemd[1]: No hostname configured, using default hostname. Feb 13 19:26:24.935160 systemd[1]: Hostname set to . Feb 13 19:26:24.935169 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:26:24.935178 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:26:24.935186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:26:24.935194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:26:24.935203 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:26:24.935213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:26:24.935222 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:26:24.935231 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:26:24.935240 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:26:24.935249 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:26:24.935257 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:26:24.935271 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:26:24.935286 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:26:24.935295 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:26:24.935304 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:26:24.935312 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:26:24.935321 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:26:24.935329 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:26:24.935338 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:26:24.935347 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:26:24.935355 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 19:26:24.935366 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:26:24.935374 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:26:24.935382 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:26:24.935391 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:26:24.935399 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:26:24.935408 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:26:24.935416 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:26:24.935425 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:26:24.935435 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:26:24.935443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:26:24.935451 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:26:24.935460 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:26:24.935468 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:26:24.935477 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:26:24.935487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:26:24.935495 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:26:24.935524 systemd-journald[237]: Collecting audit messages is disabled. Feb 13 19:26:24.935547 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:26:24.935557 systemd-journald[237]: Journal started Feb 13 19:26:24.935576 systemd-journald[237]: Runtime Journal (/run/log/journal/46d2f5a37e034824a7bd27fbafe25607) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:26:24.915278 systemd-modules-load[239]: Inserted module 'overlay' Feb 13 19:26:24.939990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:26:24.940033 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:26:24.941811 systemd-modules-load[239]: Inserted module 'br_netfilter' Feb 13 19:26:24.943676 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:26:24.943698 kernel: Bridge firewalling registered Feb 13 19:26:24.943424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:26:24.951151 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:26:24.953689 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:26:24.954934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:26:24.956773 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:26:24.959663 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:26:24.960642 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:26:24.964916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:26:24.967884 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 19:26:24.975337 dracut-cmdline[276]: dracut-dracut-053 Feb 13 19:26:24.978668 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:26:25.001958 systemd-resolved[281]: Positive Trust Anchors: Feb 13 19:26:25.001984 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:26:25.002016 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:26:25.007357 systemd-resolved[281]: Defaulting to hostname 'linux'. Feb 13 19:26:25.008922 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:26:25.011201 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:26:25.057027 kernel: SCSI subsystem initialized Feb 13 19:26:25.064000 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:26:25.072015 kernel: iscsi: registered transport (tcp) Feb 13 19:26:25.087332 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:26:25.087363 kernel: QLogic iSCSI HBA Driver Feb 13 19:26:25.131674 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:26:25.144205 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:26:25.159158 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:26:25.160071 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:26:25.160084 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:26:25.206045 kernel: raid6: neonx8 gen() 15758 MB/s Feb 13 19:26:25.223013 kernel: raid6: neonx4 gen() 15640 MB/s Feb 13 19:26:25.240010 kernel: raid6: neonx2 gen() 13221 MB/s Feb 13 19:26:25.257013 kernel: raid6: neonx1 gen() 10437 MB/s Feb 13 19:26:25.274002 kernel: raid6: int64x8 gen() 6952 MB/s Feb 13 19:26:25.290995 kernel: raid6: int64x4 gen() 7287 MB/s Feb 13 19:26:25.308010 kernel: raid6: int64x2 gen() 6118 MB/s Feb 13 19:26:25.325006 kernel: raid6: int64x1 gen() 5047 MB/s Feb 13 19:26:25.325041 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s Feb 13 19:26:25.342018 kernel: raid6: .... xor() 11915 MB/s, rmw enabled Feb 13 19:26:25.342063 kernel: raid6: using neon recovery algorithm Feb 13 19:26:25.347144 kernel: xor: measuring software checksum speed Feb 13 19:26:25.347171 kernel: 8regs : 19322 MB/sec Feb 13 19:26:25.348185 kernel: 32regs : 19688 MB/sec Feb 13 19:26:25.348198 kernel: arm64_neon : 27070 MB/sec Feb 13 19:26:25.348208 kernel: xor: using function: arm64_neon (27070 MB/sec) Feb 13 19:26:25.404010 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:26:25.416594 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 19:26:25.422169 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:26:25.436512 systemd-udevd[461]: Using default interface naming scheme 'v255'. Feb 13 19:26:25.439856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:26:25.452176 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:26:25.469829 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Feb 13 19:26:25.503169 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:26:25.516133 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:26:25.564034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:26:25.572105 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:26:25.588842 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:26:25.591564 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:26:25.592867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:26:25.594111 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:26:25.604154 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:26:25.612642 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 19:26:25.618141 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:26:25.618248 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:26:25.618267 kernel: GPT:9289727 != 19775487 Feb 13 19:26:25.618277 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:26:25.618286 kernel: GPT:9289727 != 19775487 Feb 13 19:26:25.618297 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:26:25.618306 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:26:25.618659 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:26:25.621630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:26:25.634949 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (521) Feb 13 19:26:25.621849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:26:25.625000 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:26:25.626024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:26:25.626152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:26:25.626968 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:26:25.640760 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (518) Feb 13 19:26:25.640276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:26:25.648232 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:26:25.652751 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:26:25.653942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 19:26:25.668113 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:26:25.669066 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:26:25.674486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:26:25.688151 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:26:25.689740 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:26:25.694222 disk-uuid[549]: Primary Header is updated. Feb 13 19:26:25.694222 disk-uuid[549]: Secondary Entries is updated. Feb 13 19:26:25.694222 disk-uuid[549]: Secondary Header is updated. Feb 13 19:26:25.697037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:26:25.710040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:26:25.710271 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:26:25.713993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:26:26.714994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:26:26.715688 disk-uuid[550]: The operation has completed successfully. Feb 13 19:26:26.744623 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:26:26.744732 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:26:26.759152 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:26:26.762334 sh[572]: Success Feb 13 19:26:26.777994 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:26:26.812677 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:26:26.824465 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:26:26.826146 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:26:26.837119 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 19:26:26.837167 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:26:26.837178 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:26:26.838040 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:26:26.839229 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:26:26.842636 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:26:26.843898 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:26:26.854279 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:26:26.856536 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 19:26:26.865562 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:26:26.865609 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:26:26.865620 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:26:26.868083 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:26:26.877002 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:26:26.877030 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:26:26.885090 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:26:26.891139 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:26:26.954481 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:26:26.968180 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:26:26.991797 systemd-networkd[760]: lo: Link UP Feb 13 19:26:26.991809 systemd-networkd[760]: lo: Gained carrier Feb 13 19:26:26.992615 systemd-networkd[760]: Enumeration completed Feb 13 19:26:26.993128 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:26:26.993131 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:26:26.994045 systemd-networkd[760]: eth0: Link UP Feb 13 19:26:26.994049 systemd-networkd[760]: eth0: Gained carrier Feb 13 19:26:26.994056 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:26:26.995664 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:26:26.996726 systemd[1]: Reached target network.target - Network. Feb 13 19:26:27.010024 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:26:27.012604 ignition[667]: Ignition 2.19.0 Feb 13 19:26:27.012614 ignition[667]: Stage: fetch-offline Feb 13 19:26:27.012680 ignition[667]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:26:27.012692 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:26:27.012842 ignition[667]: parsed url from cmdline: "" Feb 13 19:26:27.012845 ignition[667]: no config URL provided Feb 13 19:26:27.012849 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:26:27.012856 ignition[667]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:26:27.012877 ignition[667]: op(1): [started] loading QEMU firmware config module Feb 13 19:26:27.012881 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:26:27.023649 ignition[667]: op(1): [finished] loading QEMU firmware config module Feb 13 19:26:27.030741 systemd-resolved[281]: Detected conflict on linux IN A 10.0.0.8 Feb 13 19:26:27.030757 systemd-resolved[281]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. 
Feb 13 19:26:27.064997 ignition[667]: parsing config with SHA512: daccc2485165c6a28855b668f9cb4f831b972a3b8a08096c029fbacd2501ad2317ed417cb4c2cc3ed55fa0c2d7d82b90b199628fca7d0d43d641197b3ee6075c Feb 13 19:26:27.070507 unknown[667]: fetched base config from "system" Feb 13 19:26:27.071146 ignition[667]: fetch-offline: fetch-offline passed Feb 13 19:26:27.070524 unknown[667]: fetched user config from "qemu" Feb 13 19:26:27.071234 ignition[667]: Ignition finished successfully Feb 13 19:26:27.072770 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:26:27.075426 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:26:27.084185 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:26:27.094626 ignition[771]: Ignition 2.19.0 Feb 13 19:26:27.094638 ignition[771]: Stage: kargs Feb 13 19:26:27.094817 ignition[771]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:26:27.094827 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:26:27.095753 ignition[771]: kargs: kargs passed Feb 13 19:26:27.095799 ignition[771]: Ignition finished successfully Feb 13 19:26:27.097934 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:26:27.108208 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:26:27.117524 ignition[780]: Ignition 2.19.0 Feb 13 19:26:27.117535 ignition[780]: Stage: disks Feb 13 19:26:27.117707 ignition[780]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:26:27.117716 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:26:27.119997 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:26:27.118663 ignition[780]: disks: disks passed Feb 13 19:26:27.121474 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:26:27.118708 ignition[780]: Ignition finished successfully Feb 13 19:26:27.122725 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:26:27.124829 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:26:27.125969 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:26:27.127453 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:26:27.138131 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:26:27.150063 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:26:27.154893 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:26:27.169083 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:26:27.214900 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:26:27.216342 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 19:26:27.216283 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:26:27.228074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:26:27.229933 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:26:27.231434 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Feb 13 19:26:27.231481 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:26:27.239189 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798) Feb 13 19:26:27.239213 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:26:27.239225 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:26:27.239235 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:26:27.231509 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:26:27.241447 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:26:27.238272 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:26:27.240942 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:26:27.243845 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:26:27.280089 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:26:27.284913 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:26:27.289272 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:26:27.293632 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:26:27.368351 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:26:27.377079 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:26:27.378523 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:26:27.383983 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:26:27.396590 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:26:27.400577 ignition[910]: INFO : Ignition 2.19.0 Feb 13 19:26:27.400577 ignition[910]: INFO : Stage: mount Feb 13 19:26:27.402963 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:26:27.402963 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:26:27.402963 ignition[910]: INFO : mount: mount passed Feb 13 19:26:27.402963 ignition[910]: INFO : Ignition finished successfully Feb 13 19:26:27.403396 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:26:27.415116 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:26:27.835969 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:26:27.848140 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:26:27.852995 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924) Feb 13 19:26:27.855415 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:26:27.855445 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:26:27.855455 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:26:27.856987 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:26:27.858316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:26:27.874072 ignition[941]: INFO : Ignition 2.19.0 Feb 13 19:26:27.874072 ignition[941]: INFO : Stage: files Feb 13 19:26:27.875452 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:26:27.875452 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:26:27.875452 ignition[941]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:26:27.878216 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:26:27.878216 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:26:27.878216 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:26:27.878216 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:26:27.878216 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:26:27.877936 unknown[941]: wrote ssh authorized keys file for user: core Feb 13 19:26:27.883609 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 19:26:27.883609 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 19:26:27.883609 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:26:27.883609 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 19:26:27.947403 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:26:28.335862 systemd-networkd[760]: eth0: Gained IPv6LL Feb 13 19:26:28.504876 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:26:28.504876 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:26:28.508264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 19:26:28.886070 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:26:29.152337 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:26:29.152337 ignition[941]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Feb 13 19:26:29.155676 ignition[941]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:26:29.175462 ignition[941]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:26:29.179391 ignition[941]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:26:29.181687 ignition[941]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:26:29.181687 ignition[941]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:26:29.181687 ignition[941]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:26:29.181687 ignition[941]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:26:29.181687 ignition[941]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:26:29.181687 ignition[941]: INFO : files: files passed Feb 13 19:26:29.181687 ignition[941]: INFO : Ignition finished successfully Feb 13 19:26:29.182259 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:26:29.194166 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:26:29.195609 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:26:29.198323 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:26:29.198415 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:26:29.203890 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:26:29.208059 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:26:29.208059 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:26:29.210752 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:26:29.210014 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:26:29.211981 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:26:29.225138 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:26:29.249329 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:26:29.250079 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:26:29.251408 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:26:29.252790 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:26:29.254482 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:26:29.267421 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:26:29.279867 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:26:29.282072 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:26:29.293670 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:26:29.294605 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:26:29.299597 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:26:29.300872 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:26:29.300993 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:26:29.302886 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:26:29.304411 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:26:29.305643 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:26:29.306939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:26:29.308434 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:26:29.309856 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:26:29.311297 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:26:29.312810 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:26:29.314240 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:26:29.315654 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:26:29.316832 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:26:29.316947 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:26:29.318891 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:26:29.320457 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:26:29.321961 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:26:29.322078 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:26:29.323695 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:26:29.323802 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:26:29.325910 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:26:29.326026 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:26:29.327494 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:26:29.328691 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:26:29.333620 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:26:29.334660 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:26:29.336236 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:26:29.337423 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:26:29.337510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:26:29.338685 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:26:29.338760 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:26:29.340005 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:26:29.340108 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:26:29.341443 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:26:29.341539 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:26:29.353246 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:26:29.354662 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:26:29.355693 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:26:29.355807 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:26:29.357285 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:26:29.357380 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:26:29.362639 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:26:29.364260 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 19:26:29.372842 ignition[997]: INFO : Ignition 2.19.0 Feb 13 19:26:29.372842 ignition[997]: INFO : Stage: umount Feb 13 19:26:29.374244 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:26:29.374244 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:26:29.374244 ignition[997]: INFO : umount: umount passed Feb 13 19:26:29.374244 ignition[997]: INFO : Ignition finished successfully Feb 13 19:26:29.373909 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:26:29.376027 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:26:29.376125 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:26:29.377640 systemd[1]: Stopped target network.target - Network. Feb 13 19:26:29.378435 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:26:29.378499 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:26:29.379775 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:26:29.379816 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:26:29.381010 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:26:29.381050 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:26:29.382325 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:26:29.382367 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:26:29.383887 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:26:29.385363 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:26:29.390755 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:26:29.392022 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:26:29.393722 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:26:29.393787 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:26:29.395048 systemd-networkd[760]: eth0: DHCPv6 lease lost Feb 13 19:26:29.396846 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:26:29.396953 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:26:29.398061 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:26:29.398092 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:26:29.407069 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:26:29.407737 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:26:29.407795 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:26:29.409229 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:26:29.409267 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:26:29.410662 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:26:29.410702 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:26:29.412424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:26:29.421548 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:26:29.421677 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:26:29.430080 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 13 19:26:29.430227 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:26:29.435486 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:26:29.435556 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:26:29.436433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:26:29.436463 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:26:29.437837 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:26:29.437877 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:26:29.439930 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:26:29.440001 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:26:29.441987 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:26:29.442026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:26:29.452139 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:26:29.453000 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:26:29.453059 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:26:29.454839 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:26:29.454876 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:26:29.456363 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:26:29.456399 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:26:29.458109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:26:29.458145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:26:29.459909 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:26:29.461273 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:26:29.462176 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:26:29.462249 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:26:29.464322 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:26:29.465747 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:26:29.465800 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:26:29.467957 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:26:29.477336 systemd[1]: Switching root. Feb 13 19:26:29.502623 systemd-journald[237]: Journal stopped Feb 13 19:26:30.245357 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Feb 13 19:26:30.245419 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:26:30.245435 kernel: SELinux: policy capability open_perms=1 Feb 13 19:26:30.245447 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:26:30.245459 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:26:30.245471 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:26:30.245482 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:26:30.245492 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:26:30.245501 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:26:30.245511 kernel: audit: type=1403 audit(1739474789.685:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:26:30.245521 systemd[1]: Successfully loaded SELinux policy in 33.182ms. Feb 13 19:26:30.245537 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.790ms. Feb 13 19:26:30.245551 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:26:30.245562 systemd[1]: Detected virtualization kvm. Feb 13 19:26:30.245572 systemd[1]: Detected architecture arm64. Feb 13 19:26:30.245582 systemd[1]: Detected first boot. Feb 13 19:26:30.245593 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:26:30.245603 zram_generator::config[1060]: No configuration found. Feb 13 19:26:30.245614 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:26:30.245632 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:26:30.245645 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:26:30.245656 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:26:30.245666 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:26:30.245677 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:26:30.245687 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:26:30.245698 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:26:30.245709 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:26:30.245720 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:26:30.245732 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:26:30.245742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:26:30.245753 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:26:30.245763 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:26:30.245774 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:26:30.245784 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:26:30.245795 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 19:26:30.245805 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:26:30.245816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:26:30.245828 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:26:30.245838 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:26:30.245848 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:26:30.245859 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:26:30.245869 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:26:30.245880 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:26:30.245890 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:26:30.245900 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:26:30.245912 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:26:30.245924 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:26:30.245934 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:26:30.245945 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:26:30.245955 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:26:30.245965 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:26:30.246000 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:26:30.246012 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:26:30.246023 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:26:30.246033 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:26:30.246046 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:26:30.246057 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:26:30.246068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:26:30.246079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:26:30.246089 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:26:30.246100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:26:30.246111 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:26:30.246122 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:26:30.246134 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:26:30.246144 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:26:30.246159 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:26:30.246170 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 19:26:30.246181 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Feb 13 19:26:30.246191 kernel: fuse: init (API version 7.39) Feb 13 19:26:30.246201 kernel: loop: module loaded Feb 13 19:26:30.246210 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:26:30.246221 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:26:30.246233 kernel: ACPI: bus type drm_connector registered Feb 13 19:26:30.246242 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:26:30.246253 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:26:30.246263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:26:30.246274 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:26:30.246284 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:26:30.246295 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:26:30.246311 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:26:30.246341 systemd-journald[1142]: Collecting audit messages is disabled. Feb 13 19:26:30.246366 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:26:30.246377 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:26:30.246388 systemd-journald[1142]: Journal started Feb 13 19:26:30.246409 systemd-journald[1142]: Runtime Journal (/run/log/journal/46d2f5a37e034824a7bd27fbafe25607) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:26:30.248622 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:26:30.249875 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:26:30.251099 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:26:30.252214 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:26:30.252375 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:26:30.253516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:26:30.253688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:26:30.254770 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:26:30.254922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:26:30.256053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:26:30.256204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:26:30.257367 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:26:30.257527 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:26:30.258600 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:26:30.258903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:26:30.260092 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:26:30.261243 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:26:30.262795 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:26:30.274590 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:26:30.288088 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
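systemd-journald sized its runtime journal under /run/log/journal at 5.9M used against a 47.3M cap; those caps are derived from the size of /run. If fixed limits were wanted instead, they would go in a journald.conf drop-in roughly like the sketch below (values illustrative, not taken from this host):

    # /etc/systemd/journald.conf.d/10-limits.conf
    # Illustrative values only.
    [Journal]
    RuntimeMaxUse=64M
    SystemMaxUse=200M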
Feb 13 19:26:30.290120 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:26:30.291000 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:26:30.295078 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:26:30.297001 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:26:30.297857 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:26:30.300051 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:26:30.300877 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:26:30.305721 systemd-journald[1142]: Time spent on flushing to /var/log/journal/46d2f5a37e034824a7bd27fbafe25607 is 15.862ms for 847 entries. Feb 13 19:26:30.305721 systemd-journald[1142]: System Journal (/var/log/journal/46d2f5a37e034824a7bd27fbafe25607) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:26:30.330286 systemd-journald[1142]: Received client request to flush runtime journal. Feb 13 19:26:30.305317 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:26:30.308061 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:26:30.310698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:26:30.311840 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:26:30.313282 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:26:30.325127 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:26:30.326375 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:26:30.327876 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:26:30.330965 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:26:30.332311 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:26:30.339611 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:26:30.343014 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Feb 13 19:26:30.343031 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Feb 13 19:26:30.347283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:26:30.355283 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:26:30.374462 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:26:30.390376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:26:30.401304 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Feb 13 19:26:30.401323 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Feb 13 19:26:30.405230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:26:30.747167 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Feb 13 19:26:30.762202 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:26:30.780344 systemd-udevd[1222]: Using default interface naming scheme 'v255'. Feb 13 19:26:30.793917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:26:30.804443 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:26:30.828266 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:26:30.830429 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Feb 13 19:26:30.864299 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1239) Feb 13 19:26:30.894183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:26:30.895433 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:26:30.940213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:26:30.950217 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:26:30.952616 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:26:30.970639 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:26:30.971849 systemd-networkd[1230]: lo: Link UP Feb 13 19:26:30.971856 systemd-networkd[1230]: lo: Gained carrier Feb 13 19:26:30.972609 systemd-networkd[1230]: Enumeration completed Feb 13 19:26:30.972811 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:26:30.973254 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:26:30.973258 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:26:30.973837 systemd-networkd[1230]: eth0: Link UP Feb 13 19:26:30.973841 systemd-networkd[1230]: eth0: Gained carrier Feb 13 19:26:30.973853 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:26:30.983174 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:26:30.990044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:26:30.993045 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:26:31.002536 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:26:31.004091 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:26:31.018266 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:26:31.022296 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:26:31.056716 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:26:31.057934 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:26:31.059086 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:26:31.059118 systemd[1]: Reached target local-fs.target - Local File Systems. 
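The networkd lines above show eth0 being matched by the stock /usr/lib/systemd/network/zz-default.network and acquiring 10.0.0.8/16 with gateway 10.0.0.1 over DHCPv4. A stripped-down .network unit of that shape is sketched below; it is not the verbatim Flatcar file (which carries additional Match conditions), just a minimal equivalent:

    # Simplified sketch of a catch-all DHCP .network unit, e.g.
    # /etc/systemd/network/50-dhcp.network -- not the shipped zz-default.network.
    [Match]
    Name=*

    [Network]
    DHCP=yes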
Feb 13 19:26:31.059848 systemd[1]: Reached target machines.target - Containers. Feb 13 19:26:31.061643 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:26:31.075145 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:26:31.077334 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:26:31.078284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:26:31.079207 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:26:31.082261 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:26:31.086548 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:26:31.088865 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:26:31.098325 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 19:26:31.104381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:26:31.114008 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:26:31.117547 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:26:31.118299 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:26:31.143114 kernel: loop1: detected capacity change from 0 to 114432 Feb 13 19:26:31.183045 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:26:31.224007 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 19:26:31.234000 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 19:26:31.245011 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 19:26:31.250782 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:26:31.251256 (sd-merge)[1291]: Merged extensions into '/usr'. Feb 13 19:26:31.255013 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:26:31.255028 systemd[1]: Reloading... Feb 13 19:26:31.299015 zram_generator::config[1319]: No configuration found. Feb 13 19:26:31.335223 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:26:31.403423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:26:31.446057 systemd[1]: Reloading finished in 190 ms. Feb 13 19:26:31.463063 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:26:31.464284 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:26:31.482182 systemd[1]: Starting ensure-sysext.service... Feb 13 19:26:31.484148 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:26:31.489303 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:26:31.489319 systemd[1]: Reloading... Feb 13 19:26:31.501659 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
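The (sd-merge) lines show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr; the kubernetes image is the kubernetes-v1.30.1-arm64.raw that Ignition linked into /etc/extensions earlier. For sysext to accept an image it must carry an extension-release file; a minimal sketch, with illustrative field values, is:

    # Inside the extension image:
    #   usr/lib/extension-release.d/extension-release.kubernetes
    # Illustrative values; ID=_any merges on any distribution, while a
    # stricter image would pin ID=flatcar plus SYSEXT_LEVEL or VERSION_ID.
    ID=_any
    ARCHITECTURE=arm64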
Feb 13 19:26:31.501918 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:26:31.502571 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:26:31.502812 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Feb 13 19:26:31.502867 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Feb 13 19:26:31.505180 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:26:31.505192 systemd-tmpfiles[1361]: Skipping /boot Feb 13 19:26:31.512701 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:26:31.512716 systemd-tmpfiles[1361]: Skipping /boot Feb 13 19:26:31.536139 zram_generator::config[1391]: No configuration found. Feb 13 19:26:31.628366 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:26:31.671569 systemd[1]: Reloading finished in 181 ms. Feb 13 19:26:31.686765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:26:31.709283 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:26:31.711682 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:26:31.714009 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:26:31.717236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:26:31.720085 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:26:31.734636 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:26:31.739507 systemd[1]: Finished ensure-sysext.service. Feb 13 19:26:31.744417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:26:31.758168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:26:31.763152 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:26:31.765217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:26:31.769029 augenrules[1461]: No rules Feb 13 19:26:31.769813 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:26:31.773208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:26:31.776634 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:26:31.778267 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:26:31.780371 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:26:31.781803 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:26:31.783174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:26:31.783331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:26:31.784829 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:26:31.784998 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Feb 13 19:26:31.786086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:26:31.786232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:26:31.787563 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:26:31.787793 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:26:31.794163 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:26:31.794253 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:26:31.804412 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:26:31.805238 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:26:31.817632 systemd-resolved[1436]: Positive Trust Anchors: Feb 13 19:26:31.820804 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:26:31.820844 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:26:31.825338 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:26:31.830781 systemd-resolved[1436]: Defaulting to hostname 'linux'. Feb 13 19:26:31.834422 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:26:31.835356 systemd[1]: Reached target network.target - Network. Feb 13 19:26:31.836016 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:26:31.860364 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:26:31.861362 systemd-timesyncd[1471]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:26:31.861409 systemd-timesyncd[1471]: Initial clock synchronization to Thu 2025-02-13 19:26:32.111204 UTC. Feb 13 19:26:31.861799 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:26:31.862816 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:26:31.864025 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:26:31.864994 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:26:31.865946 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:26:31.865993 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:26:31.866690 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:26:31.867691 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
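systemd-timesyncd reached 10.0.0.1:123, the DHCP-supplied time server, and stepped the clock to 19:26:32 UTC. Where static servers are preferred over the DHCP-provided one, they would be pinned in a timesyncd.conf drop-in along these lines (server names are placeholders):

    # /etc/systemd/timesyncd.conf.d/10-ntp.conf
    # Placeholder servers -- this host actually learned 10.0.0.1 from DHCP.
    [Time]
    NTP=0.pool.ntp.org 1.pool.ntp.org
    FallbackNTP=time.cloudflare.com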
Feb 13 19:26:31.868702 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:26:31.869814 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:26:31.871484 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:26:31.873844 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:26:31.876007 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:26:31.884080 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:26:31.884956 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:26:31.885671 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:26:31.886642 systemd[1]: System is tainted: cgroupsv1 Feb 13 19:26:31.886696 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:26:31.886725 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:26:31.888159 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:26:31.890130 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:26:31.891928 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:26:31.896169 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:26:31.897043 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:26:31.898291 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:26:31.902042 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:26:31.905135 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:26:31.908086 jq[1490]: false Feb 13 19:26:31.913219 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:26:31.916648 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:26:31.930062 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:26:31.933204 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 19:26:31.934037 extend-filesystems[1491]: Found loop3 Feb 13 19:26:31.934037 extend-filesystems[1491]: Found loop4 Feb 13 19:26:31.934037 extend-filesystems[1491]: Found loop5 Feb 13 19:26:31.934037 extend-filesystems[1491]: Found vda Feb 13 19:26:31.934037 extend-filesystems[1491]: Found vda1 Feb 13 19:26:31.934037 extend-filesystems[1491]: Found vda2 Feb 13 19:26:31.934037 extend-filesystems[1491]: Found vda3 Feb 13 19:26:31.934037 extend-filesystems[1491]: Found usr Feb 13 19:26:31.934037 extend-filesystems[1491]: Found vda4 Feb 13 19:26:31.934037 extend-filesystems[1491]: Found vda6 Feb 13 19:26:31.934037 extend-filesystems[1491]: Found vda7 Feb 13 19:26:31.934037 extend-filesystems[1491]: Found vda9 Feb 13 19:26:31.934037 extend-filesystems[1491]: Checking size of /dev/vda9 Feb 13 19:26:31.976126 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1227) Feb 13 19:26:31.976154 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:26:31.976167 extend-filesystems[1491]: Resized partition /dev/vda9 Feb 13 19:26:31.936090 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:26:31.951108 dbus-daemon[1489]: [system] SELinux support is enabled Feb 13 19:26:31.980552 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:26:31.942366 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:26:31.942603 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:26:31.988285 jq[1508]: true Feb 13 19:26:31.945342 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:26:31.945561 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:26:31.988647 tar[1513]: linux-arm64/helm Feb 13 19:26:31.955301 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:26:31.994503 jq[1522]: true Feb 13 19:26:31.959005 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:26:31.960112 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:26:31.983711 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:26:32.011032 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:26:32.014400 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:26:32.014449 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:26:32.015933 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:26:32.015976 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:26:32.044177 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:26:32.044177 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:26:32.044177 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Feb 13 19:26:32.054969 extend-filesystems[1491]: Resized filesystem in /dev/vda9 Feb 13 19:26:32.047700 systemd-logind[1501]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:26:32.057078 update_engine[1506]: I20250213 19:26:32.045372 1506 main.cc:92] Flatcar Update Engine starting Feb 13 19:26:32.057078 update_engine[1506]: I20250213 19:26:32.053090 1506 update_check_scheduler.cc:74] Next update check in 9m3s Feb 13 19:26:32.048303 systemd-logind[1501]: New seat seat0. Feb 13 19:26:32.051319 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:26:32.055653 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:26:32.055891 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:26:32.060250 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:26:32.062768 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:26:32.072345 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:26:32.092831 bash[1552]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:26:32.096197 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:26:32.098591 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:26:32.138510 locksmithd[1554]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:26:32.257940 containerd[1523]: time="2025-02-13T19:26:32.256410016Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:26:32.286774 containerd[1523]: time="2025-02-13T19:26:32.286650894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:32.288403 containerd[1523]: time="2025-02-13T19:26:32.288366671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:32.288583 containerd[1523]: time="2025-02-13T19:26:32.288566155Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:26:32.288727 containerd[1523]: time="2025-02-13T19:26:32.288711064Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:26:32.289262 containerd[1523]: time="2025-02-13T19:26:32.289240256Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:26:32.289444 containerd[1523]: time="2025-02-13T19:26:32.289365283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:32.289847 containerd[1523]: time="2025-02-13T19:26:32.289822453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:32.289940 containerd[1523]: time="2025-02-13T19:26:32.289925907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:26:32.290396 containerd[1523]: time="2025-02-13T19:26:32.290361874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:32.290634 containerd[1523]: time="2025-02-13T19:26:32.290524893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:32.290634 containerd[1523]: time="2025-02-13T19:26:32.290559625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:32.290634 containerd[1523]: time="2025-02-13T19:26:32.290571216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:32.291038 containerd[1523]: time="2025-02-13T19:26:32.290900388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:32.291545 containerd[1523]: time="2025-02-13T19:26:32.291525443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:32.291926 containerd[1523]: time="2025-02-13T19:26:32.291877550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:32.291926 containerd[1523]: time="2025-02-13T19:26:32.291899000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:26:32.292372 containerd[1523]: time="2025-02-13T19:26:32.292226769Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:26:32.292372 containerd[1523]: time="2025-02-13T19:26:32.292298956Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:26:32.296358 containerd[1523]: time="2025-02-13T19:26:32.295892169Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:26:32.296358 containerd[1523]: time="2025-02-13T19:26:32.295941916Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:26:32.296358 containerd[1523]: time="2025-02-13T19:26:32.295961716Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:26:32.296358 containerd[1523]: time="2025-02-13T19:26:32.295978793Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:26:32.296358 containerd[1523]: time="2025-02-13T19:26:32.296018640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:26:32.296358 containerd[1523]: time="2025-02-13T19:26:32.296185495Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:26:32.297578 containerd[1523]: time="2025-02-13T19:26:32.297542112Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 19:26:32.297755 containerd[1523]: time="2025-02-13T19:26:32.297710081Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:26:32.297755 containerd[1523]: time="2025-02-13T19:26:32.297737182Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:26:32.297755 containerd[1523]: time="2025-02-13T19:26:32.297751413Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:26:32.297827 containerd[1523]: time="2025-02-13T19:26:32.297767500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:26:32.297827 containerd[1523]: time="2025-02-13T19:26:32.297782556Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:26:32.297827 containerd[1523]: time="2025-02-13T19:26:32.297796829Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:26:32.297827 containerd[1523]: time="2025-02-13T19:26:32.297810771Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:26:32.297898 containerd[1523]: time="2025-02-13T19:26:32.297825621Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:26:32.297898 containerd[1523]: time="2025-02-13T19:26:32.297839811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:26:32.297898 containerd[1523]: time="2025-02-13T19:26:32.297852186Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:26:32.297898 containerd[1523]: time="2025-02-13T19:26:32.297865056Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:26:32.297898 containerd[1523]: time="2025-02-13T19:26:32.297886134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298004 containerd[1523]: time="2025-02-13T19:26:32.297901231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298004 containerd[1523]: time="2025-02-13T19:26:32.297913978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298004 containerd[1523]: time="2025-02-13T19:26:32.297926765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298004 containerd[1523]: time="2025-02-13T19:26:32.297939181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298004 containerd[1523]: time="2025-02-13T19:26:32.297979564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298112 containerd[1523]: time="2025-02-13T19:26:32.298016648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298112 containerd[1523]: time="2025-02-13T19:26:32.298034591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 19:26:32.298112 containerd[1523]: time="2025-02-13T19:26:32.298048121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298112 containerd[1523]: time="2025-02-13T19:26:32.298063466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298112 containerd[1523]: time="2025-02-13T19:26:32.298076295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298112 containerd[1523]: time="2025-02-13T19:26:32.298088505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298112 containerd[1523]: time="2025-02-13T19:26:32.298100096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298380 containerd[1523]: time="2025-02-13T19:26:32.298116431Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:26:32.298380 containerd[1523]: time="2025-02-13T19:26:32.298140397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298380 containerd[1523]: time="2025-02-13T19:26:32.298154133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298380 containerd[1523]: time="2025-02-13T19:26:32.298166384Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:26:32.298380 containerd[1523]: time="2025-02-13T19:26:32.298309561Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:26:32.298380 containerd[1523]: time="2025-02-13T19:26:32.298326721Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:26:32.298380 containerd[1523]: time="2025-02-13T19:26:32.298337982Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:26:32.298380 containerd[1523]: time="2025-02-13T19:26:32.298352626Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:26:32.298380 containerd[1523]: time="2025-02-13T19:26:32.298362072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:26:32.298537 containerd[1523]: time="2025-02-13T19:26:32.298392886Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:26:32.298537 containerd[1523]: time="2025-02-13T19:26:32.298403528Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:26:32.298537 containerd[1523]: time="2025-02-13T19:26:32.298415697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:26:32.298775 containerd[1523]: time="2025-02-13T19:26:32.298698628Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:26:32.298889 containerd[1523]: time="2025-02-13T19:26:32.298774568Z" level=info msg="Connect containerd service" Feb 13 19:26:32.299000 containerd[1523]: time="2025-02-13T19:26:32.298986674Z" level=info msg="using legacy CRI server" Feb 13 19:26:32.299039 containerd[1523]: time="2025-02-13T19:26:32.298998966Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:26:32.299149 containerd[1523]: time="2025-02-13T19:26:32.299131130Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:26:32.302890 containerd[1523]: time="2025-02-13T19:26:32.302846235Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 
19:26:32.303487 containerd[1523]: time="2025-02-13T19:26:32.303369734Z" level=info msg="Start subscribing containerd event" Feb 13 19:26:32.303487 containerd[1523]: time="2025-02-13T19:26:32.303440436Z" level=info msg="Start recovering state" Feb 13 19:26:32.303643 containerd[1523]: time="2025-02-13T19:26:32.303601475Z" level=info msg="Start event monitor" Feb 13 19:26:32.303707 containerd[1523]: time="2025-02-13T19:26:32.303694823Z" level=info msg="Start snapshots syncer" Feb 13 19:26:32.304174 containerd[1523]: time="2025-02-13T19:26:32.303743951Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:26:32.304174 containerd[1523]: time="2025-02-13T19:26:32.303805702Z" level=info msg="Start streaming server" Feb 13 19:26:32.304174 containerd[1523]: time="2025-02-13T19:26:32.303878961Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:26:32.304174 containerd[1523]: time="2025-02-13T19:26:32.303924500Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:26:32.304546 containerd[1523]: time="2025-02-13T19:26:32.304522290Z" level=info msg="containerd successfully booted in 0.049771s" Feb 13 19:26:32.304549 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:26:32.385012 tar[1513]: linux-arm64/LICENSE Feb 13 19:26:32.385012 tar[1513]: linux-arm64/README.md Feb 13 19:26:32.402866 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:26:32.687759 systemd-networkd[1230]: eth0: Gained IPv6LL Feb 13 19:26:32.690547 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:26:32.692292 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:26:32.702550 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:26:32.705256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:32.707557 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:26:32.729482 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:26:32.731351 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:26:32.731585 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:26:32.732829 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:26:32.795667 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:26:32.815193 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:26:32.826235 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:26:32.831243 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:26:32.831450 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:26:32.834374 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:26:32.847569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:26:32.850181 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:26:32.852045 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:26:32.853213 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:26:33.225833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:33.227408 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 19:26:33.228608 systemd[1]: Startup finished in 5.572s (kernel) + 3.576s (userspace) = 9.148s. Feb 13 19:26:33.229632 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:26:33.699017 kubelet[1625]: E0213 19:26:33.698874 1625 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:26:33.701328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:26:33.701559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:26:37.204190 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:26:37.214256 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:36760.service - OpenSSH per-connection server daemon (10.0.0.1:36760). Feb 13 19:26:37.269196 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 36760 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:26:37.272937 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:37.288275 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:26:37.298290 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:26:37.299801 systemd-logind[1501]: New session 1 of user core. Feb 13 19:26:37.309139 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:26:37.311604 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:26:37.319571 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:26:37.405576 systemd[1645]: Queued start job for default target default.target. Feb 13 19:26:37.405965 systemd[1645]: Created slice app.slice - User Application Slice. Feb 13 19:26:37.406011 systemd[1645]: Reached target paths.target - Paths. Feb 13 19:26:37.406026 systemd[1645]: Reached target timers.target - Timers. Feb 13 19:26:37.414121 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:26:37.420646 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:26:37.420716 systemd[1645]: Reached target sockets.target - Sockets. Feb 13 19:26:37.420729 systemd[1645]: Reached target basic.target - Basic System. Feb 13 19:26:37.420767 systemd[1645]: Reached target default.target - Main User Target. Feb 13 19:26:37.420792 systemd[1645]: Startup finished in 95ms. Feb 13 19:26:37.421405 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:26:37.423585 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:26:37.488336 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:36766.service - OpenSSH per-connection server daemon (10.0.0.1:36766). Feb 13 19:26:37.528095 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 36766 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:26:37.529473 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:37.534059 systemd-logind[1501]: New session 2 of user core. Feb 13 19:26:37.550294 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 19:26:37.606367 sshd[1657]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:37.618385 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:36776.service - OpenSSH per-connection server daemon (10.0.0.1:36776). Feb 13 19:26:37.619161 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:36766.service: Deactivated successfully. Feb 13 19:26:37.621630 systemd-logind[1501]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:26:37.621882 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:26:37.623678 systemd-logind[1501]: Removed session 2. Feb 13 19:26:37.648661 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 36776 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:26:37.649934 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:37.653972 systemd-logind[1501]: New session 3 of user core. Feb 13 19:26:37.665357 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:26:37.713897 sshd[1662]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:37.725237 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:36782.service - OpenSSH per-connection server daemon (10.0.0.1:36782). Feb 13 19:26:37.725912 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:36776.service: Deactivated successfully. Feb 13 19:26:37.727929 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:26:37.728048 systemd-logind[1501]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:26:37.729815 systemd-logind[1501]: Removed session 3. Feb 13 19:26:37.755047 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 36782 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:26:37.756020 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:37.760316 systemd-logind[1501]: New session 4 of user core. Feb 13 19:26:37.776306 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:26:37.829441 sshd[1670]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:37.840277 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:36786.service - OpenSSH per-connection server daemon (10.0.0.1:36786). Feb 13 19:26:37.840656 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:36782.service: Deactivated successfully. Feb 13 19:26:37.842466 systemd-logind[1501]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:26:37.842949 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:26:37.844449 systemd-logind[1501]: Removed session 4. Feb 13 19:26:37.869437 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 36786 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:26:37.870606 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:37.874172 systemd-logind[1501]: New session 5 of user core. Feb 13 19:26:37.891248 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:26:37.951178 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:26:37.951465 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:26:37.965754 sudo[1685]: pam_unix(sudo:session): session closed for user root Feb 13 19:26:37.967536 sshd[1678]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:37.979214 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:36794.service - OpenSSH per-connection server daemon (10.0.0.1:36794). 
Feb 13 19:26:37.979580 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:36786.service: Deactivated successfully. Feb 13 19:26:37.981850 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:26:37.982287 systemd-logind[1501]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:26:37.983196 systemd-logind[1501]: Removed session 5. Feb 13 19:26:38.008505 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 36794 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:26:38.009628 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:38.013022 systemd-logind[1501]: New session 6 of user core. Feb 13 19:26:38.020286 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:26:38.072123 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:26:38.072408 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:26:38.075405 sudo[1695]: pam_unix(sudo:session): session closed for user root Feb 13 19:26:38.080224 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:26:38.080510 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:26:38.100247 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:26:38.101663 auditctl[1698]: No rules Feb 13 19:26:38.102545 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:26:38.102797 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:26:38.104598 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:26:38.128447 augenrules[1717]: No rules Feb 13 19:26:38.129708 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:26:38.130847 sudo[1694]: pam_unix(sudo:session): session closed for user root Feb 13 19:26:38.132368 sshd[1687]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:38.140222 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:36800.service - OpenSSH per-connection server daemon (10.0.0.1:36800). Feb 13 19:26:38.140654 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:36794.service: Deactivated successfully. Feb 13 19:26:38.142043 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:26:38.142837 systemd-logind[1501]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:26:38.144036 systemd-logind[1501]: Removed session 6. Feb 13 19:26:38.171654 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 36800 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:26:38.172915 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:38.176458 systemd-logind[1501]: New session 7 of user core. Feb 13 19:26:38.189236 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:26:38.240321 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:26:38.240953 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:26:38.556306 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 19:26:38.556490 (dockerd)[1748]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:26:38.844818 dockerd[1748]: time="2025-02-13T19:26:38.843947592Z" level=info msg="Starting up" Feb 13 19:26:39.228765 dockerd[1748]: time="2025-02-13T19:26:39.228660670Z" level=info msg="Loading containers: start." Feb 13 19:26:39.311019 kernel: Initializing XFRM netlink socket Feb 13 19:26:39.376102 systemd-networkd[1230]: docker0: Link UP Feb 13 19:26:39.399416 dockerd[1748]: time="2025-02-13T19:26:39.399373631Z" level=info msg="Loading containers: done." Feb 13 19:26:39.411787 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1937084106-merged.mount: Deactivated successfully. Feb 13 19:26:39.414720 dockerd[1748]: time="2025-02-13T19:26:39.414670822Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:26:39.414838 dockerd[1748]: time="2025-02-13T19:26:39.414812580Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:26:39.414937 dockerd[1748]: time="2025-02-13T19:26:39.414915710Z" level=info msg="Daemon has completed initialization" Feb 13 19:26:39.448157 dockerd[1748]: time="2025-02-13T19:26:39.448017075Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:26:39.448266 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:26:40.158162 containerd[1523]: time="2025-02-13T19:26:40.158121808Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:26:41.062831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount325577418.mount: Deactivated successfully. 
Feb 13 19:26:42.579922 containerd[1523]: time="2025-02-13T19:26:42.579845405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:42.580388 containerd[1523]: time="2025-02-13T19:26:42.580355644Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 19:26:42.581263 containerd[1523]: time="2025-02-13T19:26:42.581217953Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:42.584170 containerd[1523]: time="2025-02-13T19:26:42.584128640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:42.585504 containerd[1523]: time="2025-02-13T19:26:42.585440131Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.427272497s" Feb 13 19:26:42.585504 containerd[1523]: time="2025-02-13T19:26:42.585475660Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:26:42.605582 containerd[1523]: time="2025-02-13T19:26:42.605467304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:26:43.951962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:26:43.961190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:44.056485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:44.060948 (kubelet)[1977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:26:44.107098 kubelet[1977]: E0213 19:26:44.107031 1977 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:26:44.110543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:26:44.110755 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:26:44.885435 containerd[1523]: time="2025-02-13T19:26:44.885385903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:44.886321 containerd[1523]: time="2025-02-13T19:26:44.885826820Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 19:26:44.887287 containerd[1523]: time="2025-02-13T19:26:44.887250965Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:44.891050 containerd[1523]: time="2025-02-13T19:26:44.890968008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:44.892319 containerd[1523]: time="2025-02-13T19:26:44.892149395Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.286631058s" Feb 13 19:26:44.892319 containerd[1523]: time="2025-02-13T19:26:44.892196127Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:26:44.912864 containerd[1523]: time="2025-02-13T19:26:44.912817916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:26:46.350427 containerd[1523]: time="2025-02-13T19:26:46.350373617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:46.351585 containerd[1523]: time="2025-02-13T19:26:46.351520234Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 19:26:46.352589 containerd[1523]: time="2025-02-13T19:26:46.352565928Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:46.356018 containerd[1523]: time="2025-02-13T19:26:46.355855982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:46.358207 containerd[1523]: time="2025-02-13T19:26:46.358148775Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.445285997s" Feb 13 19:26:46.358207 containerd[1523]: time="2025-02-13T19:26:46.358198011Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:26:46.378062 
containerd[1523]: time="2025-02-13T19:26:46.378022218Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:26:47.548004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260599650.mount: Deactivated successfully. Feb 13 19:26:47.949125 containerd[1523]: time="2025-02-13T19:26:47.948902466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:47.956209 containerd[1523]: time="2025-02-13T19:26:47.956137195Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:26:47.958727 containerd[1523]: time="2025-02-13T19:26:47.958679186Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:47.963005 containerd[1523]: time="2025-02-13T19:26:47.962942923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:47.963832 containerd[1523]: time="2025-02-13T19:26:47.963793534Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.585728478s" Feb 13 19:26:47.963869 containerd[1523]: time="2025-02-13T19:26:47.963830369Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:26:47.984916 containerd[1523]: time="2025-02-13T19:26:47.984869525Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:26:48.689768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628546997.mount: Deactivated successfully. 
Feb 13 19:26:49.321784 containerd[1523]: time="2025-02-13T19:26:49.321734855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:49.322807 containerd[1523]: time="2025-02-13T19:26:49.322565848Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:26:49.323649 containerd[1523]: time="2025-02-13T19:26:49.323574694Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:49.328837 containerd[1523]: time="2025-02-13T19:26:49.327205695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:49.328837 containerd[1523]: time="2025-02-13T19:26:49.328430635Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.343517859s" Feb 13 19:26:49.328837 containerd[1523]: time="2025-02-13T19:26:49.328464304Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:26:49.348843 containerd[1523]: time="2025-02-13T19:26:49.348802131Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:26:49.863811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207087205.mount: Deactivated successfully. 
Feb 13 19:26:49.867732 containerd[1523]: time="2025-02-13T19:26:49.867686594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:49.868762 containerd[1523]: time="2025-02-13T19:26:49.868724333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 19:26:49.869727 containerd[1523]: time="2025-02-13T19:26:49.869681893Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:49.873028 containerd[1523]: time="2025-02-13T19:26:49.872639327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:49.873347 containerd[1523]: time="2025-02-13T19:26:49.873198085Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 524.352173ms" Feb 13 19:26:49.873347 containerd[1523]: time="2025-02-13T19:26:49.873229064Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:26:49.892531 containerd[1523]: time="2025-02-13T19:26:49.892498334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:26:50.476892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171398434.mount: Deactivated successfully. Feb 13 19:26:53.152117 containerd[1523]: time="2025-02-13T19:26:53.152066226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:53.153650 containerd[1523]: time="2025-02-13T19:26:53.153613347Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 19:26:53.154611 containerd[1523]: time="2025-02-13T19:26:53.154577448Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:53.158357 containerd[1523]: time="2025-02-13T19:26:53.158314143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:53.160333 containerd[1523]: time="2025-02-13T19:26:53.160294162Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.267629867s" Feb 13 19:26:53.160397 containerd[1523]: time="2025-02-13T19:26:53.160335841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:26:54.360965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:26:54.370182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:54.547106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:54.551710 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:26:54.602480 kubelet[2215]: E0213 19:26:54.602383 2215 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:26:54.605151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:26:54.605351 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:26:57.352942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:57.362183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:57.378902 systemd[1]: Reloading requested from client PID 2232 ('systemctl') (unit session-7.scope)... Feb 13 19:26:57.379049 systemd[1]: Reloading... Feb 13 19:26:57.447014 zram_generator::config[2271]: No configuration found. Feb 13 19:26:57.557687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:26:57.607119 systemd[1]: Reloading finished in 227 ms. Feb 13 19:26:57.638466 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:26:57.638529 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:26:57.638784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:57.640652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:57.737563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:57.742280 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:26:57.781943 kubelet[2328]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:26:57.781943 kubelet[2328]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:26:57.781943 kubelet[2328]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:26:57.782855 kubelet[2328]: I0213 19:26:57.782797 2328 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:26:58.877800 kubelet[2328]: I0213 19:26:58.877753 2328 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:26:58.877800 kubelet[2328]: I0213 19:26:58.877789 2328 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:26:58.878232 kubelet[2328]: I0213 19:26:58.878008 2328 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:26:58.933420 kubelet[2328]: E0213 19:26:58.933373 2328 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:26:58.933724 kubelet[2328]: I0213 19:26:58.933657 2328 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:26:58.944563 kubelet[2328]: I0213 19:26:58.944532 2328 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:26:58.946189 kubelet[2328]: I0213 19:26:58.946127 2328 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:26:58.946360 kubelet[2328]: I0213 19:26:58.946182 2328 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:26:58.946452 kubelet[2328]: I0213 19:26:58.946426 2328 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:26:58.946452 kubelet[2328]: I0213 19:26:58.946437 2328 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:26:58.946713 kubelet[2328]: I0213 19:26:58.946689 2328 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
19:26:58.947929 kubelet[2328]: I0213 19:26:58.947906 2328 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:26:58.947929 kubelet[2328]: I0213 19:26:58.947928 2328 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:26:58.948771 kubelet[2328]: I0213 19:26:58.948478 2328 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:26:58.948899 kubelet[2328]: I0213 19:26:58.948884 2328 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:26:58.949844 kubelet[2328]: W0213 19:26:58.949751 2328 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:26:58.949844 kubelet[2328]: E0213 19:26:58.949823 2328 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:26:58.952185 kubelet[2328]: I0213 19:26:58.952166 2328 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:26:58.952798 kubelet[2328]: W0213 19:26:58.952186 2328 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:26:58.952798 kubelet[2328]: E0213 19:26:58.952374 2328 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:26:58.952798 kubelet[2328]: I0213 19:26:58.952742 2328 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:26:58.952798 kubelet[2328]: W0213 19:26:58.952785 2328 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:26:58.953807 kubelet[2328]: I0213 19:26:58.953786 2328 server.go:1264] "Started kubelet" Feb 13 19:26:58.954408 kubelet[2328]: I0213 19:26:58.954374 2328 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:26:58.954944 kubelet[2328]: I0213 19:26:58.954896 2328 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:26:58.955350 kubelet[2328]: I0213 19:26:58.955219 2328 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:26:58.955582 kubelet[2328]: I0213 19:26:58.955560 2328 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:26:58.957498 kubelet[2328]: I0213 19:26:58.957463 2328 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:26:58.958339 kubelet[2328]: E0213 19:26:58.957343 2328 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db21b779ca39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:26:58.953759289 +0000 UTC m=+1.208028417,LastTimestamp:2025-02-13 19:26:58.953759289 +0000 UTC m=+1.208028417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:26:58.958339 kubelet[2328]: I0213 19:26:58.958155 2328 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:26:58.958339 kubelet[2328]: I0213 19:26:58.958239 2328 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:26:58.959311 kubelet[2328]: I0213 19:26:58.959296 2328 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:26:58.959918 kubelet[2328]: W0213 19:26:58.959879 2328 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:26:58.960031 kubelet[2328]: E0213 19:26:58.960018 2328 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:26:58.960273 kubelet[2328]: E0213 19:26:58.960232 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms" Feb 13 19:26:58.960740 kubelet[2328]: I0213 19:26:58.960719 2328 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:26:58.960901 kubelet[2328]: I0213 19:26:58.960881 2328 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:26:58.961161 kubelet[2328]: E0213 19:26:58.961106 2328 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:26:58.963390 kubelet[2328]: I0213 19:26:58.963355 2328 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:26:58.973694 kubelet[2328]: I0213 19:26:58.973642 2328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:26:58.974906 kubelet[2328]: I0213 19:26:58.974866 2328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:26:58.975165 kubelet[2328]: I0213 19:26:58.975148 2328 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:26:58.975215 kubelet[2328]: I0213 19:26:58.975172 2328 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:26:58.975258 kubelet[2328]: E0213 19:26:58.975235 2328 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:26:58.975805 kubelet[2328]: W0213 19:26:58.975748 2328 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:26:58.975864 kubelet[2328]: E0213 19:26:58.975806 2328 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:26:58.981488 kubelet[2328]: I0213 19:26:58.981464 2328 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:26:58.981608 kubelet[2328]: I0213 19:26:58.981595 2328 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:26:58.981663 kubelet[2328]: I0213 19:26:58.981655 2328 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:26:58.983698 kubelet[2328]: I0213 19:26:58.983674 2328 policy_none.go:49] "None policy: Start" Feb 13 19:26:58.984499 kubelet[2328]: I0213 19:26:58.984476 2328 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:26:58.984618 kubelet[2328]: I0213 19:26:58.984533 2328 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:26:58.990664 kubelet[2328]: I0213 19:26:58.990171 2328 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:26:58.990664 kubelet[2328]: I0213 19:26:58.990365 2328 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:26:58.990664 kubelet[2328]: I0213 19:26:58.990473 2328 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:26:58.991889 kubelet[2328]: E0213 19:26:58.991850 2328 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:26:59.059235 kubelet[2328]: I0213 19:26:59.059195 2328 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:26:59.059744 kubelet[2328]: E0213 19:26:59.059705 2328 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 19:26:59.076104 kubelet[2328]: I0213 19:26:59.076031 2328 topology_manager.go:215] "Topology Admit Handler" 
podUID="b1e55f972deabafdf582be3ee25a3ded" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:26:59.077425 kubelet[2328]: I0213 19:26:59.077366 2328 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:26:59.078313 kubelet[2328]: I0213 19:26:59.078244 2328 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:26:59.160359 kubelet[2328]: I0213 19:26:59.160232 2328 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1e55f972deabafdf582be3ee25a3ded-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1e55f972deabafdf582be3ee25a3ded\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:26:59.160359 kubelet[2328]: I0213 19:26:59.160304 2328 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:26:59.160359 kubelet[2328]: I0213 19:26:59.160327 2328 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:26:59.160359 kubelet[2328]: I0213 19:26:59.160350 2328 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1e55f972deabafdf582be3ee25a3ded-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1e55f972deabafdf582be3ee25a3ded\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:26:59.160359 kubelet[2328]: I0213 19:26:59.160364 2328 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:26:59.160557 kubelet[2328]: I0213 19:26:59.160380 2328 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1e55f972deabafdf582be3ee25a3ded-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1e55f972deabafdf582be3ee25a3ded\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:26:59.160557 kubelet[2328]: I0213 19:26:59.160397 2328 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:26:59.160557 kubelet[2328]: I0213 19:26:59.160411 2328 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:26:59.160557 kubelet[2328]: I0213 19:26:59.160425 2328 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:26:59.161280 kubelet[2328]: E0213 19:26:59.161216 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms" Feb 13 19:26:59.261603 kubelet[2328]: I0213 19:26:59.261542 2328 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:26:59.261872 kubelet[2328]: E0213 19:26:59.261850 2328 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 19:26:59.388750 kubelet[2328]: E0213 19:26:59.388707 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:26:59.389423 containerd[1523]: time="2025-02-13T19:26:59.389372587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1e55f972deabafdf582be3ee25a3ded,Namespace:kube-system,Attempt:0,}" Feb 13 19:26:59.391559 kubelet[2328]: E0213 19:26:59.391520 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:26:59.391715 kubelet[2328]: E0213 19:26:59.391684 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:26:59.391910 containerd[1523]: time="2025-02-13T19:26:59.391872628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:26:59.392169 containerd[1523]: time="2025-02-13T19:26:59.392138493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:26:59.561778 kubelet[2328]: E0213 19:26:59.561649 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms" Feb 13 19:26:59.663148 kubelet[2328]: I0213 19:26:59.663106 2328 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:26:59.663511 kubelet[2328]: E0213 19:26:59.663471 2328 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 19:26:59.823876 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount299409294.mount: Deactivated successfully. Feb 13 19:26:59.829298 containerd[1523]: time="2025-02-13T19:26:59.829254352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:59.829843 containerd[1523]: time="2025-02-13T19:26:59.829809863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:26:59.830662 containerd[1523]: time="2025-02-13T19:26:59.830556577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:59.831725 containerd[1523]: time="2025-02-13T19:26:59.831691860Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:59.832317 containerd[1523]: time="2025-02-13T19:26:59.832288766Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:26:59.833035 containerd[1523]: time="2025-02-13T19:26:59.832990562Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:26:59.833524 containerd[1523]: time="2025-02-13T19:26:59.833491146Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:59.840771 containerd[1523]: time="2025-02-13T19:26:59.840714234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:59.841648 containerd[1523]: time="2025-02-13T19:26:59.841615278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 449.674473ms" Feb 13 19:26:59.842435 containerd[1523]: time="2025-02-13T19:26:59.842297537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 452.840239ms" Feb 13 19:26:59.845008 containerd[1523]: time="2025-02-13T19:26:59.844966761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 452.774623ms" Feb 13 19:27:00.005963 containerd[1523]: time="2025-02-13T19:27:00.005856332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:00.006137 containerd[1523]: time="2025-02-13T19:27:00.005920459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:00.006137 containerd[1523]: time="2025-02-13T19:27:00.005981305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:00.006137 containerd[1523]: time="2025-02-13T19:27:00.006072572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:00.006699 containerd[1523]: time="2025-02-13T19:27:00.006531433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:00.006699 containerd[1523]: time="2025-02-13T19:27:00.006585793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:00.006699 containerd[1523]: time="2025-02-13T19:27:00.006609371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:00.006834 containerd[1523]: time="2025-02-13T19:27:00.006696235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:00.007459 containerd[1523]: time="2025-02-13T19:27:00.007234675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:00.007459 containerd[1523]: time="2025-02-13T19:27:00.007287714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:00.008336 containerd[1523]: time="2025-02-13T19:27:00.007435424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:00.009146 containerd[1523]: time="2025-02-13T19:27:00.008395497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:00.011548 kubelet[2328]: W0213 19:27:00.011494 2328 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:27:00.013145 kubelet[2328]: E0213 19:27:00.013108 2328 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:27:00.063305 containerd[1523]: time="2025-02-13T19:27:00.063249539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"0869aba34db704ef748a7e8ef9c1e9e311684ac003ca8fb15e227a9040e456d6\"" Feb 13 19:27:00.067438 kubelet[2328]: E0213 19:27:00.067396 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:00.069024 containerd[1523]: time="2025-02-13T19:27:00.068092614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"6284af3b906d8f68898959ca67e370a1f73153976079af55cb7856121c2b73c7\"" Feb 13 19:27:00.069920 kubelet[2328]: E0213 19:27:00.069893 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:00.070676 containerd[1523]: time="2025-02-13T19:27:00.070643027Z" level=info msg="CreateContainer within sandbox \"0869aba34db704ef748a7e8ef9c1e9e311684ac003ca8fb15e227a9040e456d6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:27:00.072139 containerd[1523]: time="2025-02-13T19:27:00.072105313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1e55f972deabafdf582be3ee25a3ded,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e8bb073d567b0703fc5779504341e62c2af6655b2133bcadc9d7ee961ffebfe\"" Feb 13 19:27:00.072712 containerd[1523]: time="2025-02-13T19:27:00.072510494Z" level=info msg="CreateContainer within sandbox \"6284af3b906d8f68898959ca67e370a1f73153976079af55cb7856121c2b73c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:27:00.073296 kubelet[2328]: E0213 19:27:00.073266 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:00.075450 containerd[1523]: time="2025-02-13T19:27:00.075350042Z" level=info msg="CreateContainer within sandbox \"0e8bb073d567b0703fc5779504341e62c2af6655b2133bcadc9d7ee961ffebfe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:27:00.097586 containerd[1523]: time="2025-02-13T19:27:00.097480951Z" level=info msg="CreateContainer within sandbox \"6284af3b906d8f68898959ca67e370a1f73153976079af55cb7856121c2b73c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cfed9d62c24dde47d15a7b070d1e189bddf93d0fc31d9613aad8745af3e9539a\"" Feb 13 19:27:00.098257 containerd[1523]: 
time="2025-02-13T19:27:00.098227385Z" level=info msg="StartContainer for \"cfed9d62c24dde47d15a7b070d1e189bddf93d0fc31d9613aad8745af3e9539a\"" Feb 13 19:27:00.102743 containerd[1523]: time="2025-02-13T19:27:00.102692180Z" level=info msg="CreateContainer within sandbox \"0869aba34db704ef748a7e8ef9c1e9e311684ac003ca8fb15e227a9040e456d6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c2fab2adf0280742f766ad0c844a45a6dd84faa451cfdd896bfbaca31b96eec\"" Feb 13 19:27:00.103370 containerd[1523]: time="2025-02-13T19:27:00.103345345Z" level=info msg="StartContainer for \"2c2fab2adf0280742f766ad0c844a45a6dd84faa451cfdd896bfbaca31b96eec\"" Feb 13 19:27:00.104071 containerd[1523]: time="2025-02-13T19:27:00.103961322Z" level=info msg="CreateContainer within sandbox \"0e8bb073d567b0703fc5779504341e62c2af6655b2133bcadc9d7ee961ffebfe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6ff67b009202795f9022ffa92cbd1aa1e7c629ab2fa9a1a0daff2769abece716\"" Feb 13 19:27:00.104513 containerd[1523]: time="2025-02-13T19:27:00.104461693Z" level=info msg="StartContainer for \"6ff67b009202795f9022ffa92cbd1aa1e7c629ab2fa9a1a0daff2769abece716\"" Feb 13 19:27:00.118629 kubelet[2328]: W0213 19:27:00.118590 2328 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:27:00.118629 kubelet[2328]: E0213 19:27:00.118632 2328 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:27:00.161258 containerd[1523]: time="2025-02-13T19:27:00.161197332Z" level=info msg="StartContainer for \"6ff67b009202795f9022ffa92cbd1aa1e7c629ab2fa9a1a0daff2769abece716\" returns successfully" Feb 13 19:27:00.165610 containerd[1523]: time="2025-02-13T19:27:00.164874622Z" level=info msg="StartContainer for \"2c2fab2adf0280742f766ad0c844a45a6dd84faa451cfdd896bfbaca31b96eec\" returns successfully" Feb 13 19:27:00.165610 containerd[1523]: time="2025-02-13T19:27:00.164942632Z" level=info msg="StartContainer for \"cfed9d62c24dde47d15a7b070d1e189bddf93d0fc31d9613aad8745af3e9539a\" returns successfully" Feb 13 19:27:00.332333 kubelet[2328]: W0213 19:27:00.332121 2328 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:27:00.332333 kubelet[2328]: E0213 19:27:00.332194 2328 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:27:00.362371 kubelet[2328]: E0213 19:27:00.362310 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="1.6s" Feb 13 19:27:00.401781 kubelet[2328]: W0213 19:27:00.401687 2328 reflector.go:547] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:27:00.401781 kubelet[2328]: E0213 19:27:00.401760 2328 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 19:27:00.465576 kubelet[2328]: I0213 19:27:00.465542 2328 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:27:00.986550 kubelet[2328]: E0213 19:27:00.986512 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:00.992598 kubelet[2328]: E0213 19:27:00.992574 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:00.994650 kubelet[2328]: E0213 19:27:00.994626 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:02.011965 kubelet[2328]: E0213 19:27:02.011923 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:02.061421 kubelet[2328]: E0213 19:27:02.061370 2328 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:27:02.217338 kubelet[2328]: I0213 19:27:02.217288 2328 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:27:02.228927 kubelet[2328]: E0213 19:27:02.227905 2328 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:27:02.328361 kubelet[2328]: E0213 19:27:02.327989 2328 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:27:02.429161 kubelet[2328]: E0213 19:27:02.429113 2328 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:27:02.529820 kubelet[2328]: E0213 19:27:02.529772 2328 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:27:02.630323 kubelet[2328]: E0213 19:27:02.630286 2328 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:27:02.690438 kubelet[2328]: E0213 19:27:02.690365 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:02.731347 kubelet[2328]: E0213 19:27:02.731297 2328 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:27:02.951060 kubelet[2328]: I0213 19:27:02.950411 2328 apiserver.go:52] "Watching apiserver" Feb 13 19:27:02.959060 kubelet[2328]: I0213 19:27:02.959025 2328 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:27:04.203194 systemd[1]: Reloading requested from client PID 2601 ('systemctl') (unit session-7.scope)... 
Feb 13 19:27:04.203209 systemd[1]: Reloading... Feb 13 19:27:04.263023 zram_generator::config[2640]: No configuration found. Feb 13 19:27:04.362133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:27:04.421426 systemd[1]: Reloading finished in 217 ms. Feb 13 19:27:04.449466 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:27:04.463836 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:27:04.464174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:27:04.472282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:27:04.564293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:27:04.569962 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:27:04.615345 kubelet[2692]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:27:04.615345 kubelet[2692]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:27:04.615345 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:27:04.615720 kubelet[2692]: I0213 19:27:04.615385 2692 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:27:04.619755 kubelet[2692]: I0213 19:27:04.619699 2692 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:27:04.619755 kubelet[2692]: I0213 19:27:04.619730 2692 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:27:04.619922 kubelet[2692]: I0213 19:27:04.619906 2692 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:27:04.621407 kubelet[2692]: I0213 19:27:04.621381 2692 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:27:04.622822 kubelet[2692]: I0213 19:27:04.622785 2692 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:27:04.629594 kubelet[2692]: I0213 19:27:04.628051 2692 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:27:04.629594 kubelet[2692]: I0213 19:27:04.628499 2692 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:27:04.629594 kubelet[2692]: I0213 19:27:04.628530 2692 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:27:04.629594 kubelet[2692]: I0213 19:27:04.628819 2692 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:27:04.629815 kubelet[2692]: I0213 19:27:04.628830 2692 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:27:04.629815 kubelet[2692]: I0213 19:27:04.628868 2692 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:27:04.629815 kubelet[2692]: I0213 19:27:04.629001 2692 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:27:04.629815 kubelet[2692]: I0213 19:27:04.629015 2692 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:27:04.629815 kubelet[2692]: I0213 19:27:04.629048 2692 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:27:04.629815 kubelet[2692]: I0213 19:27:04.629064 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:27:04.629980 kubelet[2692]: I0213 19:27:04.629943 2692 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:27:04.630717 kubelet[2692]: I0213 19:27:04.630681 2692 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:27:04.634285 kubelet[2692]: I0213 19:27:04.634257 2692 server.go:1264] "Started kubelet" Feb 13 19:27:04.634910 kubelet[2692]: I0213 19:27:04.634857 2692 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:27:04.635446 kubelet[2692]: I0213 19:27:04.635159 2692 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 
19:27:04.635446 kubelet[2692]: I0213 19:27:04.635202 2692 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:27:04.636408 kubelet[2692]: I0213 19:27:04.636376 2692 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:27:04.644613 kubelet[2692]: E0213 19:27:04.644578 2692 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:27:04.644947 kubelet[2692]: I0213 19:27:04.644926 2692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:27:04.646631 kubelet[2692]: I0213 19:27:04.646609 2692 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:27:04.647240 kubelet[2692]: I0213 19:27:04.647226 2692 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:27:04.647575 kubelet[2692]: I0213 19:27:04.647557 2692 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:27:04.647802 kubelet[2692]: I0213 19:27:04.647782 2692 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:27:04.648038 kubelet[2692]: I0213 19:27:04.648017 2692 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:27:04.659877 kubelet[2692]: I0213 19:27:04.659381 2692 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:27:04.673544 kubelet[2692]: I0213 19:27:04.673454 2692 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:27:04.674345 kubelet[2692]: I0213 19:27:04.674315 2692 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:27:04.674345 kubelet[2692]: I0213 19:27:04.674351 2692 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:27:04.674445 kubelet[2692]: I0213 19:27:04.674379 2692 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:27:04.674445 kubelet[2692]: E0213 19:27:04.674421 2692 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:27:04.714580 kubelet[2692]: I0213 19:27:04.714488 2692 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:27:04.714580 kubelet[2692]: I0213 19:27:04.714507 2692 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:27:04.714580 kubelet[2692]: I0213 19:27:04.714528 2692 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:27:04.714718 kubelet[2692]: I0213 19:27:04.714672 2692 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:27:04.714718 kubelet[2692]: I0213 19:27:04.714684 2692 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:27:04.714718 kubelet[2692]: I0213 19:27:04.714704 2692 policy_none.go:49] "None policy: Start" Feb 13 19:27:04.716202 kubelet[2692]: I0213 19:27:04.716165 2692 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:27:04.716202 kubelet[2692]: I0213 19:27:04.716198 2692 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:27:04.716393 kubelet[2692]: I0213 19:27:04.716356 2692 state_mem.go:75] "Updated machine memory state" Feb 13 19:27:04.717467 kubelet[2692]: I0213 19:27:04.717440 2692 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:27:04.717878 
kubelet[2692]: I0213 19:27:04.717606 2692 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:27:04.718133 kubelet[2692]: I0213 19:27:04.718103 2692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:27:04.750716 kubelet[2692]: I0213 19:27:04.750678 2692 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:27:04.756713 kubelet[2692]: I0213 19:27:04.756632 2692 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:27:04.756713 kubelet[2692]: I0213 19:27:04.756718 2692 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:27:04.774760 kubelet[2692]: I0213 19:27:04.774714 2692 topology_manager.go:215] "Topology Admit Handler" podUID="b1e55f972deabafdf582be3ee25a3ded" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:27:04.774886 kubelet[2692]: I0213 19:27:04.774840 2692 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:27:04.774886 kubelet[2692]: I0213 19:27:04.774880 2692 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:27:04.849163 kubelet[2692]: I0213 19:27:04.849113 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1e55f972deabafdf582be3ee25a3ded-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1e55f972deabafdf582be3ee25a3ded\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:27:04.849163 kubelet[2692]: I0213 19:27:04.849157 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:27:04.849163 kubelet[2692]: I0213 19:27:04.849179 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:27:04.849348 kubelet[2692]: I0213 19:27:04.849197 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:27:04.849348 kubelet[2692]: I0213 19:27:04.849217 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:27:04.849348 kubelet[2692]: I0213 19:27:04.849233 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b1e55f972deabafdf582be3ee25a3ded-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1e55f972deabafdf582be3ee25a3ded\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:27:04.849348 kubelet[2692]: I0213 19:27:04.849249 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1e55f972deabafdf582be3ee25a3ded-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1e55f972deabafdf582be3ee25a3ded\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:27:04.849348 kubelet[2692]: I0213 19:27:04.849294 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:27:04.849452 kubelet[2692]: I0213 19:27:04.849335 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:27:05.096933 kubelet[2692]: E0213 19:27:05.096710 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:05.096933 kubelet[2692]: E0213 19:27:05.096772 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:05.096933 kubelet[2692]: E0213 19:27:05.096923 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:05.630998 kubelet[2692]: I0213 19:27:05.630948 2692 apiserver.go:52] "Watching apiserver" Feb 13 19:27:05.648512 kubelet[2692]: I0213 19:27:05.648431 2692 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:27:05.691343 kubelet[2692]: E0213 19:27:05.690874 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:05.691343 kubelet[2692]: E0213 19:27:05.691260 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:05.699666 kubelet[2692]: E0213 19:27:05.699208 2692 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:27:05.699908 kubelet[2692]: E0213 19:27:05.699873 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:05.710747 kubelet[2692]: I0213 19:27:05.710665 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.710647658 podStartE2EDuration="1.710647658s" 
podCreationTimestamp="2025-02-13 19:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:27:05.7104889 +0000 UTC m=+1.137361486" watchObservedRunningTime="2025-02-13 19:27:05.710647658 +0000 UTC m=+1.137520244" Feb 13 19:27:05.731226 kubelet[2692]: I0213 19:27:05.731128 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.731109536 podStartE2EDuration="1.731109536s" podCreationTimestamp="2025-02-13 19:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:27:05.721090661 +0000 UTC m=+1.147963247" watchObservedRunningTime="2025-02-13 19:27:05.731109536 +0000 UTC m=+1.157982122" Feb 13 19:27:05.742376 kubelet[2692]: I0213 19:27:05.742260 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.742242997 podStartE2EDuration="1.742242997s" podCreationTimestamp="2025-02-13 19:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:27:05.731839734 +0000 UTC m=+1.158712320" watchObservedRunningTime="2025-02-13 19:27:05.742242997 +0000 UTC m=+1.169115583" Feb 13 19:27:06.693781 kubelet[2692]: E0213 19:27:06.693734 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:07.694203 kubelet[2692]: E0213 19:27:07.694156 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:09.388220 sudo[1730]: pam_unix(sudo:session): session closed for user root Feb 13 19:27:09.390440 sshd[1723]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:09.395431 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:36800.service: Deactivated successfully. Feb 13 19:27:09.397449 systemd-logind[1501]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:27:09.397535 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:27:09.398577 systemd-logind[1501]: Removed session 7. 
Feb 13 19:27:11.280503 kubelet[2692]: E0213 19:27:11.280470 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:11.497607 kubelet[2692]: E0213 19:27:11.496906 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:11.700806 kubelet[2692]: E0213 19:27:11.700102 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:11.701376 kubelet[2692]: E0213 19:27:11.701333 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:15.923704 kubelet[2692]: E0213 19:27:15.923551 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:16.708209 kubelet[2692]: E0213 19:27:16.708173 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:17.088556 update_engine[1506]: I20250213 19:27:17.088390 1506 update_attempter.cc:509] Updating boot flags... Feb 13 19:27:17.117056 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2786) Feb 13 19:27:17.145010 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2787) Feb 13 19:27:20.089518 kubelet[2692]: I0213 19:27:20.089452 2692 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:27:20.107556 containerd[1523]: time="2025-02-13T19:27:20.107481701Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:27:20.108638 kubelet[2692]: I0213 19:27:20.107838 2692 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:27:21.068410 kubelet[2692]: I0213 19:27:21.067642 2692 topology_manager.go:215] "Topology Admit Handler" podUID="b4336139-c2b6-4ba6-992c-0c6d27da9ad7" podNamespace="kube-system" podName="kube-proxy-xwqd5" Feb 13 19:27:21.168322 kubelet[2692]: I0213 19:27:21.168033 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4336139-c2b6-4ba6-992c-0c6d27da9ad7-xtables-lock\") pod \"kube-proxy-xwqd5\" (UID: \"b4336139-c2b6-4ba6-992c-0c6d27da9ad7\") " pod="kube-system/kube-proxy-xwqd5" Feb 13 19:27:21.168322 kubelet[2692]: I0213 19:27:21.168076 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tng8\" (UniqueName: \"kubernetes.io/projected/b4336139-c2b6-4ba6-992c-0c6d27da9ad7-kube-api-access-7tng8\") pod \"kube-proxy-xwqd5\" (UID: \"b4336139-c2b6-4ba6-992c-0c6d27da9ad7\") " pod="kube-system/kube-proxy-xwqd5" Feb 13 19:27:21.168322 kubelet[2692]: I0213 19:27:21.168098 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4336139-c2b6-4ba6-992c-0c6d27da9ad7-kube-proxy\") pod \"kube-proxy-xwqd5\" (UID: \"b4336139-c2b6-4ba6-992c-0c6d27da9ad7\") " pod="kube-system/kube-proxy-xwqd5" Feb 13 19:27:21.168322 kubelet[2692]: I0213 19:27:21.168114 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4336139-c2b6-4ba6-992c-0c6d27da9ad7-lib-modules\") pod \"kube-proxy-xwqd5\" (UID: \"b4336139-c2b6-4ba6-992c-0c6d27da9ad7\") " pod="kube-system/kube-proxy-xwqd5" Feb 13 19:27:21.365427 kubelet[2692]: I0213 19:27:21.365351 2692 topology_manager.go:215] "Topology Admit Handler" podUID="84f686c6-5f81-40e0-b307-ebfa6b9eb1d7" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-jn4m6" Feb 13 19:27:21.471034 kubelet[2692]: I0213 19:27:21.470955 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84f686c6-5f81-40e0-b307-ebfa6b9eb1d7-var-lib-calico\") pod \"tigera-operator-7bc55997bb-jn4m6\" (UID: \"84f686c6-5f81-40e0-b307-ebfa6b9eb1d7\") " pod="tigera-operator/tigera-operator-7bc55997bb-jn4m6" Feb 13 19:27:21.471152 kubelet[2692]: I0213 19:27:21.471063 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbn2v\" (UniqueName: \"kubernetes.io/projected/84f686c6-5f81-40e0-b307-ebfa6b9eb1d7-kube-api-access-sbn2v\") pod \"tigera-operator-7bc55997bb-jn4m6\" (UID: \"84f686c6-5f81-40e0-b307-ebfa6b9eb1d7\") " pod="tigera-operator/tigera-operator-7bc55997bb-jn4m6" Feb 13 19:27:21.670795 kubelet[2692]: E0213 19:27:21.670331 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:21.671033 containerd[1523]: time="2025-02-13T19:27:21.670997516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-jn4m6,Uid:84f686c6-5f81-40e0-b307-ebfa6b9eb1d7,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:27:21.671305 containerd[1523]: time="2025-02-13T19:27:21.671141346Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwqd5,Uid:b4336139-c2b6-4ba6-992c-0c6d27da9ad7,Namespace:kube-system,Attempt:0,}" Feb 13 19:27:21.696845 containerd[1523]: time="2025-02-13T19:27:21.696560800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:21.696845 containerd[1523]: time="2025-02-13T19:27:21.696606369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:21.696845 containerd[1523]: time="2025-02-13T19:27:21.696617252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:21.696845 containerd[1523]: time="2025-02-13T19:27:21.696697348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:21.701128 containerd[1523]: time="2025-02-13T19:27:21.700238452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:21.701128 containerd[1523]: time="2025-02-13T19:27:21.701081548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:21.701128 containerd[1523]: time="2025-02-13T19:27:21.701093311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:21.701457 containerd[1523]: time="2025-02-13T19:27:21.701181929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:21.748429 containerd[1523]: time="2025-02-13T19:27:21.748365070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwqd5,Uid:b4336139-c2b6-4ba6-992c-0c6d27da9ad7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3a368b5718419c28570d13899ae77abb90bc9f008b5398a79968a03b13c82d1\"" Feb 13 19:27:21.751738 kubelet[2692]: E0213 19:27:21.751661 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:21.756239 containerd[1523]: time="2025-02-13T19:27:21.756194033Z" level=info msg="CreateContainer within sandbox \"b3a368b5718419c28570d13899ae77abb90bc9f008b5398a79968a03b13c82d1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:27:21.757733 containerd[1523]: time="2025-02-13T19:27:21.757703589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-jn4m6,Uid:84f686c6-5f81-40e0-b307-ebfa6b9eb1d7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"87eaa38182cda7bad4e7ffbe95cc119e4350d453ed8b1fde9c5c9b9e2c2dc87c\"" Feb 13 19:27:21.760132 containerd[1523]: time="2025-02-13T19:27:21.760093611Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:27:21.774968 containerd[1523]: time="2025-02-13T19:27:21.774844826Z" level=info msg="CreateContainer within sandbox \"b3a368b5718419c28570d13899ae77abb90bc9f008b5398a79968a03b13c82d1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eaf4b4cf2bc550f92233e5be5a683c45711a5cf93f3dd91585ac580bf68fef7b\"" Feb 13 19:27:21.777667 containerd[1523]: time="2025-02-13T19:27:21.777622289Z" level=info 
msg="StartContainer for \"eaf4b4cf2bc550f92233e5be5a683c45711a5cf93f3dd91585ac580bf68fef7b\"" Feb 13 19:27:21.831588 containerd[1523]: time="2025-02-13T19:27:21.831548204Z" level=info msg="StartContainer for \"eaf4b4cf2bc550f92233e5be5a683c45711a5cf93f3dd91585ac580bf68fef7b\" returns successfully" Feb 13 19:27:22.720243 kubelet[2692]: E0213 19:27:22.720165 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:22.889178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3397717917.mount: Deactivated successfully. Feb 13 19:27:23.178905 containerd[1523]: time="2025-02-13T19:27:23.178816573Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:23.179590 containerd[1523]: time="2025-02-13T19:27:23.179550993Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 19:27:23.180138 containerd[1523]: time="2025-02-13T19:27:23.180083575Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:23.182404 containerd[1523]: time="2025-02-13T19:27:23.182364330Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:23.183663 containerd[1523]: time="2025-02-13T19:27:23.183505668Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.423196252s" Feb 13 19:27:23.183663 containerd[1523]: time="2025-02-13T19:27:23.183547716Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 19:27:23.187666 containerd[1523]: time="2025-02-13T19:27:23.187523956Z" level=info msg="CreateContainer within sandbox \"87eaa38182cda7bad4e7ffbe95cc119e4350d453ed8b1fde9c5c9b9e2c2dc87c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:27:23.202081 containerd[1523]: time="2025-02-13T19:27:23.202037769Z" level=info msg="CreateContainer within sandbox \"87eaa38182cda7bad4e7ffbe95cc119e4350d453ed8b1fde9c5c9b9e2c2dc87c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3d948b6c36009e1ebd17098e4e9b7c3b7c511bf6e269bf819699a733b036c4b9\"" Feb 13 19:27:23.202992 containerd[1523]: time="2025-02-13T19:27:23.202612998Z" level=info msg="StartContainer for \"3d948b6c36009e1ebd17098e4e9b7c3b7c511bf6e269bf819699a733b036c4b9\"" Feb 13 19:27:23.264699 containerd[1523]: time="2025-02-13T19:27:23.263747957Z" level=info msg="StartContainer for \"3d948b6c36009e1ebd17098e4e9b7c3b7c511bf6e269bf819699a733b036c4b9\" returns successfully" Feb 13 19:27:23.763571 kubelet[2692]: I0213 19:27:23.763498 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xwqd5" podStartSLOduration=2.763479819 podStartE2EDuration="2.763479819s" podCreationTimestamp="2025-02-13 19:27:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:27:22.729429857 +0000 UTC m=+18.156302403" watchObservedRunningTime="2025-02-13 19:27:23.763479819 +0000 UTC m=+19.190352365" Feb 13 19:27:24.695648 kubelet[2692]: I0213 19:27:24.695019 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-jn4m6" podStartSLOduration=2.267389963 podStartE2EDuration="3.694959201s" podCreationTimestamp="2025-02-13 19:27:21 +0000 UTC" firstStartedPulling="2025-02-13 19:27:21.758669152 +0000 UTC m=+17.185541698" lastFinishedPulling="2025-02-13 19:27:23.18623835 +0000 UTC m=+18.613110936" observedRunningTime="2025-02-13 19:27:23.763685978 +0000 UTC m=+19.190558564" watchObservedRunningTime="2025-02-13 19:27:24.694959201 +0000 UTC m=+20.121831787" Feb 13 19:27:27.493711 kubelet[2692]: I0213 19:27:27.493601 2692 topology_manager.go:215] "Topology Admit Handler" podUID="202483cc-9683-4539-8c84-1ea58e0d5226" podNamespace="calico-system" podName="calico-typha-8587784bf6-zljpq" Feb 13 19:27:27.518428 kubelet[2692]: I0213 19:27:27.515144 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99z9\" (UniqueName: \"kubernetes.io/projected/202483cc-9683-4539-8c84-1ea58e0d5226-kube-api-access-h99z9\") pod \"calico-typha-8587784bf6-zljpq\" (UID: \"202483cc-9683-4539-8c84-1ea58e0d5226\") " pod="calico-system/calico-typha-8587784bf6-zljpq" Feb 13 19:27:27.518428 kubelet[2692]: I0213 19:27:27.515189 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/202483cc-9683-4539-8c84-1ea58e0d5226-tigera-ca-bundle\") pod \"calico-typha-8587784bf6-zljpq\" (UID: \"202483cc-9683-4539-8c84-1ea58e0d5226\") " pod="calico-system/calico-typha-8587784bf6-zljpq" Feb 13 19:27:27.518428 kubelet[2692]: I0213 19:27:27.515206 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/202483cc-9683-4539-8c84-1ea58e0d5226-typha-certs\") pod \"calico-typha-8587784bf6-zljpq\" (UID: \"202483cc-9683-4539-8c84-1ea58e0d5226\") " pod="calico-system/calico-typha-8587784bf6-zljpq" Feb 13 19:27:27.695967 kubelet[2692]: I0213 19:27:27.692927 2692 topology_manager.go:215] "Topology Admit Handler" podUID="d9f16571-448f-4e21-bb4a-a844bf4d16db" podNamespace="calico-system" podName="calico-node-2jvbb" Feb 13 19:27:27.717085 kubelet[2692]: I0213 19:27:27.716671 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d9f16571-448f-4e21-bb4a-a844bf4d16db-cni-bin-dir\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717085 kubelet[2692]: I0213 19:27:27.716715 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d9f16571-448f-4e21-bb4a-a844bf4d16db-cni-log-dir\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717085 kubelet[2692]: I0213 19:27:27.716735 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/d9f16571-448f-4e21-bb4a-a844bf4d16db-var-run-calico\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717085 kubelet[2692]: I0213 19:27:27.716753 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9f16571-448f-4e21-bb4a-a844bf4d16db-lib-modules\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717085 kubelet[2692]: I0213 19:27:27.716771 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d9f16571-448f-4e21-bb4a-a844bf4d16db-policysync\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717331 kubelet[2692]: I0213 19:27:27.716787 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d9f16571-448f-4e21-bb4a-a844bf4d16db-node-certs\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717331 kubelet[2692]: I0213 19:27:27.716801 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9f16571-448f-4e21-bb4a-a844bf4d16db-var-lib-calico\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717331 kubelet[2692]: I0213 19:27:27.716816 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlcg6\" (UniqueName: \"kubernetes.io/projected/d9f16571-448f-4e21-bb4a-a844bf4d16db-kube-api-access-hlcg6\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717331 kubelet[2692]: I0213 19:27:27.716832 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9f16571-448f-4e21-bb4a-a844bf4d16db-tigera-ca-bundle\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717331 kubelet[2692]: I0213 19:27:27.716848 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d9f16571-448f-4e21-bb4a-a844bf4d16db-flexvol-driver-host\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717449 kubelet[2692]: I0213 19:27:27.716862 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d9f16571-448f-4e21-bb4a-a844bf4d16db-cni-net-dir\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.717449 kubelet[2692]: I0213 19:27:27.716877 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d9f16571-448f-4e21-bb4a-a844bf4d16db-xtables-lock\") pod \"calico-node-2jvbb\" (UID: \"d9f16571-448f-4e21-bb4a-a844bf4d16db\") " pod="calico-system/calico-node-2jvbb" Feb 13 19:27:27.803328 kubelet[2692]: E0213 19:27:27.803215 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:27.804623 containerd[1523]: time="2025-02-13T19:27:27.804483650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8587784bf6-zljpq,Uid:202483cc-9683-4539-8c84-1ea58e0d5226,Namespace:calico-system,Attempt:0,}" Feb 13 19:27:27.872771 containerd[1523]: time="2025-02-13T19:27:27.872540497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:27.872771 containerd[1523]: time="2025-02-13T19:27:27.872660356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:27.872771 containerd[1523]: time="2025-02-13T19:27:27.872690681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:27.875288 containerd[1523]: time="2025-02-13T19:27:27.872926359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:27.893540 kubelet[2692]: I0213 19:27:27.892934 2692 topology_manager.go:215] "Topology Admit Handler" podUID="046604b2-014e-4614-a6a9-a156d305f1ec" podNamespace="calico-system" podName="csi-node-driver-c7x2p" Feb 13 19:27:27.895280 kubelet[2692]: E0213 19:27:27.895135 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7x2p" podUID="046604b2-014e-4614-a6a9-a156d305f1ec" Feb 13 19:27:27.902319 kubelet[2692]: E0213 19:27:27.902089 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.902319 kubelet[2692]: W0213 19:27:27.902134 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.902319 kubelet[2692]: E0213 19:27:27.902156 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.902494 kubelet[2692]: E0213 19:27:27.902336 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.902494 kubelet[2692]: W0213 19:27:27.902344 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.902494 kubelet[2692]: E0213 19:27:27.902352 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:27.902569 kubelet[2692]: E0213 19:27:27.902503 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.902569 kubelet[2692]: W0213 19:27:27.902511 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.902569 kubelet[2692]: E0213 19:27:27.902520 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.903807 kubelet[2692]: E0213 19:27:27.902664 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.903807 kubelet[2692]: W0213 19:27:27.902674 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.903807 kubelet[2692]: E0213 19:27:27.902681 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.903807 kubelet[2692]: E0213 19:27:27.902825 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.903807 kubelet[2692]: W0213 19:27:27.902833 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.903807 kubelet[2692]: E0213 19:27:27.902840 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.903807 kubelet[2692]: E0213 19:27:27.902963 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.903807 kubelet[2692]: W0213 19:27:27.902988 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.903807 kubelet[2692]: E0213 19:27:27.902996 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.903807 kubelet[2692]: E0213 19:27:27.903143 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.904087 kubelet[2692]: W0213 19:27:27.903151 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.904087 kubelet[2692]: E0213 19:27:27.903187 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:27.904087 kubelet[2692]: E0213 19:27:27.903713 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.904087 kubelet[2692]: W0213 19:27:27.903729 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.906129 kubelet[2692]: E0213 19:27:27.903742 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.906129 kubelet[2692]: E0213 19:27:27.905438 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.906129 kubelet[2692]: W0213 19:27:27.905460 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.906129 kubelet[2692]: E0213 19:27:27.905473 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.906129 kubelet[2692]: E0213 19:27:27.905647 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.906129 kubelet[2692]: W0213 19:27:27.905654 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.906129 kubelet[2692]: E0213 19:27:27.905665 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.906129 kubelet[2692]: E0213 19:27:27.905817 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.906129 kubelet[2692]: W0213 19:27:27.905826 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.906129 kubelet[2692]: E0213 19:27:27.905833 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.907325 kubelet[2692]: E0213 19:27:27.906009 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.907325 kubelet[2692]: W0213 19:27:27.906019 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.907325 kubelet[2692]: E0213 19:27:27.906027 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:27.907325 kubelet[2692]: E0213 19:27:27.906195 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.907325 kubelet[2692]: W0213 19:27:27.906202 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.907325 kubelet[2692]: E0213 19:27:27.906210 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.907325 kubelet[2692]: E0213 19:27:27.906563 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.907325 kubelet[2692]: W0213 19:27:27.906575 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.907325 kubelet[2692]: E0213 19:27:27.906586 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.907325 kubelet[2692]: E0213 19:27:27.906934 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.907799 kubelet[2692]: W0213 19:27:27.906946 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.907799 kubelet[2692]: E0213 19:27:27.906956 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.907799 kubelet[2692]: E0213 19:27:27.907631 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.907799 kubelet[2692]: W0213 19:27:27.907643 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.907799 kubelet[2692]: E0213 19:27:27.907654 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.907932 kubelet[2692]: E0213 19:27:27.907862 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.907932 kubelet[2692]: W0213 19:27:27.907871 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.907932 kubelet[2692]: E0213 19:27:27.907886 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:27.908097 kubelet[2692]: E0213 19:27:27.908081 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.908097 kubelet[2692]: W0213 19:27:27.908093 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.908177 kubelet[2692]: E0213 19:27:27.908102 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.908391 kubelet[2692]: E0213 19:27:27.908379 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.908391 kubelet[2692]: W0213 19:27:27.908390 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.908453 kubelet[2692]: E0213 19:27:27.908399 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.908691 kubelet[2692]: E0213 19:27:27.908605 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.908691 kubelet[2692]: W0213 19:27:27.908617 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.908691 kubelet[2692]: E0213 19:27:27.908625 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.919046 kubelet[2692]: E0213 19:27:27.919013 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.919046 kubelet[2692]: W0213 19:27:27.919034 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.919046 kubelet[2692]: E0213 19:27:27.919053 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:27.919262 kubelet[2692]: I0213 19:27:27.919083 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/046604b2-014e-4614-a6a9-a156d305f1ec-varrun\") pod \"csi-node-driver-c7x2p\" (UID: \"046604b2-014e-4614-a6a9-a156d305f1ec\") " pod="calico-system/csi-node-driver-c7x2p" Feb 13 19:27:27.919338 kubelet[2692]: E0213 19:27:27.919308 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.919338 kubelet[2692]: W0213 19:27:27.919322 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.919338 kubelet[2692]: E0213 19:27:27.919335 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.919408 kubelet[2692]: I0213 19:27:27.919375 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6mpl\" (UniqueName: \"kubernetes.io/projected/046604b2-014e-4614-a6a9-a156d305f1ec-kube-api-access-b6mpl\") pod \"csi-node-driver-c7x2p\" (UID: \"046604b2-014e-4614-a6a9-a156d305f1ec\") " pod="calico-system/csi-node-driver-c7x2p" Feb 13 19:27:27.920057 kubelet[2692]: E0213 19:27:27.920010 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.920057 kubelet[2692]: W0213 19:27:27.920029 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.920057 kubelet[2692]: E0213 19:27:27.920049 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.920269 kubelet[2692]: E0213 19:27:27.920245 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.920304 kubelet[2692]: W0213 19:27:27.920269 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.920304 kubelet[2692]: E0213 19:27:27.920286 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.920480 kubelet[2692]: E0213 19:27:27.920466 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.920510 kubelet[2692]: W0213 19:27:27.920485 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.920510 kubelet[2692]: E0213 19:27:27.920503 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:27.920549 kubelet[2692]: I0213 19:27:27.920522 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/046604b2-014e-4614-a6a9-a156d305f1ec-kubelet-dir\") pod \"csi-node-driver-c7x2p\" (UID: \"046604b2-014e-4614-a6a9-a156d305f1ec\") " pod="calico-system/csi-node-driver-c7x2p" Feb 13 19:27:27.920728 kubelet[2692]: E0213 19:27:27.920702 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.920728 kubelet[2692]: W0213 19:27:27.920718 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.920792 kubelet[2692]: E0213 19:27:27.920778 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.920819 kubelet[2692]: I0213 19:27:27.920804 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/046604b2-014e-4614-a6a9-a156d305f1ec-socket-dir\") pod \"csi-node-driver-c7x2p\" (UID: \"046604b2-014e-4614-a6a9-a156d305f1ec\") " pod="calico-system/csi-node-driver-c7x2p" Feb 13 19:27:27.921002 kubelet[2692]: E0213 19:27:27.920988 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.921002 kubelet[2692]: W0213 19:27:27.921001 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.921179 kubelet[2692]: E0213 19:27:27.921099 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.921227 kubelet[2692]: E0213 19:27:27.921200 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.921227 kubelet[2692]: W0213 19:27:27.921208 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.921227 kubelet[2692]: E0213 19:27:27.921220 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.921392 kubelet[2692]: E0213 19:27:27.921378 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.921392 kubelet[2692]: W0213 19:27:27.921390 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.921439 kubelet[2692]: E0213 19:27:27.921398 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:27.921439 kubelet[2692]: I0213 19:27:27.921414 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/046604b2-014e-4614-a6a9-a156d305f1ec-registration-dir\") pod \"csi-node-driver-c7x2p\" (UID: \"046604b2-014e-4614-a6a9-a156d305f1ec\") " pod="calico-system/csi-node-driver-c7x2p" Feb 13 19:27:27.921616 kubelet[2692]: E0213 19:27:27.921600 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.921616 kubelet[2692]: W0213 19:27:27.921614 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.921672 kubelet[2692]: E0213 19:27:27.921624 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.922084 kubelet[2692]: E0213 19:27:27.921791 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.922084 kubelet[2692]: W0213 19:27:27.921803 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.922084 kubelet[2692]: E0213 19:27:27.921811 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.922084 kubelet[2692]: E0213 19:27:27.922042 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.922084 kubelet[2692]: W0213 19:27:27.922052 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.922084 kubelet[2692]: E0213 19:27:27.922083 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.922268 kubelet[2692]: E0213 19:27:27.922257 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.922268 kubelet[2692]: W0213 19:27:27.922266 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.922323 kubelet[2692]: E0213 19:27:27.922275 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:27.922662 kubelet[2692]: E0213 19:27:27.922646 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.922662 kubelet[2692]: W0213 19:27:27.922660 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.922735 kubelet[2692]: E0213 19:27:27.922669 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.922842 kubelet[2692]: E0213 19:27:27.922830 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:27.922842 kubelet[2692]: W0213 19:27:27.922841 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:27.922901 kubelet[2692]: E0213 19:27:27.922850 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:27.937419 containerd[1523]: time="2025-02-13T19:27:27.937379309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8587784bf6-zljpq,Uid:202483cc-9683-4539-8c84-1ea58e0d5226,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfd1bb1d6af0203d5f542467893463682fa59914a9d79915028cc835a07d6ede\"" Feb 13 19:27:27.938109 kubelet[2692]: E0213 19:27:27.938056 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:27.942002 containerd[1523]: time="2025-02-13T19:27:27.940177557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:27:28.004318 kubelet[2692]: E0213 19:27:28.004274 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:28.004824 containerd[1523]: time="2025-02-13T19:27:28.004788428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2jvbb,Uid:d9f16571-448f-4e21-bb4a-a844bf4d16db,Namespace:calico-system,Attempt:0,}" Feb 13 19:27:28.022849 kubelet[2692]: E0213 19:27:28.022797 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.022849 kubelet[2692]: W0213 19:27:28.022838 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.022849 kubelet[2692]: E0213 19:27:28.022861 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:28.023116 kubelet[2692]: E0213 19:27:28.023091 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.023116 kubelet[2692]: W0213 19:27:28.023108 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.023223 kubelet[2692]: E0213 19:27:28.023130 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.023375 kubelet[2692]: E0213 19:27:28.023347 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.023405 kubelet[2692]: W0213 19:27:28.023368 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.023426 kubelet[2692]: E0213 19:27:28.023404 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.023636 kubelet[2692]: E0213 19:27:28.023616 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.023636 kubelet[2692]: W0213 19:27:28.023635 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.023698 kubelet[2692]: E0213 19:27:28.023656 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.023865 kubelet[2692]: E0213 19:27:28.023850 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.023865 kubelet[2692]: W0213 19:27:28.023864 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.023920 kubelet[2692]: E0213 19:27:28.023880 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.024155 kubelet[2692]: E0213 19:27:28.024127 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.024155 kubelet[2692]: W0213 19:27:28.024141 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.024228 kubelet[2692]: E0213 19:27:28.024157 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:28.024491 kubelet[2692]: E0213 19:27:28.024476 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.024491 kubelet[2692]: W0213 19:27:28.024489 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.024550 kubelet[2692]: E0213 19:27:28.024521 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.026602 kubelet[2692]: E0213 19:27:28.025121 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.026602 kubelet[2692]: W0213 19:27:28.025140 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.026602 kubelet[2692]: E0213 19:27:28.025400 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.026602 kubelet[2692]: E0213 19:27:28.025628 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.026602 kubelet[2692]: W0213 19:27:28.025639 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.026602 kubelet[2692]: E0213 19:27:28.025688 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.026602 kubelet[2692]: E0213 19:27:28.025862 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.026602 kubelet[2692]: W0213 19:27:28.025872 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.026602 kubelet[2692]: E0213 19:27:28.025902 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.026602 kubelet[2692]: E0213 19:27:28.026224 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.026904 kubelet[2692]: W0213 19:27:28.026233 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.026904 kubelet[2692]: E0213 19:27:28.026264 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:28.026904 kubelet[2692]: E0213 19:27:28.026396 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.026904 kubelet[2692]: W0213 19:27:28.026403 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.026904 kubelet[2692]: E0213 19:27:28.026442 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.026904 kubelet[2692]: E0213 19:27:28.026563 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.026904 kubelet[2692]: W0213 19:27:28.026571 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.026904 kubelet[2692]: E0213 19:27:28.026637 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.026904 kubelet[2692]: E0213 19:27:28.026765 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.026904 kubelet[2692]: W0213 19:27:28.026773 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.027134 kubelet[2692]: E0213 19:27:28.026788 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.027134 kubelet[2692]: E0213 19:27:28.027005 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.027134 kubelet[2692]: W0213 19:27:28.027015 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.027134 kubelet[2692]: E0213 19:27:28.027029 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.027216 kubelet[2692]: E0213 19:27:28.027175 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.027216 kubelet[2692]: W0213 19:27:28.027183 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.027216 kubelet[2692]: E0213 19:27:28.027198 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:28.027404 kubelet[2692]: E0213 19:27:28.027383 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.027404 kubelet[2692]: W0213 19:27:28.027397 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.027479 kubelet[2692]: E0213 19:27:28.027409 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.027597 kubelet[2692]: E0213 19:27:28.027580 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.027597 kubelet[2692]: W0213 19:27:28.027593 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.027669 kubelet[2692]: E0213 19:27:28.027645 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.027738 kubelet[2692]: E0213 19:27:28.027727 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.027738 kubelet[2692]: W0213 19:27:28.027737 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.027788 kubelet[2692]: E0213 19:27:28.027762 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.028655 kubelet[2692]: E0213 19:27:28.028623 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.028655 kubelet[2692]: W0213 19:27:28.028640 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.028752 kubelet[2692]: E0213 19:27:28.028678 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.028851 kubelet[2692]: E0213 19:27:28.028832 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.028851 kubelet[2692]: W0213 19:27:28.028844 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.028901 kubelet[2692]: E0213 19:27:28.028886 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:28.029324 kubelet[2692]: E0213 19:27:28.029288 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.029324 kubelet[2692]: W0213 19:27:28.029305 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.029409 kubelet[2692]: E0213 19:27:28.029356 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.029700 kubelet[2692]: E0213 19:27:28.029673 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.029700 kubelet[2692]: W0213 19:27:28.029688 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.029762 kubelet[2692]: E0213 19:27:28.029709 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.030170 kubelet[2692]: E0213 19:27:28.030150 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.030170 kubelet[2692]: W0213 19:27:28.030167 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.030269 kubelet[2692]: E0213 19:27:28.030184 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.030395 kubelet[2692]: E0213 19:27:28.030378 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.030395 kubelet[2692]: W0213 19:27:28.030392 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.030465 kubelet[2692]: E0213 19:27:28.030407 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.033029 containerd[1523]: time="2025-02-13T19:27:28.032825849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:28.033029 containerd[1523]: time="2025-02-13T19:27:28.032906461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:28.033029 containerd[1523]: time="2025-02-13T19:27:28.032917543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:28.033029 containerd[1523]: time="2025-02-13T19:27:28.033041322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:28.039585 kubelet[2692]: E0213 19:27:28.039562 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:28.039585 kubelet[2692]: W0213 19:27:28.039583 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:28.039719 kubelet[2692]: E0213 19:27:28.039601 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:28.068262 containerd[1523]: time="2025-02-13T19:27:28.068154467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2jvbb,Uid:d9f16571-448f-4e21-bb4a-a844bf4d16db,Namespace:calico-system,Attempt:0,} returns sandbox id \"c63a16504868d7008f177beaba4580b4943ef6486e34921bb91b3e0e1db567cd\"" Feb 13 19:27:28.068941 kubelet[2692]: E0213 19:27:28.068909 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:28.963243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177842811.mount: Deactivated successfully. Feb 13 19:27:29.240078 containerd[1523]: time="2025-02-13T19:27:29.239953046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:29.240662 containerd[1523]: time="2025-02-13T19:27:29.240625065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 19:27:29.241821 containerd[1523]: time="2025-02-13T19:27:29.241788516Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:29.243630 containerd[1523]: time="2025-02-13T19:27:29.243604184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:29.245069 containerd[1523]: time="2025-02-13T19:27:29.245036955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.304819391s" Feb 13 19:27:29.245132 containerd[1523]: time="2025-02-13T19:27:29.245069959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 19:27:29.246004 containerd[1523]: time="2025-02-13T19:27:29.245870317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:27:29.261089 containerd[1523]: time="2025-02-13T19:27:29.261049112Z" level=info msg="CreateContainer within sandbox \"cfd1bb1d6af0203d5f542467893463682fa59914a9d79915028cc835a07d6ede\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:27:29.269337 containerd[1523]: 
time="2025-02-13T19:27:29.269274923Z" level=info msg="CreateContainer within sandbox \"cfd1bb1d6af0203d5f542467893463682fa59914a9d79915028cc835a07d6ede\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"adef3b48b07cb09e2f01dd8d7fe0b84bfb16e4be66391dbd7a8c6e70c8b82dde\"" Feb 13 19:27:29.269995 containerd[1523]: time="2025-02-13T19:27:29.269843046Z" level=info msg="StartContainer for \"adef3b48b07cb09e2f01dd8d7fe0b84bfb16e4be66391dbd7a8c6e70c8b82dde\"" Feb 13 19:27:29.330113 containerd[1523]: time="2025-02-13T19:27:29.330061551Z" level=info msg="StartContainer for \"adef3b48b07cb09e2f01dd8d7fe0b84bfb16e4be66391dbd7a8c6e70c8b82dde\" returns successfully" Feb 13 19:27:29.675588 kubelet[2692]: E0213 19:27:29.675535 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7x2p" podUID="046604b2-014e-4614-a6a9-a156d305f1ec" Feb 13 19:27:29.743876 kubelet[2692]: E0213 19:27:29.742647 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:29.753549 kubelet[2692]: I0213 19:27:29.753476 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8587784bf6-zljpq" podStartSLOduration=1.446636134 podStartE2EDuration="2.753458517s" podCreationTimestamp="2025-02-13 19:27:27 +0000 UTC" firstStartedPulling="2025-02-13 19:27:27.938900393 +0000 UTC m=+23.365772979" lastFinishedPulling="2025-02-13 19:27:29.245722776 +0000 UTC m=+24.672595362" observedRunningTime="2025-02-13 19:27:29.753097344 +0000 UTC m=+25.179969890" watchObservedRunningTime="2025-02-13 19:27:29.753458517 +0000 UTC m=+25.180331063" Feb 13 19:27:29.821879 kubelet[2692]: E0213 19:27:29.821838 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.821879 kubelet[2692]: W0213 19:27:29.821866 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.821879 kubelet[2692]: E0213 19:27:29.821885 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.822103 kubelet[2692]: E0213 19:27:29.822089 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.822103 kubelet[2692]: W0213 19:27:29.822101 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.822164 kubelet[2692]: E0213 19:27:29.822110 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:29.822268 kubelet[2692]: E0213 19:27:29.822257 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.822268 kubelet[2692]: W0213 19:27:29.822267 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.822320 kubelet[2692]: E0213 19:27:29.822275 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.822439 kubelet[2692]: E0213 19:27:29.822420 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.822439 kubelet[2692]: W0213 19:27:29.822431 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.822439 kubelet[2692]: E0213 19:27:29.822438 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.822586 kubelet[2692]: E0213 19:27:29.822569 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.822586 kubelet[2692]: W0213 19:27:29.822579 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.822635 kubelet[2692]: E0213 19:27:29.822587 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.822709 kubelet[2692]: E0213 19:27:29.822698 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.822709 kubelet[2692]: W0213 19:27:29.822708 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.822766 kubelet[2692]: E0213 19:27:29.822715 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.822848 kubelet[2692]: E0213 19:27:29.822837 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.822848 kubelet[2692]: W0213 19:27:29.822847 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.822894 kubelet[2692]: E0213 19:27:29.822854 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:29.822975 kubelet[2692]: E0213 19:27:29.822963 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.823002 kubelet[2692]: W0213 19:27:29.822985 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.823002 kubelet[2692]: E0213 19:27:29.822993 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.823134 kubelet[2692]: E0213 19:27:29.823122 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.823158 kubelet[2692]: W0213 19:27:29.823133 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.823158 kubelet[2692]: E0213 19:27:29.823140 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.823273 kubelet[2692]: E0213 19:27:29.823264 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.823294 kubelet[2692]: W0213 19:27:29.823273 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.823294 kubelet[2692]: E0213 19:27:29.823280 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.823411 kubelet[2692]: E0213 19:27:29.823402 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.823435 kubelet[2692]: W0213 19:27:29.823411 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.823435 kubelet[2692]: E0213 19:27:29.823418 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.823568 kubelet[2692]: E0213 19:27:29.823557 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.823568 kubelet[2692]: W0213 19:27:29.823566 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.823620 kubelet[2692]: E0213 19:27:29.823574 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:29.823726 kubelet[2692]: E0213 19:27:29.823708 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.823726 kubelet[2692]: W0213 19:27:29.823724 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.823831 kubelet[2692]: E0213 19:27:29.823736 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.823871 kubelet[2692]: E0213 19:27:29.823859 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.823871 kubelet[2692]: W0213 19:27:29.823869 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.823917 kubelet[2692]: E0213 19:27:29.823877 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.824019 kubelet[2692]: E0213 19:27:29.824008 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.824045 kubelet[2692]: W0213 19:27:29.824018 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.824045 kubelet[2692]: E0213 19:27:29.824026 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.839470 kubelet[2692]: E0213 19:27:29.839438 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.839470 kubelet[2692]: W0213 19:27:29.839461 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.839598 kubelet[2692]: E0213 19:27:29.839478 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.839718 kubelet[2692]: E0213 19:27:29.839704 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.839718 kubelet[2692]: W0213 19:27:29.839716 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.839790 kubelet[2692]: E0213 19:27:29.839736 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:29.839906 kubelet[2692]: E0213 19:27:29.839894 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.839906 kubelet[2692]: W0213 19:27:29.839905 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.840014 kubelet[2692]: E0213 19:27:29.839917 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.840137 kubelet[2692]: E0213 19:27:29.840124 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.840137 kubelet[2692]: W0213 19:27:29.840135 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.840193 kubelet[2692]: E0213 19:27:29.840149 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.840317 kubelet[2692]: E0213 19:27:29.840296 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.840317 kubelet[2692]: W0213 19:27:29.840307 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.840376 kubelet[2692]: E0213 19:27:29.840317 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.840471 kubelet[2692]: E0213 19:27:29.840460 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.840471 kubelet[2692]: W0213 19:27:29.840469 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.840512 kubelet[2692]: E0213 19:27:29.840477 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.840644 kubelet[2692]: E0213 19:27:29.840626 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.840644 kubelet[2692]: W0213 19:27:29.840637 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.840695 kubelet[2692]: E0213 19:27:29.840649 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:29.840944 kubelet[2692]: E0213 19:27:29.840923 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.840944 kubelet[2692]: W0213 19:27:29.840942 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.841028 kubelet[2692]: E0213 19:27:29.840963 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.841149 kubelet[2692]: E0213 19:27:29.841134 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.841149 kubelet[2692]: W0213 19:27:29.841147 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.841217 kubelet[2692]: E0213 19:27:29.841160 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.841337 kubelet[2692]: E0213 19:27:29.841327 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.841337 kubelet[2692]: W0213 19:27:29.841337 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.841397 kubelet[2692]: E0213 19:27:29.841346 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.841488 kubelet[2692]: E0213 19:27:29.841479 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.841488 kubelet[2692]: W0213 19:27:29.841488 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.841543 kubelet[2692]: E0213 19:27:29.841499 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.841987 kubelet[2692]: E0213 19:27:29.841954 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.842032 kubelet[2692]: W0213 19:27:29.841982 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.842032 kubelet[2692]: E0213 19:27:29.842003 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:29.842508 kubelet[2692]: E0213 19:27:29.842493 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.842538 kubelet[2692]: W0213 19:27:29.842508 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.842627 kubelet[2692]: E0213 19:27:29.842587 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.842766 kubelet[2692]: E0213 19:27:29.842754 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.842799 kubelet[2692]: W0213 19:27:29.842770 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.842896 kubelet[2692]: E0213 19:27:29.842868 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.843010 kubelet[2692]: E0213 19:27:29.842997 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.843010 kubelet[2692]: W0213 19:27:29.843010 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.843076 kubelet[2692]: E0213 19:27:29.843020 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.843170 kubelet[2692]: E0213 19:27:29.843161 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.843197 kubelet[2692]: W0213 19:27:29.843170 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.843197 kubelet[2692]: E0213 19:27:29.843179 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:29.843376 kubelet[2692]: E0213 19:27:29.843366 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.843376 kubelet[2692]: W0213 19:27:29.843375 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.843437 kubelet[2692]: E0213 19:27:29.843382 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:27:29.843695 kubelet[2692]: E0213 19:27:29.843684 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:27:29.843695 kubelet[2692]: W0213 19:27:29.843695 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:27:29.843760 kubelet[2692]: E0213 19:27:29.843703 2692 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:27:30.227908 containerd[1523]: time="2025-02-13T19:27:30.227785348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:30.229817 containerd[1523]: time="2025-02-13T19:27:30.228218329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 19:27:30.229817 containerd[1523]: time="2025-02-13T19:27:30.229028283Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:30.231230 containerd[1523]: time="2025-02-13T19:27:30.231192749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:30.231846 containerd[1523]: time="2025-02-13T19:27:30.231807836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 985.905634ms" Feb 13 19:27:30.231879 containerd[1523]: time="2025-02-13T19:27:30.231844682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:27:30.236832 containerd[1523]: time="2025-02-13T19:27:30.236794542Z" level=info msg="CreateContainer within sandbox \"c63a16504868d7008f177beaba4580b4943ef6486e34921bb91b3e0e1db567cd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:27:30.273869 containerd[1523]: time="2025-02-13T19:27:30.273823018Z" level=info msg="CreateContainer within sandbox \"c63a16504868d7008f177beaba4580b4943ef6486e34921bb91b3e0e1db567cd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"186967d1561b67fc5e8206a0ded78d0b026d328e644a69541ceed267d3e0a6d9\"" Feb 13 19:27:30.274536 containerd[1523]: time="2025-02-13T19:27:30.274498033Z" level=info msg="StartContainer for \"186967d1561b67fc5e8206a0ded78d0b026d328e644a69541ceed267d3e0a6d9\"" Feb 13 19:27:30.322613 containerd[1523]: time="2025-02-13T19:27:30.322568071Z" level=info msg="StartContainer for \"186967d1561b67fc5e8206a0ded78d0b026d328e644a69541ceed267d3e0a6d9\" returns successfully" Feb 13 19:27:30.399586 containerd[1523]: time="2025-02-13T19:27:30.396808650Z" level=info msg="shim disconnected" 
id=186967d1561b67fc5e8206a0ded78d0b026d328e644a69541ceed267d3e0a6d9 namespace=k8s.io Feb 13 19:27:30.399586 containerd[1523]: time="2025-02-13T19:27:30.399578722Z" level=warning msg="cleaning up after shim disconnected" id=186967d1561b67fc5e8206a0ded78d0b026d328e644a69541ceed267d3e0a6d9 namespace=k8s.io Feb 13 19:27:30.399586 containerd[1523]: time="2025-02-13T19:27:30.399593124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:27:30.629148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-186967d1561b67fc5e8206a0ded78d0b026d328e644a69541ceed267d3e0a6d9-rootfs.mount: Deactivated successfully. Feb 13 19:27:30.748492 kubelet[2692]: I0213 19:27:30.747808 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:27:30.748492 kubelet[2692]: E0213 19:27:30.748321 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:30.750011 containerd[1523]: time="2025-02-13T19:27:30.749400672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:27:30.751715 kubelet[2692]: E0213 19:27:30.751661 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:31.674924 kubelet[2692]: E0213 19:27:31.674874 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7x2p" podUID="046604b2-014e-4614-a6a9-a156d305f1ec" Feb 13 19:27:32.985723 containerd[1523]: time="2025-02-13T19:27:32.985674084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:32.989995 containerd[1523]: time="2025-02-13T19:27:32.988997839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 19:27:32.990663 containerd[1523]: time="2025-02-13T19:27:32.990629212Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:33.002807 containerd[1523]: time="2025-02-13T19:27:33.002574208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:33.003386 containerd[1523]: time="2025-02-13T19:27:33.003345025Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.253907069s" Feb 13 19:27:33.003593 containerd[1523]: time="2025-02-13T19:27:33.003487323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 19:27:33.005598 containerd[1523]: time="2025-02-13T19:27:33.005555744Z" level=info msg="CreateContainer within sandbox 
\"c63a16504868d7008f177beaba4580b4943ef6486e34921bb91b3e0e1db567cd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:27:33.030945 containerd[1523]: time="2025-02-13T19:27:33.030815890Z" level=info msg="CreateContainer within sandbox \"c63a16504868d7008f177beaba4580b4943ef6486e34921bb91b3e0e1db567cd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f5025f0703ffab747584e7821bcf8c484197b6951dcd39a4d0f69b4b4c9aeeb\"" Feb 13 19:27:33.031372 containerd[1523]: time="2025-02-13T19:27:33.031344077Z" level=info msg="StartContainer for \"1f5025f0703ffab747584e7821bcf8c484197b6951dcd39a4d0f69b4b4c9aeeb\"" Feb 13 19:27:33.159432 containerd[1523]: time="2025-02-13T19:27:33.159386345Z" level=info msg="StartContainer for \"1f5025f0703ffab747584e7821bcf8c484197b6951dcd39a4d0f69b4b4c9aeeb\" returns successfully" Feb 13 19:27:33.641539 containerd[1523]: time="2025-02-13T19:27:33.641342531Z" level=info msg="shim disconnected" id=1f5025f0703ffab747584e7821bcf8c484197b6951dcd39a4d0f69b4b4c9aeeb namespace=k8s.io Feb 13 19:27:33.641539 containerd[1523]: time="2025-02-13T19:27:33.641395617Z" level=warning msg="cleaning up after shim disconnected" id=1f5025f0703ffab747584e7821bcf8c484197b6951dcd39a4d0f69b4b4c9aeeb namespace=k8s.io Feb 13 19:27:33.641539 containerd[1523]: time="2025-02-13T19:27:33.641403698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:27:33.650918 containerd[1523]: time="2025-02-13T19:27:33.650864211Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:27:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:27:33.675610 kubelet[2692]: E0213 19:27:33.675552 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7x2p" podUID="046604b2-014e-4614-a6a9-a156d305f1ec" Feb 13 19:27:33.719912 kubelet[2692]: I0213 19:27:33.719884 2692 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:27:33.750928 kubelet[2692]: I0213 19:27:33.750834 2692 topology_manager.go:215] "Topology Admit Handler" podUID="7fc34767-db27-421e-861b-6bd72627f37b" podNamespace="calico-apiserver" podName="calico-apiserver-554b9784b8-r6m9z" Feb 13 19:27:33.751843 kubelet[2692]: I0213 19:27:33.751017 2692 topology_manager.go:215] "Topology Admit Handler" podUID="d3fc815c-da0d-4681-bd91-c9162f51c3d8" podNamespace="calico-apiserver" podName="calico-apiserver-554b9784b8-gpzrw" Feb 13 19:27:33.751843 kubelet[2692]: I0213 19:27:33.751109 2692 topology_manager.go:215] "Topology Admit Handler" podUID="6023b2dc-e788-490d-952a-daba9fbad29a" podNamespace="calico-system" podName="calico-kube-controllers-cd8b599d7-g4h4n" Feb 13 19:27:33.751843 kubelet[2692]: I0213 19:27:33.751579 2692 topology_manager.go:215] "Topology Admit Handler" podUID="623df41b-21a5-4acd-90d5-1d14fa054355" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hzzls" Feb 13 19:27:33.755905 kubelet[2692]: I0213 19:27:33.755867 2692 topology_manager.go:215] "Topology Admit Handler" podUID="1e9e0e70-1c2f-4d7c-8d2d-f775675262e1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dshsv" Feb 13 19:27:33.763762 kubelet[2692]: E0213 19:27:33.763717 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:33.766050 containerd[1523]: time="2025-02-13T19:27:33.765203152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:27:33.768529 kubelet[2692]: I0213 19:27:33.768488 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3fc815c-da0d-4681-bd91-c9162f51c3d8-calico-apiserver-certs\") pod \"calico-apiserver-554b9784b8-gpzrw\" (UID: \"d3fc815c-da0d-4681-bd91-c9162f51c3d8\") " pod="calico-apiserver/calico-apiserver-554b9784b8-gpzrw" Feb 13 19:27:33.768635 kubelet[2692]: I0213 19:27:33.768550 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxl5r\" (UniqueName: \"kubernetes.io/projected/1e9e0e70-1c2f-4d7c-8d2d-f775675262e1-kube-api-access-jxl5r\") pod \"coredns-7db6d8ff4d-dshsv\" (UID: \"1e9e0e70-1c2f-4d7c-8d2d-f775675262e1\") " pod="kube-system/coredns-7db6d8ff4d-dshsv" Feb 13 19:27:33.768635 kubelet[2692]: I0213 19:27:33.768575 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6023b2dc-e788-490d-952a-daba9fbad29a-tigera-ca-bundle\") pod \"calico-kube-controllers-cd8b599d7-g4h4n\" (UID: \"6023b2dc-e788-490d-952a-daba9fbad29a\") " pod="calico-system/calico-kube-controllers-cd8b599d7-g4h4n" Feb 13 19:27:33.768635 kubelet[2692]: I0213 19:27:33.768592 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e9e0e70-1c2f-4d7c-8d2d-f775675262e1-config-volume\") pod \"coredns-7db6d8ff4d-dshsv\" (UID: \"1e9e0e70-1c2f-4d7c-8d2d-f775675262e1\") " pod="kube-system/coredns-7db6d8ff4d-dshsv" Feb 13 19:27:33.768635 kubelet[2692]: I0213 19:27:33.768610 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7fc34767-db27-421e-861b-6bd72627f37b-calico-apiserver-certs\") pod \"calico-apiserver-554b9784b8-r6m9z\" (UID: \"7fc34767-db27-421e-861b-6bd72627f37b\") " pod="calico-apiserver/calico-apiserver-554b9784b8-r6m9z" Feb 13 19:27:33.768635 kubelet[2692]: I0213 19:27:33.768630 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7k6v\" (UniqueName: \"kubernetes.io/projected/7fc34767-db27-421e-861b-6bd72627f37b-kube-api-access-k7k6v\") pod \"calico-apiserver-554b9784b8-r6m9z\" (UID: \"7fc34767-db27-421e-861b-6bd72627f37b\") " pod="calico-apiserver/calico-apiserver-554b9784b8-r6m9z" Feb 13 19:27:33.768759 kubelet[2692]: I0213 19:27:33.768647 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2dq\" (UniqueName: \"kubernetes.io/projected/d3fc815c-da0d-4681-bd91-c9162f51c3d8-kube-api-access-7l2dq\") pod \"calico-apiserver-554b9784b8-gpzrw\" (UID: \"d3fc815c-da0d-4681-bd91-c9162f51c3d8\") " pod="calico-apiserver/calico-apiserver-554b9784b8-gpzrw" Feb 13 19:27:33.768759 kubelet[2692]: I0213 19:27:33.768664 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7d8\" (UniqueName: \"kubernetes.io/projected/623df41b-21a5-4acd-90d5-1d14fa054355-kube-api-access-5x7d8\") pod 
\"coredns-7db6d8ff4d-hzzls\" (UID: \"623df41b-21a5-4acd-90d5-1d14fa054355\") " pod="kube-system/coredns-7db6d8ff4d-hzzls" Feb 13 19:27:33.768759 kubelet[2692]: I0213 19:27:33.768680 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623df41b-21a5-4acd-90d5-1d14fa054355-config-volume\") pod \"coredns-7db6d8ff4d-hzzls\" (UID: \"623df41b-21a5-4acd-90d5-1d14fa054355\") " pod="kube-system/coredns-7db6d8ff4d-hzzls" Feb 13 19:27:33.768759 kubelet[2692]: I0213 19:27:33.768704 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clshk\" (UniqueName: \"kubernetes.io/projected/6023b2dc-e788-490d-952a-daba9fbad29a-kube-api-access-clshk\") pod \"calico-kube-controllers-cd8b599d7-g4h4n\" (UID: \"6023b2dc-e788-490d-952a-daba9fbad29a\") " pod="calico-system/calico-kube-controllers-cd8b599d7-g4h4n" Feb 13 19:27:34.033192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f5025f0703ffab747584e7821bcf8c484197b6951dcd39a4d0f69b4b4c9aeeb-rootfs.mount: Deactivated successfully. Feb 13 19:27:34.056153 containerd[1523]: time="2025-02-13T19:27:34.056084433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b9784b8-gpzrw,Uid:d3fc815c-da0d-4681-bd91-c9162f51c3d8,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:27:34.056714 containerd[1523]: time="2025-02-13T19:27:34.056335823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b9784b8-r6m9z,Uid:7fc34767-db27-421e-861b-6bd72627f37b,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:27:34.059978 kubelet[2692]: E0213 19:27:34.059948 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:34.060409 containerd[1523]: time="2025-02-13T19:27:34.060377315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzzls,Uid:623df41b-21a5-4acd-90d5-1d14fa054355,Namespace:kube-system,Attempt:0,}" Feb 13 19:27:34.063594 containerd[1523]: time="2025-02-13T19:27:34.061094602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8b599d7-g4h4n,Uid:6023b2dc-e788-490d-952a-daba9fbad29a,Namespace:calico-system,Attempt:0,}" Feb 13 19:27:34.063699 kubelet[2692]: E0213 19:27:34.063635 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:34.064094 containerd[1523]: time="2025-02-13T19:27:34.064056643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dshsv,Uid:1e9e0e70-1c2f-4d7c-8d2d-f775675262e1,Namespace:kube-system,Attempt:0,}" Feb 13 19:27:34.526574 containerd[1523]: time="2025-02-13T19:27:34.526501498Z" level=error msg="Failed to destroy network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.529086 containerd[1523]: time="2025-02-13T19:27:34.528883028Z" level=error msg="encountered an error cleaning up failed sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.529262 containerd[1523]: time="2025-02-13T19:27:34.529235590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b9784b8-r6m9z,Uid:7fc34767-db27-421e-861b-6bd72627f37b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.532441 containerd[1523]: time="2025-02-13T19:27:34.532403896Z" level=error msg="Failed to destroy network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.532766 containerd[1523]: time="2025-02-13T19:27:34.532740817Z" level=error msg="encountered an error cleaning up failed sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.532803 containerd[1523]: time="2025-02-13T19:27:34.532788383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8b599d7-g4h4n,Uid:6023b2dc-e788-490d-952a-daba9fbad29a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.535266 kubelet[2692]: E0213 19:27:34.535216 2692 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.535376 kubelet[2692]: E0213 19:27:34.535314 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-554b9784b8-r6m9z" Feb 13 19:27:34.535376 kubelet[2692]: E0213 19:27:34.535216 2692 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.535428 kubelet[2692]: 
E0213 19:27:34.535340 2692 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-554b9784b8-r6m9z" Feb 13 19:27:34.535428 kubelet[2692]: E0213 19:27:34.535389 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd8b599d7-g4h4n" Feb 13 19:27:34.535428 kubelet[2692]: E0213 19:27:34.535412 2692 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd8b599d7-g4h4n" Feb 13 19:27:34.535496 kubelet[2692]: E0213 19:27:34.535433 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-554b9784b8-r6m9z_calico-apiserver(7fc34767-db27-421e-861b-6bd72627f37b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-554b9784b8-r6m9z_calico-apiserver(7fc34767-db27-421e-861b-6bd72627f37b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-554b9784b8-r6m9z" podUID="7fc34767-db27-421e-861b-6bd72627f37b" Feb 13 19:27:34.535496 kubelet[2692]: E0213 19:27:34.535447 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd8b599d7-g4h4n_calico-system(6023b2dc-e788-490d-952a-daba9fbad29a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd8b599d7-g4h4n_calico-system(6023b2dc-e788-490d-952a-daba9fbad29a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd8b599d7-g4h4n" podUID="6023b2dc-e788-490d-952a-daba9fbad29a" Feb 13 19:27:34.537206 containerd[1523]: time="2025-02-13T19:27:34.537162795Z" level=error msg="Failed to destroy network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 19:27:34.537692 containerd[1523]: time="2025-02-13T19:27:34.537667016Z" level=error msg="encountered an error cleaning up failed sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.537805 containerd[1523]: time="2025-02-13T19:27:34.537783030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b9784b8-gpzrw,Uid:d3fc815c-da0d-4681-bd91-c9162f51c3d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.538093 kubelet[2692]: E0213 19:27:34.538065 2692 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.538157 kubelet[2692]: E0213 19:27:34.538109 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-554b9784b8-gpzrw" Feb 13 19:27:34.538157 kubelet[2692]: E0213 19:27:34.538129 2692 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-554b9784b8-gpzrw" Feb 13 19:27:34.538213 kubelet[2692]: E0213 19:27:34.538170 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-554b9784b8-gpzrw_calico-apiserver(d3fc815c-da0d-4681-bd91-c9162f51c3d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-554b9784b8-gpzrw_calico-apiserver(d3fc815c-da0d-4681-bd91-c9162f51c3d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-554b9784b8-gpzrw" podUID="d3fc815c-da0d-4681-bd91-c9162f51c3d8" Feb 13 19:27:34.540650 containerd[1523]: time="2025-02-13T19:27:34.540607574Z" level=error msg="Failed to destroy network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.541051 containerd[1523]: time="2025-02-13T19:27:34.540945415Z" level=error msg="encountered an error cleaning up failed sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.541051 containerd[1523]: time="2025-02-13T19:27:34.541011583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dshsv,Uid:1e9e0e70-1c2f-4d7c-8d2d-f775675262e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.541219 kubelet[2692]: E0213 19:27:34.541183 2692 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.541265 kubelet[2692]: E0213 19:27:34.541238 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dshsv" Feb 13 19:27:34.541265 kubelet[2692]: E0213 19:27:34.541257 2692 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dshsv" Feb 13 19:27:34.541330 kubelet[2692]: E0213 19:27:34.541290 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dshsv_kube-system(1e9e0e70-1c2f-4d7c-8d2d-f775675262e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dshsv_kube-system(1e9e0e70-1c2f-4d7c-8d2d-f775675262e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dshsv" podUID="1e9e0e70-1c2f-4d7c-8d2d-f775675262e1" Feb 13 19:27:34.543264 containerd[1523]: time="2025-02-13T19:27:34.543171846Z" level=error msg="Failed to destroy network for sandbox 
\"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.543583 containerd[1523]: time="2025-02-13T19:27:34.543531169Z" level=error msg="encountered an error cleaning up failed sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.543724 containerd[1523]: time="2025-02-13T19:27:34.543572695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzzls,Uid:623df41b-21a5-4acd-90d5-1d14fa054355,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.544118 kubelet[2692]: E0213 19:27:34.544084 2692 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.544192 kubelet[2692]: E0213 19:27:34.544130 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hzzls" Feb 13 19:27:34.544192 kubelet[2692]: E0213 19:27:34.544147 2692 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hzzls" Feb 13 19:27:34.544257 kubelet[2692]: E0213 19:27:34.544188 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hzzls_kube-system(623df41b-21a5-4acd-90d5-1d14fa054355)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hzzls_kube-system(623df41b-21a5-4acd-90d5-1d14fa054355)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hzzls" podUID="623df41b-21a5-4acd-90d5-1d14fa054355" Feb 13 19:27:34.766404 kubelet[2692]: I0213 19:27:34.766372 2692 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:27:34.767510 containerd[1523]: time="2025-02-13T19:27:34.767159453Z" level=info msg="StopPodSandbox for \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\"" Feb 13 19:27:34.768045 containerd[1523]: time="2025-02-13T19:27:34.767830295Z" level=info msg="Ensure that sandbox 63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46 in task-service has been cleanup successfully" Feb 13 19:27:34.768951 kubelet[2692]: I0213 19:27:34.768928 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:27:34.771623 kubelet[2692]: I0213 19:27:34.770504 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:27:34.771733 containerd[1523]: time="2025-02-13T19:27:34.771374206Z" level=info msg="StopPodSandbox for \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\"" Feb 13 19:27:34.771733 containerd[1523]: time="2025-02-13T19:27:34.771517983Z" level=info msg="Ensure that sandbox 31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97 in task-service has been cleanup successfully" Feb 13 19:27:34.772694 containerd[1523]: time="2025-02-13T19:27:34.772670484Z" level=info msg="StopPodSandbox for \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\"" Feb 13 19:27:34.772922 containerd[1523]: time="2025-02-13T19:27:34.772899912Z" level=info msg="Ensure that sandbox 7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30 in task-service has been cleanup successfully" Feb 13 19:27:34.776102 kubelet[2692]: I0213 19:27:34.776076 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:27:34.776850 containerd[1523]: time="2025-02-13T19:27:34.776760341Z" level=info msg="StopPodSandbox for \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\"" Feb 13 19:27:34.778210 containerd[1523]: time="2025-02-13T19:27:34.778181874Z" level=info msg="Ensure that sandbox cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527 in task-service has been cleanup successfully" Feb 13 19:27:34.780886 kubelet[2692]: I0213 19:27:34.780757 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:27:34.782495 containerd[1523]: time="2025-02-13T19:27:34.782459955Z" level=info msg="StopPodSandbox for \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\"" Feb 13 19:27:34.782644 containerd[1523]: time="2025-02-13T19:27:34.782624695Z" level=info msg="Ensure that sandbox 197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8 in task-service has been cleanup successfully" Feb 13 19:27:34.829697 containerd[1523]: time="2025-02-13T19:27:34.829646815Z" level=error msg="StopPodSandbox for \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\" failed" error="failed to destroy network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
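Every failed RunPodSandbox and StopPodSandbox call above fails for the same reason: the Calico CNI plugin stats /var/lib/calico/nodename before it will add or delete pod networking, and that file only appears once the calico/node container is running and has mounted /var/lib/calico/ (which is exactly what the repeated error text says). A minimal Go sketch of that readiness gate, written as a simplified illustration rather than the plugin's actual source:

package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by calico/node after it registers this host;
// until it exists, CNI add/delete calls are refused.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Mirrors the log error: "stat /var/lib/calico/nodename: no such file or
		// directory: check that the calico/node container is running and has
		// mounted /var/lib/calico/"
		fmt.Fprintf(os.Stderr, "calico not ready: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("nodename present; pod sandbox networking can proceed")
}

The kubelet keeps retrying these sandboxes, so they are expected to come up on their own once the calico-node pod (whose pod2daemon-flexvol and cni images were pulled above, with node:v3.29.1 still being pulled) writes that file.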
Feb 13 19:27:34.831573 containerd[1523]: time="2025-02-13T19:27:34.831537485Z" level=error msg="StopPodSandbox for \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\" failed" error="failed to destroy network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.833415 kubelet[2692]: E0213 19:27:34.833366 2692 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:27:34.833499 kubelet[2692]: E0213 19:27:34.833438 2692 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46"} Feb 13 19:27:34.833526 kubelet[2692]: E0213 19:27:34.833499 2692 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3fc815c-da0d-4681-bd91-c9162f51c3d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:27:34.833588 kubelet[2692]: E0213 19:27:34.833521 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3fc815c-da0d-4681-bd91-c9162f51c3d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-554b9784b8-gpzrw" podUID="d3fc815c-da0d-4681-bd91-c9162f51c3d8" Feb 13 19:27:34.833954 kubelet[2692]: E0213 19:27:34.833892 2692 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:27:34.833954 kubelet[2692]: E0213 19:27:34.833945 2692 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97"} Feb 13 19:27:34.834041 kubelet[2692]: E0213 19:27:34.833988 2692 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fc34767-db27-421e-861b-6bd72627f37b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:27:34.834041 kubelet[2692]: E0213 19:27:34.834007 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fc34767-db27-421e-861b-6bd72627f37b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-554b9784b8-r6m9z" podUID="7fc34767-db27-421e-861b-6bd72627f37b" Feb 13 19:27:34.834610 containerd[1523]: time="2025-02-13T19:27:34.834575854Z" level=error msg="StopPodSandbox for \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\" failed" error="failed to destroy network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.834769 kubelet[2692]: E0213 19:27:34.834736 2692 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:27:34.834820 kubelet[2692]: E0213 19:27:34.834776 2692 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527"} Feb 13 19:27:34.834820 kubelet[2692]: E0213 19:27:34.834803 2692 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e9e0e70-1c2f-4d7c-8d2d-f775675262e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:27:34.834896 kubelet[2692]: E0213 19:27:34.834819 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e9e0e70-1c2f-4d7c-8d2d-f775675262e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dshsv" podUID="1e9e0e70-1c2f-4d7c-8d2d-f775675262e1" Feb 13 19:27:34.839096 containerd[1523]: time="2025-02-13T19:27:34.839058320Z" level=error msg="StopPodSandbox for 
\"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\" failed" error="failed to destroy network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.839301 kubelet[2692]: E0213 19:27:34.839251 2692 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:27:34.839301 kubelet[2692]: E0213 19:27:34.839295 2692 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8"} Feb 13 19:27:34.839382 kubelet[2692]: E0213 19:27:34.839319 2692 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6023b2dc-e788-490d-952a-daba9fbad29a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:27:34.839382 kubelet[2692]: E0213 19:27:34.839337 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6023b2dc-e788-490d-952a-daba9fbad29a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd8b599d7-g4h4n" podUID="6023b2dc-e788-490d-952a-daba9fbad29a" Feb 13 19:27:34.839690 containerd[1523]: time="2025-02-13T19:27:34.839660633Z" level=error msg="StopPodSandbox for \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\" failed" error="failed to destroy network for sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:34.839858 kubelet[2692]: E0213 19:27:34.839824 2692 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:27:34.839907 kubelet[2692]: E0213 19:27:34.839865 2692 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30"} Feb 13 19:27:34.839907 kubelet[2692]: E0213 19:27:34.839889 2692 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"623df41b-21a5-4acd-90d5-1d14fa054355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:27:34.839990 kubelet[2692]: E0213 19:27:34.839911 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"623df41b-21a5-4acd-90d5-1d14fa054355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hzzls" podUID="623df41b-21a5-4acd-90d5-1d14fa054355" Feb 13 19:27:35.028037 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527-shm.mount: Deactivated successfully. Feb 13 19:27:35.028175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30-shm.mount: Deactivated successfully. Feb 13 19:27:35.028254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46-shm.mount: Deactivated successfully. Feb 13 19:27:35.028336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97-shm.mount: Deactivated successfully. Feb 13 19:27:35.028416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8-shm.mount: Deactivated successfully. 
Feb 13 19:27:35.678429 containerd[1523]: time="2025-02-13T19:27:35.678386700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7x2p,Uid:046604b2-014e-4614-a6a9-a156d305f1ec,Namespace:calico-system,Attempt:0,}" Feb 13 19:27:35.771708 containerd[1523]: time="2025-02-13T19:27:35.771651614Z" level=error msg="Failed to destroy network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:35.772602 containerd[1523]: time="2025-02-13T19:27:35.772541079Z" level=error msg="encountered an error cleaning up failed sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:35.772668 containerd[1523]: time="2025-02-13T19:27:35.772629409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7x2p,Uid:046604b2-014e-4614-a6a9-a156d305f1ec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:35.772906 kubelet[2692]: E0213 19:27:35.772866 2692 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:35.773223 kubelet[2692]: E0213 19:27:35.772928 2692 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7x2p" Feb 13 19:27:35.773223 kubelet[2692]: E0213 19:27:35.772948 2692 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7x2p" Feb 13 19:27:35.773223 kubelet[2692]: E0213 19:27:35.772996 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7x2p_calico-system(046604b2-014e-4614-a6a9-a156d305f1ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7x2p_calico-system(046604b2-014e-4614-a6a9-a156d305f1ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7x2p" podUID="046604b2-014e-4614-a6a9-a156d305f1ec" Feb 13 19:27:35.775431 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8-shm.mount: Deactivated successfully. Feb 13 19:27:35.786858 kubelet[2692]: I0213 19:27:35.786698 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:27:35.788697 containerd[1523]: time="2025-02-13T19:27:35.788493633Z" level=info msg="StopPodSandbox for \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\"" Feb 13 19:27:35.788697 containerd[1523]: time="2025-02-13T19:27:35.788655252Z" level=info msg="Ensure that sandbox 6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8 in task-service has been cleanup successfully" Feb 13 19:27:35.831650 containerd[1523]: time="2025-02-13T19:27:35.831415314Z" level=error msg="StopPodSandbox for \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\" failed" error="failed to destroy network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:27:35.832414 kubelet[2692]: E0213 19:27:35.832369 2692 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:27:35.832507 kubelet[2692]: E0213 19:27:35.832422 2692 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8"} Feb 13 19:27:35.832507 kubelet[2692]: E0213 19:27:35.832497 2692 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"046604b2-014e-4614-a6a9-a156d305f1ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:27:35.832600 kubelet[2692]: E0213 19:27:35.832520 2692 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"046604b2-014e-4614-a6a9-a156d305f1ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-c7x2p" podUID="046604b2-014e-4614-a6a9-a156d305f1ec" Feb 13 19:27:35.833165 systemd[1]: Started sshd@7-10.0.0.8:22-10.0.0.1:54678.service - OpenSSH per-connection server daemon (10.0.0.1:54678). Feb 13 19:27:35.887079 sshd[3841]: Accepted publickey for core from 10.0.0.1 port 54678 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:35.889718 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:35.894268 systemd-logind[1501]: New session 8 of user core. Feb 13 19:27:35.900475 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:27:36.072620 sshd[3841]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:36.077398 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:54678.service: Deactivated successfully. Feb 13 19:27:36.080065 systemd-logind[1501]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:27:36.080289 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:27:36.081933 systemd-logind[1501]: Removed session 8. Feb 13 19:27:37.453362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330127992.mount: Deactivated successfully. Feb 13 19:27:37.617675 containerd[1523]: time="2025-02-13T19:27:37.617187950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:37.618341 containerd[1523]: time="2025-02-13T19:27:37.618243226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:27:37.619245 containerd[1523]: time="2025-02-13T19:27:37.619184130Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:37.620827 containerd[1523]: time="2025-02-13T19:27:37.620766623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:37.621588 containerd[1523]: time="2025-02-13T19:27:37.621548549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.85628947s" Feb 13 19:27:37.621630 containerd[1523]: time="2025-02-13T19:27:37.621589154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:27:37.631020 containerd[1523]: time="2025-02-13T19:27:37.630092808Z" level=info msg="CreateContainer within sandbox \"c63a16504868d7008f177beaba4580b4943ef6486e34921bb91b3e0e1db567cd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:27:37.644580 containerd[1523]: time="2025-02-13T19:27:37.644522953Z" level=info msg="CreateContainer within sandbox \"c63a16504868d7008f177beaba4580b4943ef6486e34921bb91b3e0e1db567cd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"efbd314205b10a34038a2c2b4fec66b1b8d2b052ec683a2a2059d54b72dbdbd1\"" Feb 13 19:27:37.645184 containerd[1523]: time="2025-02-13T19:27:37.645158943Z" level=info msg="StartContainer for 
\"efbd314205b10a34038a2c2b4fec66b1b8d2b052ec683a2a2059d54b72dbdbd1\"" Feb 13 19:27:37.832481 containerd[1523]: time="2025-02-13T19:27:37.832025907Z" level=info msg="StartContainer for \"efbd314205b10a34038a2c2b4fec66b1b8d2b052ec683a2a2059d54b72dbdbd1\" returns successfully" Feb 13 19:27:37.838030 kubelet[2692]: E0213 19:27:37.837110 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:37.894794 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:27:37.894902 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:27:38.852944 kubelet[2692]: I0213 19:27:38.852835 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:27:38.853651 kubelet[2692]: E0213 19:27:38.853617 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:39.732910 kubelet[2692]: I0213 19:27:39.732856 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:27:39.733895 kubelet[2692]: E0213 19:27:39.733487 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:39.759111 kubelet[2692]: I0213 19:27:39.758939 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2jvbb" podStartSLOduration=3.205897427 podStartE2EDuration="12.758920333s" podCreationTimestamp="2025-02-13 19:27:27 +0000 UTC" firstStartedPulling="2025-02-13 19:27:28.069417101 +0000 UTC m=+23.496289687" lastFinishedPulling="2025-02-13 19:27:37.622440007 +0000 UTC m=+33.049312593" observedRunningTime="2025-02-13 19:27:37.862256747 +0000 UTC m=+33.289129333" watchObservedRunningTime="2025-02-13 19:27:39.758920333 +0000 UTC m=+35.185792919" Feb 13 19:27:39.856071 kubelet[2692]: E0213 19:27:39.856002 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:40.456009 kernel: bpftool[4089]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:27:40.596300 systemd-networkd[1230]: vxlan.calico: Link UP Feb 13 19:27:40.596309 systemd-networkd[1230]: vxlan.calico: Gained carrier Feb 13 19:27:41.083220 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:54692.service - OpenSSH per-connection server daemon (10.0.0.1:54692). Feb 13 19:27:41.119614 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 54692 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:41.121565 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:41.125994 systemd-logind[1501]: New session 9 of user core. Feb 13 19:27:41.138219 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:27:41.269220 sshd[4161]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:41.273484 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:54692.service: Deactivated successfully. Feb 13 19:27:41.276240 systemd-logind[1501]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:27:41.276292 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 13 19:27:41.277608 systemd-logind[1501]: Removed session 9. Feb 13 19:27:42.383187 systemd-networkd[1230]: vxlan.calico: Gained IPv6LL Feb 13 19:27:46.284321 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:58154.service - OpenSSH per-connection server daemon (10.0.0.1:58154). Feb 13 19:27:46.329221 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 58154 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:46.331120 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:46.335575 systemd-logind[1501]: New session 10 of user core. Feb 13 19:27:46.352337 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:27:46.505509 sshd[4189]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:46.518318 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:58170.service - OpenSSH per-connection server daemon (10.0.0.1:58170). Feb 13 19:27:46.518763 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:58154.service: Deactivated successfully. Feb 13 19:27:46.523420 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:27:46.525689 systemd-logind[1501]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:27:46.526859 systemd-logind[1501]: Removed session 10. Feb 13 19:27:46.553686 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 58170 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:46.555349 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:46.561125 systemd-logind[1501]: New session 11 of user core. Feb 13 19:27:46.576057 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:27:46.676820 containerd[1523]: time="2025-02-13T19:27:46.676776650Z" level=info msg="StopPodSandbox for \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\"" Feb 13 19:27:46.779468 sshd[4203]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:46.792332 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:58174.service - OpenSSH per-connection server daemon (10.0.0.1:58174). Feb 13 19:27:46.792767 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:58170.service: Deactivated successfully. Feb 13 19:27:46.804416 systemd-logind[1501]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:27:46.804554 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:27:46.812614 systemd-logind[1501]: Removed session 11. Feb 13 19:27:46.853555 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 58174 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:46.856031 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:46.861367 systemd-logind[1501]: New session 12 of user core. Feb 13 19:27:46.874435 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.804 [INFO][4232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.805 [INFO][4232] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" iface="eth0" netns="/var/run/netns/cni-890ba415-5745-c5bb-7575-199fb089f2f0" Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.807 [INFO][4232] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" iface="eth0" netns="/var/run/netns/cni-890ba415-5745-c5bb-7575-199fb089f2f0" Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.808 [INFO][4232] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" iface="eth0" netns="/var/run/netns/cni-890ba415-5745-c5bb-7575-199fb089f2f0" Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.808 [INFO][4232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.808 [INFO][4232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.939 [INFO][4244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" HandleID="k8s-pod-network.cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.939 [INFO][4244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.939 [INFO][4244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.950 [WARNING][4244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" HandleID="k8s-pod-network.cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.950 [INFO][4244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" HandleID="k8s-pod-network.cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.952 [INFO][4244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:27:46.957273 containerd[1523]: 2025-02-13 19:27:46.955 [INFO][4232] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:27:46.957861 containerd[1523]: time="2025-02-13T19:27:46.957595891Z" level=info msg="TearDown network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\" successfully" Feb 13 19:27:46.957861 containerd[1523]: time="2025-02-13T19:27:46.957641495Z" level=info msg="StopPodSandbox for \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\" returns successfully" Feb 13 19:27:46.958279 kubelet[2692]: E0213 19:27:46.958250 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:46.960305 containerd[1523]: time="2025-02-13T19:27:46.959185707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dshsv,Uid:1e9e0e70-1c2f-4d7c-8d2d-f775675262e1,Namespace:kube-system,Attempt:1,}" Feb 13 19:27:46.959918 systemd[1]: run-netns-cni\x2d890ba415\x2d5745\x2dc5bb\x2d7575\x2d199fb089f2f0.mount: Deactivated successfully. Feb 13 19:27:47.134437 sshd[4240]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:47.138474 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:58174.service: Deactivated successfully. Feb 13 19:27:47.140958 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:27:47.141502 systemd-logind[1501]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:27:47.142521 systemd-logind[1501]: Removed session 12. Feb 13 19:27:47.231644 systemd-networkd[1230]: cali0d399a20e75: Link UP Feb 13 19:27:47.232041 systemd-networkd[1230]: cali0d399a20e75: Gained carrier Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.125 [INFO][4262] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0 coredns-7db6d8ff4d- kube-system 1e9e0e70-1c2f-4d7c-8d2d-f775675262e1 875 0 2025-02-13 19:27:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-dshsv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0d399a20e75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dshsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dshsv-" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.126 [INFO][4262] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dshsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.159 [INFO][4275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" HandleID="k8s-pod-network.0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.181 [INFO][4275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" 
HandleID="k8s-pod-network.0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002db2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-dshsv", "timestamp":"2025-02-13 19:27:47.159551644 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.182 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.182 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.182 [INFO][4275] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.184 [INFO][4275] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" host="localhost" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.190 [INFO][4275] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.195 [INFO][4275] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.197 [INFO][4275] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.200 [INFO][4275] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.200 [INFO][4275] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" host="localhost" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.202 [INFO][4275] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.207 [INFO][4275] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" host="localhost" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.223 [INFO][4275] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" host="localhost" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.224 [INFO][4275] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" host="localhost" Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.224 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:27:47.256178 containerd[1523]: 2025-02-13 19:27:47.224 [INFO][4275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" HandleID="k8s-pod-network.0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:47.262274 containerd[1523]: 2025-02-13 19:27:47.226 [INFO][4262] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dshsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e9e0e70-1c2f-4d7c-8d2d-f775675262e1", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-dshsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d399a20e75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:47.262274 containerd[1523]: 2025-02-13 19:27:47.226 [INFO][4262] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dshsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:47.262274 containerd[1523]: 2025-02-13 19:27:47.226 [INFO][4262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d399a20e75 ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dshsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:47.262274 containerd[1523]: 2025-02-13 19:27:47.232 [INFO][4262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dshsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:47.262274 containerd[1523]: 2025-02-13 19:27:47.236 
[INFO][4262] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dshsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e9e0e70-1c2f-4d7c-8d2d-f775675262e1", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f", Pod:"coredns-7db6d8ff4d-dshsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d399a20e75", MAC:"12:46:f3:1f:d1:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:47.262274 containerd[1523]: 2025-02-13 19:27:47.253 [INFO][4262] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dshsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:27:47.289824 containerd[1523]: time="2025-02-13T19:27:47.289375569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:47.289824 containerd[1523]: time="2025-02-13T19:27:47.289458576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:47.289824 containerd[1523]: time="2025-02-13T19:27:47.289518941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:47.289824 containerd[1523]: time="2025-02-13T19:27:47.289673234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:47.316409 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:27:47.338501 containerd[1523]: time="2025-02-13T19:27:47.338463310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dshsv,Uid:1e9e0e70-1c2f-4d7c-8d2d-f775675262e1,Namespace:kube-system,Attempt:1,} returns sandbox id \"0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f\"" Feb 13 19:27:47.339532 kubelet[2692]: E0213 19:27:47.339507 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:47.344219 containerd[1523]: time="2025-02-13T19:27:47.344028295Z" level=info msg="CreateContainer within sandbox \"0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:27:47.380245 containerd[1523]: time="2025-02-13T19:27:47.380185235Z" level=info msg="CreateContainer within sandbox \"0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e81d66f7c3072c7ebea1120d35d22f6da38c32aff5c0f44fe3fe732bb9784076\"" Feb 13 19:27:47.382051 containerd[1523]: time="2025-02-13T19:27:47.380920777Z" level=info msg="StartContainer for \"e81d66f7c3072c7ebea1120d35d22f6da38c32aff5c0f44fe3fe732bb9784076\"" Feb 13 19:27:47.441075 containerd[1523]: time="2025-02-13T19:27:47.440825861Z" level=info msg="StartContainer for \"e81d66f7c3072c7ebea1120d35d22f6da38c32aff5c0f44fe3fe732bb9784076\" returns successfully" Feb 13 19:27:47.676280 containerd[1523]: time="2025-02-13T19:27:47.676048550Z" level=info msg="StopPodSandbox for \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\"" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.737 [INFO][4401] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.737 [INFO][4401] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" iface="eth0" netns="/var/run/netns/cni-df9095ea-6561-5a12-84a9-691fdd10bdfc" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.738 [INFO][4401] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" iface="eth0" netns="/var/run/netns/cni-df9095ea-6561-5a12-84a9-691fdd10bdfc" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.738 [INFO][4401] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" iface="eth0" netns="/var/run/netns/cni-df9095ea-6561-5a12-84a9-691fdd10bdfc" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.738 [INFO][4401] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.738 [INFO][4401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.764 [INFO][4409] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" HandleID="k8s-pod-network.7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.764 [INFO][4409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.764 [INFO][4409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.773 [WARNING][4409] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" HandleID="k8s-pod-network.7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.773 [INFO][4409] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" HandleID="k8s-pod-network.7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.775 [INFO][4409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:27:47.778991 containerd[1523]: 2025-02-13 19:27:47.777 [INFO][4401] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:27:47.779698 containerd[1523]: time="2025-02-13T19:27:47.779107479Z" level=info msg="TearDown network for sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\" successfully" Feb 13 19:27:47.779698 containerd[1523]: time="2025-02-13T19:27:47.779139602Z" level=info msg="StopPodSandbox for \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\" returns successfully" Feb 13 19:27:47.779747 kubelet[2692]: E0213 19:27:47.779556 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:47.780226 containerd[1523]: time="2025-02-13T19:27:47.780195130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzzls,Uid:623df41b-21a5-4acd-90d5-1d14fa054355,Namespace:kube-system,Attempt:1,}" Feb 13 19:27:47.890320 kubelet[2692]: E0213 19:27:47.890260 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:47.916840 kubelet[2692]: I0213 19:27:47.916705 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dshsv" podStartSLOduration=26.916686412 podStartE2EDuration="26.916686412s" podCreationTimestamp="2025-02-13 19:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:27:47.902800812 +0000 UTC m=+43.329673438" watchObservedRunningTime="2025-02-13 19:27:47.916686412 +0000 UTC m=+43.343558998" Feb 13 19:27:47.922439 systemd-networkd[1230]: calia782f573b65: Link UP Feb 13 19:27:47.923302 systemd-networkd[1230]: calia782f573b65: Gained carrier Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.833 [INFO][4420] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0 coredns-7db6d8ff4d- kube-system 623df41b-21a5-4acd-90d5-1d14fa054355 909 0 2025-02-13 19:27:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-hzzls eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia782f573b65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hzzls" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hzzls-" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.833 [INFO][4420] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hzzls" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.861 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" HandleID="k8s-pod-network.f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.950256 
containerd[1523]: 2025-02-13 19:27:47.874 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" HandleID="k8s-pod-network.f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000279590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-hzzls", "timestamp":"2025-02-13 19:27:47.861923477 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.874 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.874 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.874 [INFO][4434] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.876 [INFO][4434] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" host="localhost" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.880 [INFO][4434] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.884 [INFO][4434] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.886 [INFO][4434] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.889 [INFO][4434] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.889 [INFO][4434] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" host="localhost" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.891 [INFO][4434] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.905 [INFO][4434] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" host="localhost" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.911 [INFO][4434] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" host="localhost" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.911 [INFO][4434] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" host="localhost" Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.911 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
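The second CoreDNS pod, coredns-7db6d8ff4d-hzzls, repeats the same IPAM sequence against the same affinity block and is handed the next free address in it, 192.168.88.130, as logged above and in the endpoint data that follows; the block arithmetic is identical to the sketch shown for 192.168.88.129.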
Feb 13 19:27:47.950256 containerd[1523]: 2025-02-13 19:27:47.911 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" HandleID="k8s-pod-network.f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.950924 containerd[1523]: 2025-02-13 19:27:47.915 [INFO][4420] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hzzls" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"623df41b-21a5-4acd-90d5-1d14fa054355", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-hzzls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia782f573b65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:47.950924 containerd[1523]: 2025-02-13 19:27:47.916 [INFO][4420] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hzzls" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.950924 containerd[1523]: 2025-02-13 19:27:47.916 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia782f573b65 ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hzzls" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.950924 containerd[1523]: 2025-02-13 19:27:47.925 [INFO][4420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hzzls" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.950924 containerd[1523]: 2025-02-13 19:27:47.930 
[INFO][4420] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hzzls" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"623df41b-21a5-4acd-90d5-1d14fa054355", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc", Pod:"coredns-7db6d8ff4d-hzzls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia782f573b65", MAC:"42:de:5c:a2:a5:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:47.950924 containerd[1523]: 2025-02-13 19:27:47.947 [INFO][4420] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hzzls" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:27:47.962894 systemd[1]: run-netns-cni\x2ddf9095ea\x2d6561\x2d5a12\x2d84a9\x2d691fdd10bdfc.mount: Deactivated successfully. Feb 13 19:27:47.978338 containerd[1523]: time="2025-02-13T19:27:47.978243834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:47.978338 containerd[1523]: time="2025-02-13T19:27:47.978304839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:47.978338 containerd[1523]: time="2025-02-13T19:27:47.978329441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:47.979640 containerd[1523]: time="2025-02-13T19:27:47.978421449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:48.010493 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:27:48.032623 containerd[1523]: time="2025-02-13T19:27:48.032327495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzzls,Uid:623df41b-21a5-4acd-90d5-1d14fa054355,Namespace:kube-system,Attempt:1,} returns sandbox id \"f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc\"" Feb 13 19:27:48.033049 kubelet[2692]: E0213 19:27:48.033027 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:48.037661 containerd[1523]: time="2025-02-13T19:27:48.037057842Z" level=info msg="CreateContainer within sandbox \"f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:27:48.056309 containerd[1523]: time="2025-02-13T19:27:48.056254930Z" level=info msg="CreateContainer within sandbox \"f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa8ebba9c699c54daacc54b3cad483c74bfbaf3a246fd9e6ee7d8296420c43f2\"" Feb 13 19:27:48.056888 containerd[1523]: time="2025-02-13T19:27:48.056836418Z" level=info msg="StartContainer for \"aa8ebba9c699c54daacc54b3cad483c74bfbaf3a246fd9e6ee7d8296420c43f2\"" Feb 13 19:27:48.108958 containerd[1523]: time="2025-02-13T19:27:48.108907833Z" level=info msg="StartContainer for \"aa8ebba9c699c54daacc54b3cad483c74bfbaf3a246fd9e6ee7d8296420c43f2\" returns successfully" Feb 13 19:27:48.675378 containerd[1523]: time="2025-02-13T19:27:48.675307561Z" level=info msg="StopPodSandbox for \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\"" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.722 [INFO][4555] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.723 [INFO][4555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" iface="eth0" netns="/var/run/netns/cni-0ddb2867-02f7-67ba-0136-681bf1275ef3" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.723 [INFO][4555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" iface="eth0" netns="/var/run/netns/cni-0ddb2867-02f7-67ba-0136-681bf1275ef3" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.723 [INFO][4555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" iface="eth0" netns="/var/run/netns/cni-0ddb2867-02f7-67ba-0136-681bf1275ef3" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.723 [INFO][4555] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.723 [INFO][4555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.741 [INFO][4563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" HandleID="k8s-pod-network.63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.741 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.741 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.750 [WARNING][4563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" HandleID="k8s-pod-network.63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.750 [INFO][4563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" HandleID="k8s-pod-network.63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.751 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:27:48.755052 containerd[1523]: 2025-02-13 19:27:48.753 [INFO][4555] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:27:48.756684 containerd[1523]: time="2025-02-13T19:27:48.755891186Z" level=info msg="TearDown network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\" successfully" Feb 13 19:27:48.756684 containerd[1523]: time="2025-02-13T19:27:48.756480035Z" level=info msg="StopPodSandbox for \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\" returns successfully" Feb 13 19:27:48.757156 containerd[1523]: time="2025-02-13T19:27:48.757100045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b9784b8-gpzrw,Uid:d3fc815c-da0d-4681-bd91-c9162f51c3d8,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:27:48.847624 systemd-networkd[1230]: cali0d399a20e75: Gained IPv6LL Feb 13 19:27:48.877721 systemd-networkd[1230]: caliae9a19791bd: Link UP Feb 13 19:27:48.878485 systemd-networkd[1230]: caliae9a19791bd: Gained carrier Feb 13 19:27:48.904424 kubelet[2692]: E0213 19:27:48.902451 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.799 [INFO][4572] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0 calico-apiserver-554b9784b8- calico-apiserver d3fc815c-da0d-4681-bd91-c9162f51c3d8 935 0 2025-02-13 19:27:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:554b9784b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-554b9784b8-gpzrw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae9a19791bd [] []}} ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-gpzrw" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.799 [INFO][4572] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-gpzrw" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.826 [INFO][4585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" HandleID="k8s-pod-network.b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.838 [INFO][4585] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" HandleID="k8s-pod-network.b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d95c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-554b9784b8-gpzrw", "timestamp":"2025-02-13 19:27:48.826710494 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.838 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.838 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.838 [INFO][4585] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.840 [INFO][4585] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" host="localhost" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.845 [INFO][4585] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.850 [INFO][4585] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.852 [INFO][4585] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.855 [INFO][4585] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.855 [INFO][4585] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" host="localhost" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.857 [INFO][4585] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.862 [INFO][4585] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" host="localhost" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.871 [INFO][4585] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" host="localhost" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.871 [INFO][4585] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" host="localhost" Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.871 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:27:48.907828 containerd[1523]: 2025-02-13 19:27:48.871 [INFO][4585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" HandleID="k8s-pod-network.b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.915378 containerd[1523]: 2025-02-13 19:27:48.875 [INFO][4572] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-gpzrw" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0", GenerateName:"calico-apiserver-554b9784b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3fc815c-da0d-4681-bd91-c9162f51c3d8", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b9784b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-554b9784b8-gpzrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae9a19791bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:48.915378 containerd[1523]: 2025-02-13 19:27:48.875 [INFO][4572] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-gpzrw" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.915378 containerd[1523]: 2025-02-13 19:27:48.875 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae9a19791bd ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-gpzrw" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.915378 containerd[1523]: 2025-02-13 19:27:48.878 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-gpzrw" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.915378 containerd[1523]: 2025-02-13 19:27:48.879 [INFO][4572] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-gpzrw" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0", GenerateName:"calico-apiserver-554b9784b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3fc815c-da0d-4681-bd91-c9162f51c3d8", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b9784b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba", Pod:"calico-apiserver-554b9784b8-gpzrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae9a19791bd", MAC:"ba:69:b2:ae:98:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:48.915378 containerd[1523]: 2025-02-13 19:27:48.896 [INFO][4572] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-gpzrw" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:27:48.916688 kubelet[2692]: E0213 19:27:48.911918 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:48.930296 kubelet[2692]: I0213 19:27:48.929727 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hzzls" podStartSLOduration=27.929704511 podStartE2EDuration="27.929704511s" podCreationTimestamp="2025-02-13 19:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:27:48.923074729 +0000 UTC m=+44.349947355" watchObservedRunningTime="2025-02-13 19:27:48.929704511 +0000 UTC m=+44.356577097" Feb 13 19:27:48.964232 systemd[1]: run-netns-cni\x2d0ddb2867\x2d02f7\x2d67ba\x2d0136\x2d681bf1275ef3.mount: Deactivated successfully. Feb 13 19:27:48.975839 containerd[1523]: time="2025-02-13T19:27:48.974923606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:48.975839 containerd[1523]: time="2025-02-13T19:27:48.975812039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:48.975839 containerd[1523]: time="2025-02-13T19:27:48.975825440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:48.976296 containerd[1523]: time="2025-02-13T19:27:48.975936729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:49.009532 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:27:49.041811 systemd-networkd[1230]: calia782f573b65: Gained IPv6LL Feb 13 19:27:49.054028 containerd[1523]: time="2025-02-13T19:27:49.053947054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b9784b8-gpzrw,Uid:d3fc815c-da0d-4681-bd91-c9162f51c3d8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba\"" Feb 13 19:27:49.055855 containerd[1523]: time="2025-02-13T19:27:49.055826805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:27:49.680797 containerd[1523]: time="2025-02-13T19:27:49.676581039Z" level=info msg="StopPodSandbox for \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\"" Feb 13 19:27:49.680797 containerd[1523]: time="2025-02-13T19:27:49.677910106Z" level=info msg="StopPodSandbox for \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\"" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.735 [INFO][4684] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.735 [INFO][4684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" iface="eth0" netns="/var/run/netns/cni-92524800-792d-6370-fc54-465b13359e91" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.738 [INFO][4684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" iface="eth0" netns="/var/run/netns/cni-92524800-792d-6370-fc54-465b13359e91" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.738 [INFO][4684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" iface="eth0" netns="/var/run/netns/cni-92524800-792d-6370-fc54-465b13359e91" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.738 [INFO][4684] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.738 [INFO][4684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.762 [INFO][4695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" HandleID="k8s-pod-network.197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.763 [INFO][4695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.763 [INFO][4695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.774 [WARNING][4695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" HandleID="k8s-pod-network.197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.774 [INFO][4695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" HandleID="k8s-pod-network.197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.776 [INFO][4695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:27:49.781050 containerd[1523]: 2025-02-13 19:27:49.779 [INFO][4684] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:27:49.782999 containerd[1523]: time="2025-02-13T19:27:49.782949951Z" level=info msg="TearDown network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\" successfully" Feb 13 19:27:49.782999 containerd[1523]: time="2025-02-13T19:27:49.782998075Z" level=info msg="StopPodSandbox for \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\" returns successfully" Feb 13 19:27:49.783543 systemd[1]: run-netns-cni\x2d92524800\x2d792d\x2d6370\x2dfc54\x2d465b13359e91.mount: Deactivated successfully. Feb 13 19:27:49.789676 containerd[1523]: time="2025-02-13T19:27:49.789624765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8b599d7-g4h4n,Uid:6023b2dc-e788-490d-952a-daba9fbad29a,Namespace:calico-system,Attempt:1,}" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.742 [INFO][4675] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.742 [INFO][4675] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" iface="eth0" netns="/var/run/netns/cni-ad710518-f27e-0402-2c15-f84ee19e86d4" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.742 [INFO][4675] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" iface="eth0" netns="/var/run/netns/cni-ad710518-f27e-0402-2c15-f84ee19e86d4" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.743 [INFO][4675] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" iface="eth0" netns="/var/run/netns/cni-ad710518-f27e-0402-2c15-f84ee19e86d4" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.743 [INFO][4675] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.743 [INFO][4675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.769 [INFO][4700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" HandleID="k8s-pod-network.31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.769 [INFO][4700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.776 [INFO][4700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.789 [WARNING][4700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" HandleID="k8s-pod-network.31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.789 [INFO][4700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" HandleID="k8s-pod-network.31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.791 [INFO][4700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:27:49.795718 containerd[1523]: 2025-02-13 19:27:49.794 [INFO][4675] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:27:49.796232 containerd[1523]: time="2025-02-13T19:27:49.796064521Z" level=info msg="TearDown network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\" successfully" Feb 13 19:27:49.796232 containerd[1523]: time="2025-02-13T19:27:49.796092883Z" level=info msg="StopPodSandbox for \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\" returns successfully" Feb 13 19:27:49.797634 containerd[1523]: time="2025-02-13T19:27:49.797444151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b9784b8-r6m9z,Uid:7fc34767-db27-421e-861b-6bd72627f37b,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:27:49.800551 systemd[1]: run-netns-cni\x2dad710518\x2df27e\x2d0402\x2d2c15\x2df84ee19e86d4.mount: Deactivated successfully. Feb 13 19:27:49.906055 kubelet[2692]: E0213 19:27:49.906021 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:49.906609 kubelet[2692]: E0213 19:27:49.906417 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:49.970680 systemd-networkd[1230]: calie97db214025: Link UP Feb 13 19:27:49.970845 systemd-networkd[1230]: calie97db214025: Gained carrier Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.863 [INFO][4715] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0 calico-apiserver-554b9784b8- calico-apiserver 7fc34767-db27-421e-861b-6bd72627f37b 955 0 2025-02-13 19:27:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:554b9784b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-554b9784b8-r6m9z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie97db214025 [] []}} ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-r6m9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.864 [INFO][4715] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-r6m9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.896 [INFO][4737] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" HandleID="k8s-pod-network.7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.912 [INFO][4737] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" HandleID="k8s-pod-network.7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" 
Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e2a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-554b9784b8-r6m9z", "timestamp":"2025-02-13 19:27:49.896386749 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.913 [INFO][4737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.914 [INFO][4737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.914 [INFO][4737] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.916 [INFO][4737] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" host="localhost" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.921 [INFO][4737] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.927 [INFO][4737] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.929 [INFO][4737] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.932 [INFO][4737] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.932 [INFO][4737] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" host="localhost" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.934 [INFO][4737] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05 Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.939 [INFO][4737] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" host="localhost" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.948 [INFO][4737] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" host="localhost" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.948 [INFO][4737] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" host="localhost" Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.948 [INFO][4737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:27:50.005460 containerd[1523]: 2025-02-13 19:27:49.948 [INFO][4737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" HandleID="k8s-pod-network.7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:50.006857 containerd[1523]: 2025-02-13 19:27:49.956 [INFO][4715] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-r6m9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0", GenerateName:"calico-apiserver-554b9784b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fc34767-db27-421e-861b-6bd72627f37b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b9784b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-554b9784b8-r6m9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie97db214025", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:50.006857 containerd[1523]: 2025-02-13 19:27:49.956 [INFO][4715] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-r6m9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:50.006857 containerd[1523]: 2025-02-13 19:27:49.956 [INFO][4715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie97db214025 ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-r6m9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:50.006857 containerd[1523]: 2025-02-13 19:27:49.966 [INFO][4715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-r6m9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:50.006857 containerd[1523]: 2025-02-13 19:27:49.968 [INFO][4715] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-r6m9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0", GenerateName:"calico-apiserver-554b9784b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fc34767-db27-421e-861b-6bd72627f37b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b9784b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05", Pod:"calico-apiserver-554b9784b8-r6m9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie97db214025", MAC:"2e:23:20:7f:fa:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:50.006857 containerd[1523]: 2025-02-13 19:27:49.997 [INFO][4715] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05" Namespace="calico-apiserver" Pod="calico-apiserver-554b9784b8-r6m9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:27:50.038458 containerd[1523]: time="2025-02-13T19:27:50.036745964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:50.038458 containerd[1523]: time="2025-02-13T19:27:50.036812489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:50.038458 containerd[1523]: time="2025-02-13T19:27:50.036844731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:50.038458 containerd[1523]: time="2025-02-13T19:27:50.036933138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:50.047946 systemd-networkd[1230]: cali88c9055c117: Link UP Feb 13 19:27:50.048401 systemd-networkd[1230]: cali88c9055c117: Gained carrier Feb 13 19:27:50.063726 systemd[1]: run-containerd-runc-k8s.io-7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05-runc.NiQMuI.mount: Deactivated successfully. 
Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:49.881 [INFO][4725] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0 calico-kube-controllers-cd8b599d7- calico-system 6023b2dc-e788-490d-952a-daba9fbad29a 954 0 2025-02-13 19:27:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cd8b599d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-cd8b599d7-g4h4n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali88c9055c117 [] []}} ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Namespace="calico-system" Pod="calico-kube-controllers-cd8b599d7-g4h4n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:49.881 [INFO][4725] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Namespace="calico-system" Pod="calico-kube-controllers-cd8b599d7-g4h4n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:49.925 [INFO][4743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" HandleID="k8s-pod-network.f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:49.937 [INFO][4743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" HandleID="k8s-pod-network.f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-cd8b599d7-g4h4n", "timestamp":"2025-02-13 19:27:49.925166892 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:49.937 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:49.952 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
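
The WorkloadEndpoint names that recur through these lines (for example "localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0") follow a visible pattern: node name, the literal "k8s", pod name, and interface, joined with "-", with any "-" inside an individual part doubled so the separators stay unambiguous. The sketch below simply reproduces that pattern as observed in this log; it is an inference from the names themselves, not taken from Calico source.

    package main

    import (
        "fmt"
        "strings"
    )

    // workloadEndpointName rebuilds the endpoint names seen in the log from their
    // parts, doubling any "-" inside a part so it cannot be confused with the
    // separator. Pattern inferred from this log's names only.
    func workloadEndpointName(node, pod, iface string) string {
        esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
        return esc(node) + "-k8s-" + esc(pod) + "-" + esc(iface)
    }

    func main() {
        fmt.Println(workloadEndpointName("localhost", "coredns-7db6d8ff4d-hzzls", "eth0"))
        // localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0
        fmt.Println(workloadEndpointName("localhost", "calico-apiserver-554b9784b8-gpzrw", "eth0"))
        // localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0
    }
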
Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:49.952 [INFO][4743] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:49.962 [INFO][4743] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" host="localhost" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:49.993 [INFO][4743] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.008 [INFO][4743] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.011 [INFO][4743] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.018 [INFO][4743] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.018 [INFO][4743] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" host="localhost" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.021 [INFO][4743] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2 Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.027 [INFO][4743] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" host="localhost" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.037 [INFO][4743] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" host="localhost" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.038 [INFO][4743] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" host="localhost" Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.038 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:27:50.069733 containerd[1523]: 2025-02-13 19:27:50.038 [INFO][4743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" HandleID="k8s-pod-network.f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:50.071695 containerd[1523]: 2025-02-13 19:27:50.042 [INFO][4725] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Namespace="calico-system" Pod="calico-kube-controllers-cd8b599d7-g4h4n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0", GenerateName:"calico-kube-controllers-cd8b599d7-", Namespace:"calico-system", SelfLink:"", UID:"6023b2dc-e788-490d-952a-daba9fbad29a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8b599d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-cd8b599d7-g4h4n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88c9055c117", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:50.071695 containerd[1523]: 2025-02-13 19:27:50.042 [INFO][4725] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Namespace="calico-system" Pod="calico-kube-controllers-cd8b599d7-g4h4n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:50.071695 containerd[1523]: 2025-02-13 19:27:50.042 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88c9055c117 ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Namespace="calico-system" Pod="calico-kube-controllers-cd8b599d7-g4h4n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:50.071695 containerd[1523]: 2025-02-13 19:27:50.049 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Namespace="calico-system" Pod="calico-kube-controllers-cd8b599d7-g4h4n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:50.071695 containerd[1523]: 2025-02-13 19:27:50.049 [INFO][4725] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Namespace="calico-system" Pod="calico-kube-controllers-cd8b599d7-g4h4n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0", GenerateName:"calico-kube-controllers-cd8b599d7-", Namespace:"calico-system", SelfLink:"", UID:"6023b2dc-e788-490d-952a-daba9fbad29a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8b599d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2", Pod:"calico-kube-controllers-cd8b599d7-g4h4n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88c9055c117", MAC:"d2:aa:84:2b:de:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:50.071695 containerd[1523]: 2025-02-13 19:27:50.061 [INFO][4725] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2" Namespace="calico-system" Pod="calico-kube-controllers-cd8b599d7-g4h4n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:27:50.082032 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:27:50.098950 containerd[1523]: time="2025-02-13T19:27:50.092308921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:50.098950 containerd[1523]: time="2025-02-13T19:27:50.092369966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:50.098950 containerd[1523]: time="2025-02-13T19:27:50.092384927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:50.098950 containerd[1523]: time="2025-02-13T19:27:50.092477375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:50.113669 containerd[1523]: time="2025-02-13T19:27:50.113625953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b9784b8-r6m9z,Uid:7fc34767-db27-421e-861b-6bd72627f37b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05\"" Feb 13 19:27:50.151345 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:27:50.177441 containerd[1523]: time="2025-02-13T19:27:50.177385914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8b599d7-g4h4n,Uid:6023b2dc-e788-490d-952a-daba9fbad29a,Namespace:calico-system,Attempt:1,} returns sandbox id \"f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2\"" Feb 13 19:27:50.621485 containerd[1523]: time="2025-02-13T19:27:50.621429940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:50.621925 containerd[1523]: time="2025-02-13T19:27:50.621881056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 19:27:50.622785 containerd[1523]: time="2025-02-13T19:27:50.622720961Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:50.624934 containerd[1523]: time="2025-02-13T19:27:50.624871010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:50.625759 containerd[1523]: time="2025-02-13T19:27:50.625724037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.569860949s" Feb 13 19:27:50.625836 containerd[1523]: time="2025-02-13T19:27:50.625762520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 19:27:50.628200 containerd[1523]: time="2025-02-13T19:27:50.628002256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:27:50.629097 containerd[1523]: time="2025-02-13T19:27:50.629066699Z" level=info msg="CreateContainer within sandbox \"b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:27:50.787926 containerd[1523]: time="2025-02-13T19:27:50.787880155Z" level=info msg="CreateContainer within sandbox \"b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0b6c676d4c7ddaf3df4ed73617dae620d0d6b0428e8a10529bdf25d158ff5362\"" Feb 13 19:27:50.788445 containerd[1523]: time="2025-02-13T19:27:50.788420637Z" level=info msg="StartContainer for \"0b6c676d4c7ddaf3df4ed73617dae620d0d6b0428e8a10529bdf25d158ff5362\"" Feb 13 19:27:50.897709 containerd[1523]: 
time="2025-02-13T19:27:50.895669689Z" level=info msg="StartContainer for \"0b6c676d4c7ddaf3df4ed73617dae620d0d6b0428e8a10529bdf25d158ff5362\" returns successfully" Feb 13 19:27:50.897928 systemd-networkd[1230]: caliae9a19791bd: Gained IPv6LL Feb 13 19:27:50.913697 kubelet[2692]: E0213 19:27:50.912791 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:50.932367 kubelet[2692]: I0213 19:27:50.931488 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-554b9784b8-gpzrw" podStartSLOduration=23.359411814 podStartE2EDuration="24.931467296s" podCreationTimestamp="2025-02-13 19:27:26 +0000 UTC" firstStartedPulling="2025-02-13 19:27:49.055288322 +0000 UTC m=+44.482160908" lastFinishedPulling="2025-02-13 19:27:50.627343804 +0000 UTC m=+46.054216390" observedRunningTime="2025-02-13 19:27:50.929257003 +0000 UTC m=+46.356129589" watchObservedRunningTime="2025-02-13 19:27:50.931467296 +0000 UTC m=+46.358339882" Feb 13 19:27:51.070523 containerd[1523]: time="2025-02-13T19:27:51.070451893Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:51.074245 containerd[1523]: time="2025-02-13T19:27:51.074204742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:27:51.076569 containerd[1523]: time="2025-02-13T19:27:51.076533361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 448.493502ms" Feb 13 19:27:51.076618 containerd[1523]: time="2025-02-13T19:27:51.076570484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 19:27:51.078051 containerd[1523]: time="2025-02-13T19:27:51.078018835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:27:51.080493 containerd[1523]: time="2025-02-13T19:27:51.080376817Z" level=info msg="CreateContainer within sandbox \"7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:27:51.151074 containerd[1523]: time="2025-02-13T19:27:51.150625621Z" level=info msg="CreateContainer within sandbox \"7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a1195584f88cba85779a6f347d15313e171e6f3e612fca0928ec2ee8a1a301bf\"" Feb 13 19:27:51.151340 containerd[1523]: time="2025-02-13T19:27:51.151308914Z" level=info msg="StartContainer for \"a1195584f88cba85779a6f347d15313e171e6f3e612fca0928ec2ee8a1a301bf\"" Feb 13 19:27:51.336784 containerd[1523]: time="2025-02-13T19:27:51.336665614Z" level=info msg="StartContainer for \"a1195584f88cba85779a6f347d15313e171e6f3e612fca0928ec2ee8a1a301bf\" returns successfully" Feb 13 19:27:51.663293 systemd-networkd[1230]: cali88c9055c117: Gained IPv6LL Feb 13 19:27:51.676356 containerd[1523]: time="2025-02-13T19:27:51.675960398Z" level=info 
msg="StopPodSandbox for \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\"" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.732 [INFO][4973] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.732 [INFO][4973] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" iface="eth0" netns="/var/run/netns/cni-5ff7111a-e7da-00c7-3740-fff77df7e1bb" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.733 [INFO][4973] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" iface="eth0" netns="/var/run/netns/cni-5ff7111a-e7da-00c7-3740-fff77df7e1bb" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.733 [INFO][4973] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" iface="eth0" netns="/var/run/netns/cni-5ff7111a-e7da-00c7-3740-fff77df7e1bb" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.733 [INFO][4973] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.733 [INFO][4973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.766 [INFO][4980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" HandleID="k8s-pod-network.6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.766 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.766 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.776 [WARNING][4980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" HandleID="k8s-pod-network.6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.776 [INFO][4980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" HandleID="k8s-pod-network.6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.781 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:27:51.785789 containerd[1523]: 2025-02-13 19:27:51.783 [INFO][4973] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:27:51.786813 containerd[1523]: time="2025-02-13T19:27:51.786433218Z" level=info msg="TearDown network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\" successfully" Feb 13 19:27:51.786813 containerd[1523]: time="2025-02-13T19:27:51.786464980Z" level=info msg="StopPodSandbox for \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\" returns successfully" Feb 13 19:27:51.787661 containerd[1523]: time="2025-02-13T19:27:51.787581546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7x2p,Uid:046604b2-014e-4614-a6a9-a156d305f1ec,Namespace:calico-system,Attempt:1,}" Feb 13 19:27:51.791291 systemd-networkd[1230]: calie97db214025: Gained IPv6LL Feb 13 19:27:51.917477 kubelet[2692]: I0213 19:27:51.916585 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:27:51.938016 kubelet[2692]: I0213 19:27:51.937923 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-554b9784b8-r6m9z" podStartSLOduration=24.977661954 podStartE2EDuration="25.937901471s" podCreationTimestamp="2025-02-13 19:27:26 +0000 UTC" firstStartedPulling="2025-02-13 19:27:50.116988577 +0000 UTC m=+45.543861163" lastFinishedPulling="2025-02-13 19:27:51.077228094 +0000 UTC m=+46.504100680" observedRunningTime="2025-02-13 19:27:51.93749788 +0000 UTC m=+47.364370466" watchObservedRunningTime="2025-02-13 19:27:51.937901471 +0000 UTC m=+47.364774057" Feb 13 19:27:51.951029 systemd-networkd[1230]: cali87efc7103dd: Link UP Feb 13 19:27:51.951955 systemd-networkd[1230]: cali87efc7103dd: Gained carrier Feb 13 19:27:51.967371 systemd[1]: run-netns-cni\x2d5ff7111a\x2de7da\x2d00c7\x2d3740\x2dfff77df7e1bb.mount: Deactivated successfully. 
Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.839 [INFO][4989] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c7x2p-eth0 csi-node-driver- calico-system 046604b2-014e-4614-a6a9-a156d305f1ec 986 0 2025-02-13 19:27:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c7x2p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali87efc7103dd [] []}} ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Namespace="calico-system" Pod="csi-node-driver-c7x2p" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7x2p-" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.840 [INFO][4989] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Namespace="calico-system" Pod="csi-node-driver-c7x2p" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.873 [INFO][5003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" HandleID="k8s-pod-network.b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.885 [INFO][5003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" HandleID="k8s-pod-network.b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293a50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c7x2p", "timestamp":"2025-02-13 19:27:51.873266378 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.885 [INFO][5003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.886 [INFO][5003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.886 [INFO][5003] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.888 [INFO][5003] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" host="localhost" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.893 [INFO][5003] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.901 [INFO][5003] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.904 [INFO][5003] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.907 [INFO][5003] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.907 [INFO][5003] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" host="localhost" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.909 [INFO][5003] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.914 [INFO][5003] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" host="localhost" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.939 [INFO][5003] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" host="localhost" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.939 [INFO][5003] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" host="localhost" Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.939 [INFO][5003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
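The IPAM lines above show the plugin taking the host-wide lock, confirming this node's affinity for block 192.168.88.128/26, and then claiming 192.168.88.134 for csi-node-driver-c7x2p. Purely as an illustration of that "next free address in an affine block" step, and not Calico's actual implementation, a sketch follows; the set of already-allocated addresses (.129 through .133) is an assumption consistent with the pod IPs seen elsewhere in this log.

```python
# Rough illustration (not Calico's code) of handing out the next free address
# from a block the host holds an affinity for. Addresses .129-.133 are assumed
# to be allocated already, so the next assignment is 192.168.88.134 -- the
# address the csi-node-driver pod receives above.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
allocated = {ipaddress.ip_address(f"192.168.88.{n}") for n in range(129, 134)}

def assign_next(block, allocated):
    for addr in block.hosts():  # skips the network and broadcast addresses
        if addr not in allocated:
            allocated.add(addr)
            return addr
    raise RuntimeError("block exhausted; a real IPAM would claim another block")

print(assign_next(block, allocated))  # 192.168.88.134
```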
Feb 13 19:27:51.977130 containerd[1523]: 2025-02-13 19:27:51.939 [INFO][5003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" HandleID="k8s-pod-network.b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:51.977907 containerd[1523]: 2025-02-13 19:27:51.947 [INFO][4989] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Namespace="calico-system" Pod="csi-node-driver-c7x2p" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7x2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c7x2p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"046604b2-014e-4614-a6a9-a156d305f1ec", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c7x2p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87efc7103dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:51.977907 containerd[1523]: 2025-02-13 19:27:51.947 [INFO][4989] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Namespace="calico-system" Pod="csi-node-driver-c7x2p" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:51.977907 containerd[1523]: 2025-02-13 19:27:51.947 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87efc7103dd ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Namespace="calico-system" Pod="csi-node-driver-c7x2p" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:51.977907 containerd[1523]: 2025-02-13 19:27:51.951 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Namespace="calico-system" Pod="csi-node-driver-c7x2p" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:51.977907 containerd[1523]: 2025-02-13 19:27:51.952 [INFO][4989] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Namespace="calico-system" Pod="csi-node-driver-c7x2p" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--c7x2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c7x2p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"046604b2-014e-4614-a6a9-a156d305f1ec", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d", Pod:"csi-node-driver-c7x2p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87efc7103dd", MAC:"02:b1:5e:26:a0:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:27:51.977907 containerd[1523]: 2025-02-13 19:27:51.973 [INFO][4989] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d" Namespace="calico-system" Pod="csi-node-driver-c7x2p" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:27:52.007698 containerd[1523]: time="2025-02-13T19:27:52.007570022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:52.007698 containerd[1523]: time="2025-02-13T19:27:52.007671189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:52.007698 containerd[1523]: time="2025-02-13T19:27:52.007683150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:52.007959 containerd[1523]: time="2025-02-13T19:27:52.007796959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:52.056792 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:27:52.070360 containerd[1523]: time="2025-02-13T19:27:52.070312601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7x2p,Uid:046604b2-014e-4614-a6a9-a156d305f1ec,Namespace:calico-system,Attempt:1,} returns sandbox id \"b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d\"" Feb 13 19:27:52.144302 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:58190.service - OpenSSH per-connection server daemon (10.0.0.1:58190). 
Feb 13 19:27:52.206035 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 58190 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:52.205942 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:52.219472 systemd-logind[1501]: New session 13 of user core. Feb 13 19:27:52.232500 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:27:52.543069 sshd[5069]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:52.557566 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:45936.service - OpenSSH per-connection server daemon (10.0.0.1:45936). Feb 13 19:27:52.558522 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:58190.service: Deactivated successfully. Feb 13 19:27:52.567385 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:27:52.571671 systemd-logind[1501]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:27:52.574156 systemd-logind[1501]: Removed session 13. Feb 13 19:27:52.612845 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 45936 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:52.614994 sshd[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:52.621689 systemd-logind[1501]: New session 14 of user core. Feb 13 19:27:52.627569 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:27:52.874097 containerd[1523]: time="2025-02-13T19:27:52.874027550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:52.875771 containerd[1523]: time="2025-02-13T19:27:52.875731879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 19:27:52.877694 containerd[1523]: time="2025-02-13T19:27:52.877653384Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:52.881676 containerd[1523]: time="2025-02-13T19:27:52.881631604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:52.883045 containerd[1523]: time="2025-02-13T19:27:52.882356579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.804302661s" Feb 13 19:27:52.883045 containerd[1523]: time="2025-02-13T19:27:52.882398462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 19:27:52.883705 containerd[1523]: time="2025-02-13T19:27:52.883677399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:27:52.894307 containerd[1523]: time="2025-02-13T19:27:52.894156911Z" level=info msg="CreateContainer within sandbox \"f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" 
Feb 13 19:27:52.919314 containerd[1523]: time="2025-02-13T19:27:52.918190366Z" level=info msg="CreateContainer within sandbox \"f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"937fce25d3840b4c6b0a2ac87cb3c3de648779fb57b77cedc6f81aaa50293e06\"" Feb 13 19:27:52.919314 containerd[1523]: time="2025-02-13T19:27:52.919277008Z" level=info msg="StartContainer for \"937fce25d3840b4c6b0a2ac87cb3c3de648779fb57b77cedc6f81aaa50293e06\"" Feb 13 19:27:52.923857 kubelet[2692]: I0213 19:27:52.923804 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:27:53.038443 containerd[1523]: time="2025-02-13T19:27:53.038394237Z" level=info msg="StartContainer for \"937fce25d3840b4c6b0a2ac87cb3c3de648779fb57b77cedc6f81aaa50293e06\" returns successfully" Feb 13 19:27:53.147463 sshd[5085]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:53.154240 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:45950.service - OpenSSH per-connection server daemon (10.0.0.1:45950). Feb 13 19:27:53.162086 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:45936.service: Deactivated successfully. Feb 13 19:27:53.166259 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:27:53.181933 systemd-logind[1501]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:27:53.184182 systemd-logind[1501]: Removed session 14. Feb 13 19:27:53.209093 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 45950 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:53.211720 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:53.219580 systemd-logind[1501]: New session 15 of user core. Feb 13 19:27:53.235855 systemd[1]: Started session-15.scope - Session 15 of User core. 
Feb 13 19:27:53.777194 systemd-networkd[1230]: cali87efc7103dd: Gained IPv6LL Feb 13 19:27:53.942111 kubelet[2692]: I0213 19:27:53.942042 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cd8b599d7-g4h4n" podStartSLOduration=24.238870468000002 podStartE2EDuration="26.942016146s" podCreationTimestamp="2025-02-13 19:27:27 +0000 UTC" firstStartedPulling="2025-02-13 19:27:50.18039015 +0000 UTC m=+45.607262736" lastFinishedPulling="2025-02-13 19:27:52.883535708 +0000 UTC m=+48.310408414" observedRunningTime="2025-02-13 19:27:53.94194494 +0000 UTC m=+49.368817526" watchObservedRunningTime="2025-02-13 19:27:53.942016146 +0000 UTC m=+49.368888732" Feb 13 19:27:54.127999 containerd[1523]: time="2025-02-13T19:27:54.127930949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:54.130002 containerd[1523]: time="2025-02-13T19:27:54.128509991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:27:54.130002 containerd[1523]: time="2025-02-13T19:27:54.129335531Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:54.131206 containerd[1523]: time="2025-02-13T19:27:54.131170065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:54.131969 containerd[1523]: time="2025-02-13T19:27:54.131934801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.248227679s" Feb 13 19:27:54.131969 containerd[1523]: time="2025-02-13T19:27:54.131963723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:27:54.135178 containerd[1523]: time="2025-02-13T19:27:54.135136515Z" level=info msg="CreateContainer within sandbox \"b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:27:54.150910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694247096.mount: Deactivated successfully. 
Feb 13 19:27:54.153186 containerd[1523]: time="2025-02-13T19:27:54.153143869Z" level=info msg="CreateContainer within sandbox \"b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5c6f0988a7942a4a280ff0652d2e91e4728774dc113e52d4f9c9cbde029c5c34\"" Feb 13 19:27:54.154206 containerd[1523]: time="2025-02-13T19:27:54.154173984Z" level=info msg="StartContainer for \"5c6f0988a7942a4a280ff0652d2e91e4728774dc113e52d4f9c9cbde029c5c34\"" Feb 13 19:27:54.227610 containerd[1523]: time="2025-02-13T19:27:54.227486775Z" level=info msg="StartContainer for \"5c6f0988a7942a4a280ff0652d2e91e4728774dc113e52d4f9c9cbde029c5c34\" returns successfully" Feb 13 19:27:54.230368 containerd[1523]: time="2025-02-13T19:27:54.230152970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:27:54.811313 sshd[5135]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:54.826151 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:45958.service - OpenSSH per-connection server daemon (10.0.0.1:45958). Feb 13 19:27:54.826614 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:45950.service: Deactivated successfully. Feb 13 19:27:54.834343 systemd-logind[1501]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:27:54.834375 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:27:54.840391 systemd-logind[1501]: Removed session 15. Feb 13 19:27:54.865035 sshd[5194]: Accepted publickey for core from 10.0.0.1 port 45958 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:54.866309 sshd[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:54.870396 systemd-logind[1501]: New session 16 of user core. Feb 13 19:27:54.881340 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:27:54.933146 kubelet[2692]: I0213 19:27:54.933112 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:27:55.205777 sshd[5194]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:55.217345 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:45974.service - OpenSSH per-connection server daemon (10.0.0.1:45974). Feb 13 19:27:55.217763 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:45958.service: Deactivated successfully. Feb 13 19:27:55.228056 systemd-logind[1501]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:27:55.229932 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:27:55.245293 systemd-logind[1501]: Removed session 16. Feb 13 19:27:55.268372 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 45974 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:27:55.270026 sshd[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:55.277223 systemd-logind[1501]: New session 17 of user core. Feb 13 19:27:55.283322 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 19:27:55.392443 kubelet[2692]: I0213 19:27:55.392101 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:27:55.393544 kubelet[2692]: E0213 19:27:55.393265 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:55.395286 containerd[1523]: time="2025-02-13T19:27:55.395196674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:55.396626 containerd[1523]: time="2025-02-13T19:27:55.396577013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:27:55.398615 containerd[1523]: time="2025-02-13T19:27:55.398580197Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:55.400597 containerd[1523]: time="2025-02-13T19:27:55.400557459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:55.401152 containerd[1523]: time="2025-02-13T19:27:55.401109339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.170917366s" Feb 13 19:27:55.401191 containerd[1523]: time="2025-02-13T19:27:55.401160182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:27:55.404577 containerd[1523]: time="2025-02-13T19:27:55.404344371Z" level=info msg="CreateContainer within sandbox \"b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:27:55.419731 containerd[1523]: time="2025-02-13T19:27:55.419682553Z" level=info msg="CreateContainer within sandbox \"b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"da0b8dd2bd7d2d083f1655408c42a23ad62b2323bf170ed6a62f809f7eb2a505\"" Feb 13 19:27:55.423403 containerd[1523]: time="2025-02-13T19:27:55.421211063Z" level=info msg="StartContainer for \"da0b8dd2bd7d2d083f1655408c42a23ad62b2323bf170ed6a62f809f7eb2a505\"" Feb 13 19:27:55.459952 sshd[5208]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:55.464202 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:45974.service: Deactivated successfully. Feb 13 19:27:55.466693 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:27:55.468953 systemd-logind[1501]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:27:55.469931 systemd-logind[1501]: Removed session 17. 
Feb 13 19:27:55.565481 containerd[1523]: time="2025-02-13T19:27:55.565424063Z" level=info msg="StartContainer for \"da0b8dd2bd7d2d083f1655408c42a23ad62b2323bf170ed6a62f809f7eb2a505\" returns successfully" Feb 13 19:27:55.768615 kubelet[2692]: I0213 19:27:55.768495 2692 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:27:55.773378 kubelet[2692]: I0213 19:27:55.773339 2692 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:27:55.939772 kubelet[2692]: E0213 19:27:55.939558 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:27:55.958595 kubelet[2692]: I0213 19:27:55.958523 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c7x2p" podStartSLOduration=25.627311061 podStartE2EDuration="28.958505381s" podCreationTimestamp="2025-02-13 19:27:27 +0000 UTC" firstStartedPulling="2025-02-13 19:27:52.071867439 +0000 UTC m=+47.498740025" lastFinishedPulling="2025-02-13 19:27:55.403061759 +0000 UTC m=+50.829934345" observedRunningTime="2025-02-13 19:27:55.957303894 +0000 UTC m=+51.384176481" watchObservedRunningTime="2025-02-13 19:27:55.958505381 +0000 UTC m=+51.385377967" Feb 13 19:28:00.475279 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:45990.service - OpenSSH per-connection server daemon (10.0.0.1:45990). Feb 13 19:28:00.510097 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 45990 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:28:00.511621 sshd[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:00.522836 systemd-logind[1501]: New session 18 of user core. Feb 13 19:28:00.536332 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:28:00.724922 kubelet[2692]: I0213 19:28:00.724718 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:28:00.797758 sshd[5319]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:00.805872 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:45990.service: Deactivated successfully. Feb 13 19:28:00.818390 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:28:00.822016 systemd-logind[1501]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:28:00.825069 systemd-logind[1501]: Removed session 18. Feb 13 19:28:04.646505 containerd[1523]: time="2025-02-13T19:28:04.646470980Z" level=info msg="StopPodSandbox for \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\"" Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.692 [WARNING][5396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0", GenerateName:"calico-apiserver-554b9784b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fc34767-db27-421e-861b-6bd72627f37b", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b9784b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05", Pod:"calico-apiserver-554b9784b8-r6m9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie97db214025", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.692 [INFO][5396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.692 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" iface="eth0" netns="" Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.692 [INFO][5396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.692 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.717 [INFO][5406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" HandleID="k8s-pod-network.31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.717 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.717 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.727 [WARNING][5406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" HandleID="k8s-pod-network.31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.727 [INFO][5406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" HandleID="k8s-pod-network.31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.728 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:04.731645 containerd[1523]: 2025-02-13 19:28:04.729 [INFO][5396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:28:04.731645 containerd[1523]: time="2025-02-13T19:28:04.731542603Z" level=info msg="TearDown network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\" successfully" Feb 13 19:28:04.731645 containerd[1523]: time="2025-02-13T19:28:04.731564404Z" level=info msg="StopPodSandbox for \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\" returns successfully" Feb 13 19:28:04.733617 containerd[1523]: time="2025-02-13T19:28:04.731909747Z" level=info msg="RemovePodSandbox for \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\"" Feb 13 19:28:04.735515 containerd[1523]: time="2025-02-13T19:28:04.734801372Z" level=info msg="Forcibly stopping sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\"" Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.772 [WARNING][5428] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0", GenerateName:"calico-apiserver-554b9784b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fc34767-db27-421e-861b-6bd72627f37b", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b9784b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a9d27aad2c081f76269dbf59e7ee09574d756aa9b5a3fc2070c91ec73ec4b05", Pod:"calico-apiserver-554b9784b8-r6m9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie97db214025", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.772 [INFO][5428] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.772 [INFO][5428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" iface="eth0" netns="" Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.772 [INFO][5428] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.772 [INFO][5428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.793 [INFO][5435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" HandleID="k8s-pod-network.31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.793 [INFO][5435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.793 [INFO][5435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.802 [WARNING][5435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" HandleID="k8s-pod-network.31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.802 [INFO][5435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" HandleID="k8s-pod-network.31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Workload="localhost-k8s-calico--apiserver--554b9784b8--r6m9z-eth0" Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.804 [INFO][5435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:04.807455 containerd[1523]: 2025-02-13 19:28:04.806 [INFO][5428] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97" Feb 13 19:28:04.809216 containerd[1523]: time="2025-02-13T19:28:04.807960350Z" level=info msg="TearDown network for sandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\" successfully" Feb 13 19:28:04.852734 containerd[1523]: time="2025-02-13T19:28:04.852641499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:28:04.852847 containerd[1523]: time="2025-02-13T19:28:04.852774548Z" level=info msg="RemovePodSandbox \"31ed4404c5c08bd226b6fc843e0b97570bd075beecb10609d18fbaf1908cdb97\" returns successfully" Feb 13 19:28:04.853561 containerd[1523]: time="2025-02-13T19:28:04.853303302Z" level=info msg="StopPodSandbox for \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\"" Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.891 [WARNING][5458] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e9e0e70-1c2f-4d7c-8d2d-f775675262e1", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f", Pod:"coredns-7db6d8ff4d-dshsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d399a20e75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.892 [INFO][5458] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.892 [INFO][5458] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" iface="eth0" netns="" Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.892 [INFO][5458] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.892 [INFO][5458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.916 [INFO][5465] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" HandleID="k8s-pod-network.cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.916 [INFO][5465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.916 [INFO][5465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.926 [WARNING][5465] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" HandleID="k8s-pod-network.cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.926 [INFO][5465] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" HandleID="k8s-pod-network.cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.928 [INFO][5465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:04.933371 containerd[1523]: 2025-02-13 19:28:04.930 [INFO][5458] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:28:04.933371 containerd[1523]: time="2025-02-13T19:28:04.933179231Z" level=info msg="TearDown network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\" successfully" Feb 13 19:28:04.933371 containerd[1523]: time="2025-02-13T19:28:04.933206312Z" level=info msg="StopPodSandbox for \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\" returns successfully" Feb 13 19:28:04.934523 containerd[1523]: time="2025-02-13T19:28:04.934195296Z" level=info msg="RemovePodSandbox for \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\"" Feb 13 19:28:04.934523 containerd[1523]: time="2025-02-13T19:28:04.934229778Z" level=info msg="Forcibly stopping sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\"" Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:04.979 [WARNING][5487] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e9e0e70-1c2f-4d7c-8d2d-f775675262e1", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0383957a0e3f0d2efd38dadb8139a4bc2692354cc2f02c1d7e122e101820351f", Pod:"coredns-7db6d8ff4d-dshsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d399a20e75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:04.979 [INFO][5487] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:04.979 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" iface="eth0" netns="" Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:04.979 [INFO][5487] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:04.979 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:04.999 [INFO][5494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" HandleID="k8s-pod-network.cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:04.999 [INFO][5494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:04.999 [INFO][5494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:05.008 [WARNING][5494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" HandleID="k8s-pod-network.cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:05.008 [INFO][5494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" HandleID="k8s-pod-network.cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Workload="localhost-k8s-coredns--7db6d8ff4d--dshsv-eth0" Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:05.010 [INFO][5494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:05.013022 containerd[1523]: 2025-02-13 19:28:05.011 [INFO][5487] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527" Feb 13 19:28:05.013429 containerd[1523]: time="2025-02-13T19:28:05.013064593Z" level=info msg="TearDown network for sandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\" successfully" Feb 13 19:28:05.016138 containerd[1523]: time="2025-02-13T19:28:05.016098586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:28:05.016201 containerd[1523]: time="2025-02-13T19:28:05.016187352Z" level=info msg="RemovePodSandbox \"cca26deea37775681d4dcdc62308aaf82b7b208fef6ba8ed5c8519aa7a763527\" returns successfully" Feb 13 19:28:05.029384 containerd[1523]: time="2025-02-13T19:28:05.029311946Z" level=info msg="StopPodSandbox for \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\"" Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.071 [WARNING][5517] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0", GenerateName:"calico-apiserver-554b9784b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3fc815c-da0d-4681-bd91-c9162f51c3d8", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b9784b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba", Pod:"calico-apiserver-554b9784b8-gpzrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae9a19791bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.071 [INFO][5517] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.071 [INFO][5517] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" iface="eth0" netns="" Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.072 [INFO][5517] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.072 [INFO][5517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.090 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" HandleID="k8s-pod-network.63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.090 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.090 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.098 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" HandleID="k8s-pod-network.63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.098 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" HandleID="k8s-pod-network.63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.099 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:05.102753 containerd[1523]: 2025-02-13 19:28:05.101 [INFO][5517] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:28:05.102753 containerd[1523]: time="2025-02-13T19:28:05.102562446Z" level=info msg="TearDown network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\" successfully" Feb 13 19:28:05.102753 containerd[1523]: time="2025-02-13T19:28:05.102591968Z" level=info msg="StopPodSandbox for \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\" returns successfully" Feb 13 19:28:05.103853 containerd[1523]: time="2025-02-13T19:28:05.103342335Z" level=info msg="RemovePodSandbox for \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\"" Feb 13 19:28:05.103853 containerd[1523]: time="2025-02-13T19:28:05.103398379Z" level=info msg="Forcibly stopping sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\"" Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.143 [WARNING][5548] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0", GenerateName:"calico-apiserver-554b9784b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3fc815c-da0d-4681-bd91-c9162f51c3d8", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b9784b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b609ec8b55a26536abec254bf96b16a664c2f2f89f0d0186c35938b5d6c1faba", Pod:"calico-apiserver-554b9784b8-gpzrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae9a19791bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.144 [INFO][5548] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.144 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" iface="eth0" netns="" Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.144 [INFO][5548] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.144 [INFO][5548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.164 [INFO][5556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" HandleID="k8s-pod-network.63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.164 [INFO][5556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.164 [INFO][5556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.172 [WARNING][5556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" HandleID="k8s-pod-network.63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.172 [INFO][5556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" HandleID="k8s-pod-network.63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Workload="localhost-k8s-calico--apiserver--554b9784b8--gpzrw-eth0" Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.174 [INFO][5556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:05.176736 containerd[1523]: 2025-02-13 19:28:05.175 [INFO][5548] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46" Feb 13 19:28:05.177184 containerd[1523]: time="2025-02-13T19:28:05.176773806Z" level=info msg="TearDown network for sandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\" successfully" Feb 13 19:28:05.179692 containerd[1523]: time="2025-02-13T19:28:05.179628428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:28:05.179759 containerd[1523]: time="2025-02-13T19:28:05.179695992Z" level=info msg="RemovePodSandbox \"63830bbbd2748d8372d7a0d82c184faed1775b3b9bb335ca6b64124c3ace3f46\" returns successfully" Feb 13 19:28:05.180260 containerd[1523]: time="2025-02-13T19:28:05.180233666Z" level=info msg="StopPodSandbox for \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\"" Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.220 [WARNING][5579] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0", GenerateName:"calico-kube-controllers-cd8b599d7-", Namespace:"calico-system", SelfLink:"", UID:"6023b2dc-e788-490d-952a-daba9fbad29a", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8b599d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2", Pod:"calico-kube-controllers-cd8b599d7-g4h4n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88c9055c117", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.221 [INFO][5579] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.221 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" iface="eth0" netns="" Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.221 [INFO][5579] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.221 [INFO][5579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.242 [INFO][5588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" HandleID="k8s-pod-network.197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.242 [INFO][5588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.242 [INFO][5588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.250 [WARNING][5588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" HandleID="k8s-pod-network.197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.250 [INFO][5588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" HandleID="k8s-pod-network.197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.252 [INFO][5588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:05.256868 containerd[1523]: 2025-02-13 19:28:05.254 [INFO][5579] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:28:05.257335 containerd[1523]: time="2025-02-13T19:28:05.256835099Z" level=info msg="TearDown network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\" successfully" Feb 13 19:28:05.257335 containerd[1523]: time="2025-02-13T19:28:05.257061873Z" level=info msg="StopPodSandbox for \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\" returns successfully" Feb 13 19:28:05.257861 containerd[1523]: time="2025-02-13T19:28:05.257514662Z" level=info msg="RemovePodSandbox for \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\"" Feb 13 19:28:05.257861 containerd[1523]: time="2025-02-13T19:28:05.257549784Z" level=info msg="Forcibly stopping sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\"" Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.292 [WARNING][5610] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0", GenerateName:"calico-kube-controllers-cd8b599d7-", Namespace:"calico-system", SelfLink:"", UID:"6023b2dc-e788-490d-952a-daba9fbad29a", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8b599d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8e377aef247dd9a55c62ba172c5257070f30e94625c077f4e84add73adeccf2", Pod:"calico-kube-controllers-cd8b599d7-g4h4n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88c9055c117", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.292 [INFO][5610] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.292 [INFO][5610] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" iface="eth0" netns="" Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.292 [INFO][5610] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.292 [INFO][5610] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.310 [INFO][5617] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" HandleID="k8s-pod-network.197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.310 [INFO][5617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.310 [INFO][5617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.318 [WARNING][5617] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" HandleID="k8s-pod-network.197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.318 [INFO][5617] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" HandleID="k8s-pod-network.197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Workload="localhost-k8s-calico--kube--controllers--cd8b599d7--g4h4n-eth0" Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.320 [INFO][5617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:05.322660 containerd[1523]: 2025-02-13 19:28:05.321 [INFO][5610] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8" Feb 13 19:28:05.323088 containerd[1523]: time="2025-02-13T19:28:05.322703048Z" level=info msg="TearDown network for sandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\" successfully" Feb 13 19:28:05.336982 containerd[1523]: time="2025-02-13T19:28:05.336918192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:28:05.337093 containerd[1523]: time="2025-02-13T19:28:05.337000678Z" level=info msg="RemovePodSandbox \"197186389d2b2aff3dfaa1d403b80fa4630bba35ff43ebe609c922f8bdc1fdc8\" returns successfully" Feb 13 19:28:05.337732 containerd[1523]: time="2025-02-13T19:28:05.337475108Z" level=info msg="StopPodSandbox for \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\"" Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.370 [WARNING][5639] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"623df41b-21a5-4acd-90d5-1d14fa054355", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc", Pod:"coredns-7db6d8ff4d-hzzls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia782f573b65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.371 [INFO][5639] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.371 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" iface="eth0" netns="" Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.371 [INFO][5639] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.371 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.388 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" HandleID="k8s-pod-network.7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.388 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.388 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.396 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" HandleID="k8s-pod-network.7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.396 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" HandleID="k8s-pod-network.7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.397 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:05.400646 containerd[1523]: 2025-02-13 19:28:05.399 [INFO][5639] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:28:05.401327 containerd[1523]: time="2025-02-13T19:28:05.401128197Z" level=info msg="TearDown network for sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\" successfully" Feb 13 19:28:05.401327 containerd[1523]: time="2025-02-13T19:28:05.401158999Z" level=info msg="StopPodSandbox for \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\" returns successfully" Feb 13 19:28:05.401639 containerd[1523]: time="2025-02-13T19:28:05.401613267Z" level=info msg="RemovePodSandbox for \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\"" Feb 13 19:28:05.401677 containerd[1523]: time="2025-02-13T19:28:05.401649670Z" level=info msg="Forcibly stopping sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\"" Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.436 [WARNING][5668] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"623df41b-21a5-4acd-90d5-1d14fa054355", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9772ecb36104954e6368f1237c0824ebbe768c8325b32adce39e8d5d7bdecdc", Pod:"coredns-7db6d8ff4d-hzzls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia782f573b65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.437 [INFO][5668] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.437 [INFO][5668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" iface="eth0" netns="" Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.437 [INFO][5668] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.437 [INFO][5668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.458 [INFO][5675] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" HandleID="k8s-pod-network.7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.458 [INFO][5675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.458 [INFO][5675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.468 [WARNING][5675] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" HandleID="k8s-pod-network.7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.468 [INFO][5675] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" HandleID="k8s-pod-network.7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Workload="localhost-k8s-coredns--7db6d8ff4d--hzzls-eth0" Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.469 [INFO][5675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:05.474019 containerd[1523]: 2025-02-13 19:28:05.471 [INFO][5668] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30" Feb 13 19:28:05.474019 containerd[1523]: time="2025-02-13T19:28:05.472391169Z" level=info msg="TearDown network for sandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\" successfully" Feb 13 19:28:05.476023 containerd[1523]: time="2025-02-13T19:28:05.475931155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:28:05.476170 containerd[1523]: time="2025-02-13T19:28:05.476147008Z" level=info msg="RemovePodSandbox \"7b6fdd3342a8de083710789b383ae90240fbd427dfb58cb1fec0ee537a00fb30\" returns successfully" Feb 13 19:28:05.476861 containerd[1523]: time="2025-02-13T19:28:05.476830492Z" level=info msg="StopPodSandbox for \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\"" Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.512 [WARNING][5698] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c7x2p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"046604b2-014e-4614-a6a9-a156d305f1ec", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d", Pod:"csi-node-driver-c7x2p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87efc7103dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.512 [INFO][5698] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.512 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" iface="eth0" netns="" Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.512 [INFO][5698] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.512 [INFO][5698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.541 [INFO][5705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" HandleID="k8s-pod-network.6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.541 [INFO][5705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.541 [INFO][5705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.553 [WARNING][5705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" HandleID="k8s-pod-network.6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.553 [INFO][5705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" HandleID="k8s-pod-network.6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.554 [INFO][5705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:05.556984 containerd[1523]: 2025-02-13 19:28:05.555 [INFO][5698] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:28:05.556984 containerd[1523]: time="2025-02-13T19:28:05.556744615Z" level=info msg="TearDown network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\" successfully" Feb 13 19:28:05.556984 containerd[1523]: time="2025-02-13T19:28:05.556801819Z" level=info msg="StopPodSandbox for \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\" returns successfully" Feb 13 19:28:05.557380 containerd[1523]: time="2025-02-13T19:28:05.557252767Z" level=info msg="RemovePodSandbox for \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\"" Feb 13 19:28:05.557380 containerd[1523]: time="2025-02-13T19:28:05.557284209Z" level=info msg="Forcibly stopping sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\"" Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.598 [WARNING][5727] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c7x2p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"046604b2-014e-4614-a6a9-a156d305f1ec", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1e8ed8d418766f95ed4f7bd9d0257314331cd14590a1c92124ab060d906749d", Pod:"csi-node-driver-c7x2p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87efc7103dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.598 [INFO][5727] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.598 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" iface="eth0" netns="" Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.598 [INFO][5727] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.598 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.616 [INFO][5735] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" HandleID="k8s-pod-network.6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.616 [INFO][5735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.616 [INFO][5735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.624 [WARNING][5735] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" HandleID="k8s-pod-network.6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.624 [INFO][5735] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" HandleID="k8s-pod-network.6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Workload="localhost-k8s-csi--node--driver--c7x2p-eth0" Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.626 [INFO][5735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:28:05.628523 containerd[1523]: 2025-02-13 19:28:05.627 [INFO][5727] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8" Feb 13 19:28:05.628989 containerd[1523]: time="2025-02-13T19:28:05.628561263Z" level=info msg="TearDown network for sandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\" successfully" Feb 13 19:28:05.631328 containerd[1523]: time="2025-02-13T19:28:05.631296757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:28:05.631379 containerd[1523]: time="2025-02-13T19:28:05.631352601Z" level=info msg="RemovePodSandbox \"6050c2fb200f0e59c614132241f331e9eb54abd9a2d1cb228db484fe1d274ff8\" returns successfully" Feb 13 19:28:05.809231 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:55528.service - OpenSSH per-connection server daemon (10.0.0.1:55528). Feb 13 19:28:05.849733 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 55528 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:28:05.850569 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:05.870043 systemd-logind[1501]: New session 19 of user core. Feb 13 19:28:05.879284 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:28:06.030203 sshd[5742]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:06.034038 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:55528.service: Deactivated successfully. Feb 13 19:28:06.037404 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:28:06.037822 systemd-logind[1501]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:28:06.039391 systemd-logind[1501]: Removed session 19. Feb 13 19:28:07.768861 kubelet[2692]: I0213 19:28:07.768592 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:28:11.044268 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:55534.service - OpenSSH per-connection server daemon (10.0.0.1:55534). Feb 13 19:28:11.087352 sshd[5759]: Accepted publickey for core from 10.0.0.1 port 55534 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:28:11.088717 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:11.092471 systemd-logind[1501]: New session 20 of user core. Feb 13 19:28:11.097285 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 19:28:11.225295 sshd[5759]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:11.228543 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:55534.service: Deactivated successfully. Feb 13 19:28:11.230539 systemd-logind[1501]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:28:11.230605 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:28:11.232044 systemd-logind[1501]: Removed session 20.