Feb 13 19:58:19.901148 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:58:19.901170 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:58:19.901179 kernel: KASLR enabled
Feb 13 19:58:19.901185 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:58:19.901191 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 19:58:19.901197 kernel: random: crng init done
Feb 13 19:58:19.901204 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:58:19.901210 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 19:58:19.901216 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:58:19.901224 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:19.901230 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:19.901237 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:19.901243 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:19.901249 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:19.901256 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:19.901264 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:19.901270 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:19.901277 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:19.901283 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:58:19.901290 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:58:19.901296 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:58:19.901302 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:58:19.901309 kernel: Zone ranges:
Feb 13 19:58:19.901315 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:58:19.901321 kernel: DMA32 empty
Feb 13 19:58:19.901329 kernel: Normal empty
Feb 13 19:58:19.901335 kernel: Movable zone start for each node
Feb 13 19:58:19.901341 kernel: Early memory node ranges
Feb 13 19:58:19.901348 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 19:58:19.901354 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:58:19.901360 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:58:19.901366 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:58:19.901372 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:58:19.901379 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:58:19.901385 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:58:19.901392 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:58:19.901398 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:58:19.901406 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:58:19.901413 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:58:19.901420 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:58:19.901433 kernel: psci: Trusted OS migration not required
Feb 13 19:58:19.901440 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:58:19.901460 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:58:19.901471 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:58:19.901480 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:58:19.901489 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:58:19.901497 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:58:19.901504 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:58:19.901510 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:58:19.901517 kernel: CPU features: detected: Spectre-v4
Feb 13 19:58:19.901524 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:58:19.901531 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:58:19.901538 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:58:19.901546 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:58:19.901552 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:58:19.901560 kernel: alternatives: applying boot alternatives
Feb 13 19:58:19.901568 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:58:19.901575 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:58:19.901588 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:58:19.901597 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:58:19.901603 kernel: Fallback order for Node 0: 0
Feb 13 19:58:19.901610 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:58:19.901617 kernel: Policy zone: DMA
Feb 13 19:58:19.901623 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:58:19.901631 kernel: software IO TLB: area num 4.
Feb 13 19:58:19.901638 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:58:19.901646 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Feb 13 19:58:19.901653 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:58:19.901660 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:58:19.901668 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:58:19.901674 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:58:19.901681 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:58:19.901688 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:58:19.901695 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:58:19.901702 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:58:19.901708 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:58:19.901717 kernel: GICv3: 256 SPIs implemented
Feb 13 19:58:19.901723 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:58:19.901730 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:58:19.901736 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:58:19.901743 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:58:19.901750 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:58:19.901757 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:58:19.901764 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:58:19.901770 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:58:19.901777 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:58:19.901784 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:58:19.901792 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:58:19.901799 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:58:19.901806 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:58:19.901813 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:58:19.901819 kernel: arm-pv: using stolen time PV
Feb 13 19:58:19.901826 kernel: Console: colour dummy device 80x25
Feb 13 19:58:19.901833 kernel: ACPI: Core revision 20230628
Feb 13 19:58:19.901841 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:58:19.901847 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:58:19.901855 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:58:19.901863 kernel: landlock: Up and running.
Feb 13 19:58:19.901870 kernel: SELinux: Initializing.
Feb 13 19:58:19.901877 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:58:19.901890 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:58:19.901898 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:58:19.901905 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:58:19.901912 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:58:19.901919 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:58:19.901926 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:58:19.901934 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:58:19.901941 kernel: Remapping and enabling EFI services.
Feb 13 19:58:19.901948 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:58:19.901956 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:58:19.901963 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:58:19.901971 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:58:19.901978 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:58:19.901985 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:58:19.901992 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:58:19.901998 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:58:19.902012 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:58:19.902022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:58:19.902033 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:58:19.902042 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:58:19.902049 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:58:19.902056 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:58:19.902064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:58:19.902071 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:58:19.902078 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:58:19.902087 kernel: SMP: Total of 4 processors activated.
Feb 13 19:58:19.902094 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:58:19.902101 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:58:19.902109 kernel: CPU features: detected: Common not Private translations
Feb 13 19:58:19.902116 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:58:19.902124 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:58:19.902131 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:58:19.902138 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:58:19.902147 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:58:19.902154 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:58:19.902161 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:58:19.902169 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:58:19.902176 kernel: alternatives: applying system-wide alternatives
Feb 13 19:58:19.902183 kernel: devtmpfs: initialized
Feb 13 19:58:19.902191 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:58:19.902198 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:58:19.902205 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:58:19.902214 kernel: SMBIOS 3.0.0 present.
Feb 13 19:58:19.902221 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 19:58:19.902228 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:58:19.902241 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:58:19.902248 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:58:19.902255 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:58:19.902263 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:58:19.902270 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Feb 13 19:58:19.902277 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:58:19.902286 kernel: cpuidle: using governor menu
Feb 13 19:58:19.902293 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:58:19.902301 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:58:19.902308 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:58:19.902315 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:58:19.902322 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:58:19.902329 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:58:19.902337 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:58:19.902344 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:58:19.902352 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:58:19.902360 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:58:19.902367 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:58:19.902374 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:58:19.902382 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:58:19.902389 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:58:19.902396 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:58:19.902404 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:58:19.902411 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:58:19.902420 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:58:19.902427 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:58:19.902434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:58:19.902442 kernel: ACPI: Interpreter enabled
Feb 13 19:58:19.902449 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:58:19.902456 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:58:19.902464 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:58:19.902471 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:58:19.902478 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:58:19.902630 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:58:19.902714 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:58:19.902783 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:58:19.902849 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:58:19.902923 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:58:19.902934 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:58:19.902941 kernel: PCI host bridge to bus 0000:00
Feb 13 19:58:19.903018 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:58:19.903081 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:58:19.903142 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:58:19.903203 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:58:19.903282 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:58:19.903383 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:58:19.903460 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:58:19.903527 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:58:19.903607 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:58:19.903679 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:58:19.903745 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:58:19.903811 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:58:19.903873 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:58:19.903944 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:58:19.904004 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:58:19.904013 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:58:19.904021 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:58:19.904028 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:58:19.904036 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:58:19.904043 kernel: iommu: Default domain type: Translated
Feb 13 19:58:19.904050 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:58:19.904058 kernel: efivars: Registered efivars operations
Feb 13 19:58:19.904067 kernel: vgaarb: loaded
Feb 13 19:58:19.904075 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:58:19.904082 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:58:19.904089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:58:19.904096 kernel: pnp: PnP ACPI init
Feb 13 19:58:19.904172 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:58:19.904183 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:58:19.904190 kernel: NET: Registered PF_INET protocol family
Feb 13 19:58:19.904200 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:58:19.904208 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:58:19.904215 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:58:19.904222 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:58:19.904230 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:58:19.904237 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:58:19.904245 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:58:19.904252 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:58:19.904259 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:58:19.904268 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:58:19.904276 kernel: kvm [1]: HYP mode not available
Feb 13 19:58:19.904283 kernel: Initialise system trusted keyrings
Feb 13 19:58:19.904291 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:58:19.904298 kernel: Key type asymmetric registered
Feb 13 19:58:19.904305 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:58:19.904312 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:58:19.904319 kernel: io scheduler mq-deadline registered
Feb 13 19:58:19.904327 kernel: io scheduler kyber registered
Feb 13 19:58:19.904335 kernel: io scheduler bfq registered
Feb 13 19:58:19.904342 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:58:19.904350 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:58:19.904357 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:58:19.904423 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:58:19.904433 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:58:19.904440 kernel: thunder_xcv, ver 1.0
Feb 13 19:58:19.904448 kernel: thunder_bgx, ver 1.0
Feb 13 19:58:19.904455 kernel: nicpf, ver 1.0
Feb 13 19:58:19.904464 kernel: nicvf, ver 1.0
Feb 13 19:58:19.904537 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:58:19.904622 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:58:19 UTC (1739476699)
Feb 13 19:58:19.904644 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:58:19.904651 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:58:19.904659 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:58:19.904666 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:58:19.904674 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:58:19.904683 kernel: Segment Routing with IPv6
Feb 13 19:58:19.904691 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:58:19.904698 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:58:19.904705 kernel: Key type dns_resolver registered
Feb 13 19:58:19.904713 kernel: registered taskstats version 1
Feb 13 19:58:19.904720 kernel: Loading compiled-in X.509 certificates
Feb 13 19:58:19.904727 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:58:19.904735 kernel: Key type .fscrypt registered
Feb 13 19:58:19.904742 kernel: Key type fscrypt-provisioning registered
Feb 13 19:58:19.904751 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:58:19.904758 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:58:19.904765 kernel: ima: No architecture policies found
Feb 13 19:58:19.904772 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:58:19.904780 kernel: clk: Disabling unused clocks
Feb 13 19:58:19.904787 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:58:19.904794 kernel: Run /init as init process
Feb 13 19:58:19.904801 kernel: with arguments:
Feb 13 19:58:19.904808 kernel: /init
Feb 13 19:58:19.904817 kernel: with environment:
Feb 13 19:58:19.904824 kernel: HOME=/
Feb 13 19:58:19.904831 kernel: TERM=linux
Feb 13 19:58:19.904838 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:58:19.904847 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:58:19.904857 systemd[1]: Detected virtualization kvm.
Feb 13 19:58:19.904865 systemd[1]: Detected architecture arm64.
Feb 13 19:58:19.904874 systemd[1]: Running in initrd.
Feb 13 19:58:19.904881 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:58:19.904896 systemd[1]: Hostname set to .
Feb 13 19:58:19.904904 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:58:19.904912 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:58:19.904920 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:58:19.904928 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:58:19.904936 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:58:19.904946 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:58:19.904954 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:58:19.904962 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:58:19.904971 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:58:19.904979 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:58:19.904987 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:58:19.904994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:58:19.905003 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:58:19.905011 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:58:19.905019 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:58:19.905027 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:58:19.905034 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:58:19.905042 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:58:19.905050 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:58:19.905058 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:58:19.905066 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:58:19.905075 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:58:19.905083 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:58:19.905091 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:58:19.905098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:58:19.905106 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:58:19.905114 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:58:19.905122 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:58:19.905129 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:58:19.905138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:58:19.905146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:58:19.905154 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:58:19.905162 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:58:19.905170 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:58:19.905178 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:58:19.905205 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 19:58:19.905224 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:58:19.905232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:58:19.905242 kernel: Bridge firewalling registered
Feb 13 19:58:19.905249 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:58:19.905257 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:58:19.905266 systemd-journald[238]: Journal started
Feb 13 19:58:19.905284 systemd-journald[238]: Runtime Journal (/run/log/journal/4199d94d21e240558108f82529eb643a) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:58:19.886557 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 19:58:19.901975 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 19:58:19.910052 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:58:19.910429 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:58:19.914925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:58:19.916197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:58:19.920764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:58:19.922979 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:58:19.925251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:58:19.927853 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:58:19.929364 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:58:19.932921 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:58:19.935651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:58:19.942275 dracut-cmdline[272]: dracut-dracut-053
Feb 13 19:58:19.944767 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:58:19.963524 systemd-resolved[278]: Positive Trust Anchors:
Feb 13 19:58:19.963542 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:58:19.963573 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:58:19.968353 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 13 19:58:19.969307 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:58:19.970516 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:58:20.014619 kernel: SCSI subsystem initialized
Feb 13 19:58:20.018605 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:58:20.027609 kernel: iscsi: registered transport (tcp)
Feb 13 19:58:20.042610 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:58:20.042647 kernel: QLogic iSCSI HBA Driver
Feb 13 19:58:20.085308 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:58:20.094797 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:58:20.111298 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:58:20.111340 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:58:20.111357 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:58:20.158634 kernel: raid6: neonx8 gen() 15654 MB/s
Feb 13 19:58:20.175624 kernel: raid6: neonx4 gen() 15546 MB/s
Feb 13 19:58:20.192603 kernel: raid6: neonx2 gen() 13158 MB/s
Feb 13 19:58:20.209601 kernel: raid6: neonx1 gen() 10442 MB/s
Feb 13 19:58:20.226605 kernel: raid6: int64x8 gen() 6889 MB/s
Feb 13 19:58:20.243619 kernel: raid6: int64x4 gen() 7309 MB/s
Feb 13 19:58:20.260607 kernel: raid6: int64x2 gen() 6093 MB/s
Feb 13 19:58:20.277617 kernel: raid6: int64x1 gen() 5030 MB/s
Feb 13 19:58:20.277647 kernel: raid6: using algorithm neonx8 gen() 15654 MB/s
Feb 13 19:58:20.294612 kernel: raid6: .... xor() 11877 MB/s, rmw enabled
Feb 13 19:58:20.294654 kernel: raid6: using neon recovery algorithm
Feb 13 19:58:20.299608 kernel: xor: measuring software checksum speed
Feb 13 19:58:20.299636 kernel: 8regs : 19478 MB/sec
Feb 13 19:58:20.300635 kernel: 32regs : 17286 MB/sec
Feb 13 19:58:20.300649 kernel: arm64_neon : 27186 MB/sec
Feb 13 19:58:20.300658 kernel: xor: using function: arm64_neon (27186 MB/sec)
Feb 13 19:58:20.352627 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:58:20.364651 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:58:20.374724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:58:20.385821 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Feb 13 19:58:20.388961 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:58:20.391954 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:58:20.406344 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Feb 13 19:58:20.431535 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:58:20.441734 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:58:20.482856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:58:20.493782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:58:20.505638 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:58:20.506866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:58:20.508361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:58:20.509916 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:58:20.522797 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:58:20.536964 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:58:20.544470 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:58:20.544574 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:58:20.544602 kernel: GPT:9289727 != 19775487
Feb 13 19:58:20.544613 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:58:20.544622 kernel: GPT:9289727 != 19775487
Feb 13 19:58:20.544631 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:58:20.544643 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:58:20.537105 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:58:20.540374 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:58:20.540475 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:58:20.544289 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:58:20.545112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:58:20.545234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:58:20.546078 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:58:20.556791 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:58:20.564613 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (517)
Feb 13 19:58:20.567613 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (505)
Feb 13 19:58:20.569578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:58:20.575199 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:58:20.582627 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:58:20.586168 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:58:20.587151 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:58:20.592136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:58:20.602715 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:58:20.604190 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:58:20.608443 disk-uuid[549]: Primary Header is updated.
Feb 13 19:58:20.608443 disk-uuid[549]: Secondary Entries is updated.
Feb 13 19:58:20.608443 disk-uuid[549]: Secondary Header is updated.
Feb 13 19:58:20.611598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:58:20.630360 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:58:21.627397 disk-uuid[550]: The operation has completed successfully.
Feb 13 19:58:21.628296 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:58:21.653561 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:58:21.653682 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:58:21.667798 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:58:21.670697 sh[574]: Success
Feb 13 19:58:21.680604 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:58:21.710537 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:58:21.723929 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:58:21.727615 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:58:21.734913 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 19:58:21.734948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:58:21.734966 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:58:21.736664 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:58:21.736706 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:58:21.740046 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:58:21.741127 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:58:21.750730 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:58:21.752015 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:58:21.758218 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:58:21.758255 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:58:21.758267 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:58:21.760603 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:58:21.769572 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:58:21.771083 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:58:21.776361 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:58:21.782769 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:58:21.847066 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:58:21.854759 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:58:21.875785 systemd-networkd[765]: lo: Link UP
Feb 13 19:58:21.875794 systemd-networkd[765]: lo: Gained carrier
Feb 13 19:58:21.876482 systemd-networkd[765]: Enumeration completed
Feb 13 19:58:21.877109 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:58:21.878385 systemd[1]: Reached target network.target - Network.
Feb 13 19:58:21.879642 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:58:21.879646 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:58:21.880455 systemd-networkd[765]: eth0: Link UP
Feb 13 19:58:21.880458 systemd-networkd[765]: eth0: Gained carrier
Feb 13 19:58:21.880465 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:58:21.885818 ignition[666]: Ignition 2.19.0
Feb 13 19:58:21.885824 ignition[666]: Stage: fetch-offline
Feb 13 19:58:21.885857 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:21.885865 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:21.886027 ignition[666]: parsed url from cmdline: ""
Feb 13 19:58:21.886030 ignition[666]: no config URL provided
Feb 13 19:58:21.886034 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:58:21.886042 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:58:21.886063 ignition[666]: op(1): [started]  loading QEMU firmware config module
Feb 13 19:58:21.886068 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:58:21.896480 ignition[666]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:58:21.896629 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:58:21.934740 ignition[666]: parsing config with SHA512: 431b4b4049d0b954049e71663b572f7afcfd98dd1b246117f46d075e3f84f5bbcbbcf47779c037484fb9f3a6bb3f2795406712a5c27f25205ba0973ba32015c2
Feb 13 19:58:21.939981 unknown[666]: fetched base config from "system"
Feb 13 19:58:21.939997 unknown[666]: fetched user config from "qemu"
Feb 13 19:58:21.940956 ignition[666]: fetch-offline: fetch-offline passed
Feb 13 19:58:21.941029 ignition[666]: Ignition finished successfully
Feb 13 19:58:21.942280 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:58:21.943537 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:58:21.954771 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:58:21.964654 ignition[774]: Ignition 2.19.0
Feb 13 19:58:21.964663 ignition[774]: Stage: kargs
Feb 13 19:58:21.964816 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:21.964826 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:21.965676 ignition[774]: kargs: kargs passed
Feb 13 19:58:21.967306 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:58:21.965721 ignition[774]: Ignition finished successfully
Feb 13 19:58:21.969752 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:58:21.981511 ignition[783]: Ignition 2.19.0
Feb 13 19:58:21.981523 ignition[783]: Stage: disks
Feb 13 19:58:21.981698 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:21.981706 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:21.982541 ignition[783]: disks: disks passed
Feb 13 19:58:21.983975 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:58:21.982603 ignition[783]: Ignition finished successfully
Feb 13 19:58:21.985273 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:58:21.986390 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:58:21.988450 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:58:21.989753 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:58:21.991183 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:58:22.002712 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:58:22.012026 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:58:22.015871 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:58:22.026676 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:58:22.071606 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 19:58:22.072201 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:58:22.073212 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:58:22.084689 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:58:22.086240 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:58:22.087187 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:58:22.087280 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:58:22.087309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:58:22.093196 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Feb 13 19:58:22.092986 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:58:22.096241 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:58:22.096257 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:58:22.096267 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:58:22.095929 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:58:22.099605 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:58:22.100324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:58:22.140077 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:58:22.143774 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:58:22.147656 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:58:22.151479 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:58:22.220622 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:58:22.227709 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:58:22.229004 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:58:22.233727 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:58:22.250457 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:58:22.252329 ignition[915]: INFO : Ignition 2.19.0
Feb 13 19:58:22.252329 ignition[915]: INFO : Stage: mount
Feb 13 19:58:22.252329 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:22.252329 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:22.252329 ignition[915]: INFO : mount: mount passed
Feb 13 19:58:22.252329 ignition[915]: INFO : Ignition finished successfully
Feb 13 19:58:22.252897 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:58:22.265673 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:58:22.734264 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:58:22.742775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:58:22.747612 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929)
Feb 13 19:58:22.749621 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:58:22.749643 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:58:22.750596 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:58:22.752620 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:58:22.753078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:58:22.767883 ignition[946]: INFO : Ignition 2.19.0
Feb 13 19:58:22.767883 ignition[946]: INFO : Stage: files
Feb 13 19:58:22.769108 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:22.769108 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:22.769108 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:58:22.771565 ignition[946]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 19:58:22.771565 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:58:22.771565 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:58:22.771565 ignition[946]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 19:58:22.775461 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:58:22.775461 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:58:22.775461 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:58:22.771744 unknown[946]: wrote ssh authorized keys file for user: core
Feb 13 19:58:22.821561 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:58:22.975902 systemd-networkd[765]: eth0: Gained IPv6LL
Feb 13 19:58:23.013884 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:58:23.015217 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 19:58:23.015217 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:58:23.015217 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:58:23.015217 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:58:23.015217 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:58:23.015217 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:58:23.015217 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:58:23.015217 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:58:23.015217 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:58:23.026082 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:58:23.026082 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:58:23.026082 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:58:23.026082 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:58:23.026082 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 19:58:23.343563 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:58:23.877329 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:58:23.877329 ignition[946]: INFO : files: op(b): [started]  processing unit "prepare-helm.service"
Feb 13 19:58:23.880156 ignition[946]: INFO : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:58:23.880156 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:58:23.880156 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:58:23.880156 ignition[946]: INFO : files: op(d): [started]  processing unit "coreos-metadata.service"
Feb 13 19:58:23.880156 ignition[946]: INFO : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:58:23.880156 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:58:23.880156 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 19:58:23.880156 ignition[946]: INFO : files: op(f): [started]  setting preset to disabled for "coreos-metadata.service"
Feb 13 19:58:23.900083 ignition[946]: INFO : files: op(f): op(10): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:58:23.903362 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:58:23.905624 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:58:23.905624 ignition[946]: INFO : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 19:58:23.905624 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:58:23.905624 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:58:23.905624 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:58:23.905624 ignition[946]: INFO : files: files passed
Feb 13 19:58:23.905624 ignition[946]: INFO : Ignition finished successfully
Feb 13 19:58:23.905974 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:58:23.915736 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:58:23.917710 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:58:23.918959 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:58:23.919039 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:58:23.925011 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:58:23.928292 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:58:23.928292 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:58:23.930999 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:58:23.932352 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:58:23.935136 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:58:23.940721 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:58:23.958633 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:58:23.959368 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:58:23.961367 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:58:23.962152 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:58:23.962940 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:58:23.963604 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:58:23.978480 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:58:23.980509 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:58:23.990735 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:58:23.991985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:58:23.993682 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:58:23.995208 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:58:23.995319 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:58:23.997326 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:58:23.999027 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:58:24.000463 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:58:24.002122 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:58:24.003742 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:58:24.005399 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:58:24.006952 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:58:24.008624 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:58:24.010282 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:58:24.011728 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:58:24.012978 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:58:24.013086 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:58:24.015005 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:58:24.016689 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:58:24.018386 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:58:24.018687 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:58:24.020187 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:58:24.020302 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:58:24.022504 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:58:24.022635 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:58:24.024310 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:58:24.025617 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:58:24.026641 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:58:24.028171 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:58:24.029577 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:58:24.031056 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:58:24.031140 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:58:24.032825 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:58:24.032929 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:58:24.034250 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:58:24.034355 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:58:24.035938 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:58:24.036041 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:58:24.047738 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:58:24.048674 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:58:24.048811 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:58:24.053819 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:58:24.055668 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:58:24.055802 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:58:24.060124 ignition[1000]: INFO : Ignition 2.19.0 Feb 13 19:58:24.060124 ignition[1000]: INFO : Stage: umount Feb 13 19:58:24.060124 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:58:24.060124 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:58:24.060124 ignition[1000]: INFO : umount: umount passed Feb 13 19:58:24.060124 ignition[1000]: INFO : Ignition finished successfully Feb 13 19:58:24.057267 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:58:24.057370 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:58:24.061061 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:58:24.061144 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:58:24.062697 systemd[1]: Stopped target network.target - Network. Feb 13 19:58:24.064256 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:58:24.064362 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Feb 13 19:58:24.065530 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:58:24.065571 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:58:24.066784 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:58:24.066821 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:58:24.068155 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:58:24.068194 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:58:24.069785 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:58:24.070999 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:58:24.073117 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:58:24.073705 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:58:24.073784 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:58:24.078891 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:58:24.078990 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:58:24.081548 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:58:24.081642 systemd-networkd[765]: eth0: DHCPv6 lease lost Feb 13 19:58:24.081803 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:58:24.083702 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:58:24.083803 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:58:24.085461 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:58:24.085515 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:58:24.097721 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:58:24.098447 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:58:24.098504 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:58:24.099940 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:58:24.099980 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:58:24.101305 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:58:24.101347 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:58:24.102976 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:58:24.112033 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:58:24.112138 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:58:24.116182 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:58:24.116316 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:58:24.118154 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:58:24.118194 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:58:24.119407 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:58:24.119435 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:58:24.120695 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:58:24.120743 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 19:58:24.122916 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:58:24.122965 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:58:24.125078 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:58:24.125123 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:58:24.141803 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:58:24.142615 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:58:24.142671 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:58:24.144305 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:58:24.144345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:58:24.145968 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:58:24.146066 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:58:24.147267 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:58:24.147347 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:58:24.149079 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:58:24.149883 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:58:24.149945 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:58:24.151835 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:58:24.161098 systemd[1]: Switching root. Feb 13 19:58:24.187336 systemd-journald[238]: Journal stopped Feb 13 19:58:24.835677 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Feb 13 19:58:24.835731 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:58:24.835744 kernel: SELinux: policy capability open_perms=1 Feb 13 19:58:24.835754 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:58:24.835769 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:58:24.835779 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:58:24.835789 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:58:24.835803 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:58:24.835813 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:58:24.835829 kernel: audit: type=1403 audit(1739476704.319:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:58:24.835841 systemd[1]: Successfully loaded SELinux policy in 29.568ms. Feb 13 19:58:24.835869 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.518ms. Feb 13 19:58:24.835882 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:58:24.835893 systemd[1]: Detected virtualization kvm. Feb 13 19:58:24.835904 systemd[1]: Detected architecture arm64. Feb 13 19:58:24.835914 systemd[1]: Detected first boot. Feb 13 19:58:24.835927 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:58:24.835938 zram_generator::config[1044]: No configuration found. Feb 13 19:58:24.835949 systemd[1]: Populated /etc with preset unit settings. 
Feb 13 19:58:24.835959 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:58:24.835970 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:58:24.835980 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:58:24.835994 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:58:24.836005 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:58:24.836017 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:58:24.836027 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:58:24.836040 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:58:24.836051 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:58:24.836061 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:58:24.836072 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:58:24.836084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:58:24.836094 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:58:24.836105 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:58:24.836118 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:58:24.836129 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:58:24.836140 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:58:24.836150 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:58:24.836161 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:58:24.836171 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:58:24.836182 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:58:24.836193 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:58:24.836205 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:58:24.836215 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:58:24.836226 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:58:24.836237 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:58:24.836247 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:58:24.836257 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:58:24.836268 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:58:24.836278 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:58:24.836292 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:58:24.836304 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:58:24.836316 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:58:24.836326 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 19:58:24.836337 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:58:24.836347 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:58:24.836357 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:58:24.836368 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:58:24.836378 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:58:24.836389 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:58:24.836401 systemd[1]: Reached target machines.target - Containers. Feb 13 19:58:24.836411 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:58:24.836423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:58:24.836433 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:58:24.836444 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:58:24.836459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:58:24.836470 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:58:24.836481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:58:24.836493 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:58:24.836503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:58:24.836515 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:58:24.836525 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:58:24.836535 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:58:24.836547 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:58:24.836557 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:58:24.836568 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:58:24.836578 kernel: fuse: init (API version 7.39) Feb 13 19:58:24.836600 kernel: loop: module loaded Feb 13 19:58:24.836613 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:58:24.836624 kernel: ACPI: bus type drm_connector registered Feb 13 19:58:24.836636 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:58:24.836647 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:58:24.836657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:58:24.836668 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:58:24.836679 systemd[1]: Stopped verity-setup.service. Feb 13 19:58:24.836689 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:58:24.836717 systemd-journald[1111]: Collecting audit messages is disabled. Feb 13 19:58:24.836738 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:58:24.836749 systemd[1]: Mounted media.mount - External Media Directory. 
Feb 13 19:58:24.836759 systemd-journald[1111]: Journal started Feb 13 19:58:24.836781 systemd-journald[1111]: Runtime Journal (/run/log/journal/4199d94d21e240558108f82529eb643a) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:58:24.670296 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:58:24.685044 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:58:24.685376 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:58:24.839606 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:58:24.839818 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:58:24.840699 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:58:24.841609 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:58:24.843642 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:58:24.844778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:58:24.845916 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:58:24.846047 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:58:24.847154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:58:24.847297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:58:24.848403 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:58:24.848527 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:58:24.849658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:58:24.850796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:58:24.851910 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:58:24.853619 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:58:24.854710 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:58:24.854833 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:58:24.855876 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:58:24.856924 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:58:24.858124 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:58:24.869267 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:58:24.874669 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:58:24.876378 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:58:24.877214 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:58:24.877240 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:58:24.878889 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:58:24.880723 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:58:24.882524 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:58:24.883428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 19:58:24.885760 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:58:24.890756 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:58:24.891922 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:58:24.892823 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:58:24.894279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:58:24.897755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:58:24.902635 systemd-journald[1111]: Time spent on flushing to /var/log/journal/4199d94d21e240558108f82529eb643a is 27.057ms for 852 entries. Feb 13 19:58:24.902635 systemd-journald[1111]: System Journal (/var/log/journal/4199d94d21e240558108f82529eb643a) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:58:24.942134 systemd-journald[1111]: Received client request to flush runtime journal. Feb 13 19:58:24.942208 kernel: loop0: detected capacity change from 0 to 189592 Feb 13 19:58:24.902823 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:58:24.910776 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:58:24.913057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:58:24.914250 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:58:24.915275 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:58:24.916508 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:58:24.917871 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:58:24.923462 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:58:24.932813 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:58:24.937222 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:58:24.939890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:58:24.943972 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:58:24.947613 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:58:24.954894 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:58:24.958369 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:58:24.960428 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:58:24.962559 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:58:24.971625 kernel: loop1: detected capacity change from 0 to 114328 Feb 13 19:58:24.971816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:58:24.990265 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Feb 13 19:58:24.990285 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Feb 13 19:58:24.994228 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
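[Editor's note, illustrative only: the flush above reports 852 entries written to /var/log/journal in 27.057 ms, together with the runtime and persistent journal size limits. A small sketch, assuming journalctl is on PATH, that reproduces the disk-usage figure and the back-of-envelope flush rate; it is not part of the boot flow.]

```python
#!/usr/bin/env python3
"""Sketch: query journald disk usage and estimate the flush rate reported above."""
import subprocess

# --disk-usage prints a one-line summary of archived plus active journal size.
out = subprocess.run(["journalctl", "--disk-usage"], capture_output=True, text=True)
print(out.stdout.strip() or out.stderr.strip())

# Back-of-envelope from the figures logged above: 852 entries flushed in 27.057 ms.
entries, ms = 852, 27.057
print(f"~{entries / ms:.0f} entries per millisecond during the flush")
```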
Feb 13 19:58:25.009605 kernel: loop2: detected capacity change from 0 to 114432 Feb 13 19:58:25.040734 kernel: loop3: detected capacity change from 0 to 189592 Feb 13 19:58:25.046611 kernel: loop4: detected capacity change from 0 to 114328 Feb 13 19:58:25.050785 kernel: loop5: detected capacity change from 0 to 114432 Feb 13 19:58:25.058941 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:58:25.059576 (sd-merge)[1179]: Merged extensions into '/usr'. Feb 13 19:58:25.064489 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:58:25.064503 systemd[1]: Reloading... Feb 13 19:58:25.118056 zram_generator::config[1202]: No configuration found. Feb 13 19:58:25.162408 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:58:25.219800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:58:25.255025 systemd[1]: Reloading finished in 190 ms. Feb 13 19:58:25.285841 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:58:25.287037 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:58:25.304751 systemd[1]: Starting ensure-sysext.service... Feb 13 19:58:25.306449 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:58:25.317495 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:58:25.317510 systemd[1]: Reloading... Feb 13 19:58:25.324332 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:58:25.324916 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:58:25.325662 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:58:25.325986 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Feb 13 19:58:25.326116 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Feb 13 19:58:25.328357 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:58:25.328672 systemd-tmpfiles[1240]: Skipping /boot Feb 13 19:58:25.336786 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:58:25.336901 systemd-tmpfiles[1240]: Skipping /boot Feb 13 19:58:25.362617 zram_generator::config[1270]: No configuration found. Feb 13 19:58:25.435822 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:58:25.471504 systemd[1]: Reloading finished in 153 ms. Feb 13 19:58:25.485682 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:58:25.499054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:58:25.508299 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:58:25.510339 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
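[Editor's note, illustrative only: sd-merge above combines the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extensions into /usr, which is what triggers the systemd reload that follows. A hedged sketch for inspecting the staged sysext images, assuming the Flatcar layout seen earlier (/etc/extensions linking into /opt/extensions) and that the systemd-sysext tool is available.]

```python
#!/usr/bin/env python3
"""Sketch: list the sysext images sd-merge reports combining into /usr."""
import os
import subprocess

# Extension links/images staged for merging (the kubernetes link was written by Ignition).
for d in ("/etc/extensions", "/opt/extensions"):
    if os.path.isdir(d):
        print(d, "->", sorted(os.listdir(d)))

# `systemd-sysext status` shows which hierarchies currently have extensions merged.
subprocess.run(["systemd-sysext", "status"], check=False)
```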
Feb 13 19:58:25.511293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:58:25.512383 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:58:25.516729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:58:25.520160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:58:25.521313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:58:25.523421 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:58:25.526759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:58:25.531804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:58:25.535814 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:58:25.538062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:58:25.539408 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:58:25.541010 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:58:25.542703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:58:25.546314 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:58:25.546452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:58:25.558572 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:58:25.562736 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Feb 13 19:58:25.563683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:58:25.574267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:58:25.576229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:58:25.579616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:58:25.580516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:58:25.583026 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:58:25.588984 augenrules[1335]: No rules Feb 13 19:58:25.588875 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:58:25.590823 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:58:25.592321 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:58:25.593750 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:58:25.593892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:58:25.595306 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:58:25.599545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:58:25.599714 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:58:25.601157 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:58:25.601272 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 19:58:25.602646 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:58:25.605668 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:58:25.617606 systemd[1]: Finished ensure-sysext.service. Feb 13 19:58:25.624731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:58:25.633525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:58:25.637755 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:58:25.640666 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:58:25.642838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:58:25.646661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:58:25.650607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1365) Feb 13 19:58:25.652772 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:58:25.656615 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:58:25.659648 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:58:25.659960 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:58:25.661105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:58:25.661242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:58:25.662764 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:58:25.662918 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:58:25.665037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:58:25.665180 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:58:25.668279 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:58:25.671650 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:58:25.675832 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:58:25.681408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:58:25.681467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:58:25.694113 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:58:25.699775 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:58:25.738595 systemd-resolved[1316]: Positive Trust Anchors: Feb 13 19:58:25.738609 systemd-resolved[1316]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:58:25.738642 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:58:25.739987 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:58:25.745516 systemd-resolved[1316]: Defaulting to hostname 'linux'. Feb 13 19:58:25.760889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:58:25.761838 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:58:25.762899 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:58:25.764290 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:58:25.765249 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:58:25.769960 systemd-networkd[1378]: lo: Link UP Feb 13 19:58:25.769969 systemd-networkd[1378]: lo: Gained carrier Feb 13 19:58:25.770683 systemd-networkd[1378]: Enumeration completed Feb 13 19:58:25.770770 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:58:25.771447 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:58:25.771460 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:58:25.771637 systemd[1]: Reached target network.target - Network. Feb 13 19:58:25.774717 systemd-networkd[1378]: eth0: Link UP Feb 13 19:58:25.774729 systemd-networkd[1378]: eth0: Gained carrier Feb 13 19:58:25.774742 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:58:25.783737 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:58:25.786620 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:58:25.789400 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:58:25.790622 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:58:25.791686 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Feb 13 19:58:25.792285 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:58:25.792335 systemd-timesyncd[1380]: Initial clock synchronization to Thu 2025-02-13 19:58:26.055856 UTC. Feb 13 19:58:25.812422 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:58:25.813433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:58:25.845065 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:58:25.846175 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 19:58:25.847021 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:58:25.847862 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:58:25.848769 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:58:25.849804 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:58:25.850668 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:58:25.851534 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:58:25.852561 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:58:25.852602 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:58:25.853230 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:58:25.854568 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:58:25.856571 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:58:25.864480 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:58:25.866401 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:58:25.867673 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:58:25.868513 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:58:25.869281 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:58:25.869998 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:58:25.870027 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:58:25.870886 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:58:25.872551 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:58:25.874043 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:58:25.875743 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:58:25.877758 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:58:25.878818 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:58:25.882128 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:58:25.883750 jq[1411]: false Feb 13 19:58:25.884965 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:58:25.886716 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:58:25.891509 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:58:25.898749 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 19:58:25.901657 extend-filesystems[1412]: Found loop3 Feb 13 19:58:25.902454 extend-filesystems[1412]: Found loop4 Feb 13 19:58:25.902454 extend-filesystems[1412]: Found loop5 Feb 13 19:58:25.902454 extend-filesystems[1412]: Found vda Feb 13 19:58:25.902454 extend-filesystems[1412]: Found vda1 Feb 13 19:58:25.902454 extend-filesystems[1412]: Found vda2 Feb 13 19:58:25.902454 extend-filesystems[1412]: Found vda3 Feb 13 19:58:25.902454 extend-filesystems[1412]: Found usr Feb 13 19:58:25.902454 extend-filesystems[1412]: Found vda4 Feb 13 19:58:25.902454 extend-filesystems[1412]: Found vda6 Feb 13 19:58:25.902454 extend-filesystems[1412]: Found vda7 Feb 13 19:58:25.902454 extend-filesystems[1412]: Found vda9 Feb 13 19:58:25.902454 extend-filesystems[1412]: Checking size of /dev/vda9 Feb 13 19:58:25.902460 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:58:25.905189 dbus-daemon[1410]: [system] SELinux support is enabled Feb 13 19:58:25.902852 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:58:25.909792 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:58:25.915365 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:58:25.916686 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:58:25.917566 extend-filesystems[1412]: Resized partition /dev/vda9 Feb 13 19:58:25.920644 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:58:25.921594 jq[1430]: true Feb 13 19:58:25.922324 extend-filesystems[1433]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:58:25.924017 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:58:25.924178 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:58:25.924425 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:58:25.924553 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:58:25.928964 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:58:25.929113 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:58:25.931651 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:58:25.936885 update_engine[1424]: I20250213 19:58:25.936630 1424 main.cc:92] Flatcar Update Engine starting Feb 13 19:58:25.944226 update_engine[1424]: I20250213 19:58:25.944176 1424 update_check_scheduler.cc:74] Next update check in 3m34s Feb 13 19:58:25.944491 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:58:25.944535 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:58:25.945628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1347) Feb 13 19:58:25.947735 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
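[Editor's note: a quick arithmetic check of the resize messages above, where /dev/vda9 is grown online by resize2fs from 553472 to 1864699 blocks of 4 KiB while the root filesystem stays mounted.]

```python
#!/usr/bin/env python3
"""Worked check of the resize figures logged above for /dev/vda9."""
BLOCK = 4096  # "(4k) blocks" per the resize2fs output

for label, blocks in (("before", 553472), ("after", 1864699)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB ({blocks} blocks)")
```

That is roughly 2.11 GiB grown to about 7.11 GiB, consistent with the extend-filesystems report that the filesystem is now 1864699 (4k) blocks long.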
Feb 13 19:58:25.947764 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:58:25.950641 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:58:25.950834 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:58:25.953412 jq[1436]: true Feb 13 19:58:25.953624 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:58:25.957593 tar[1435]: linux-arm64/helm Feb 13 19:58:25.966606 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:58:25.985084 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:58:25.989927 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:58:25.989927 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:58:25.989927 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:58:25.999504 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Feb 13 19:58:25.991678 systemd-logind[1419]: New seat seat0. Feb 13 19:58:25.993414 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:58:25.993579 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:58:25.999630 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:58:26.034410 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:58:26.039428 bash[1469]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:58:26.041017 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:58:26.042973 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:58:26.162610 containerd[1437]: time="2025-02-13T19:58:26.162503144Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:58:26.187938 containerd[1437]: time="2025-02-13T19:58:26.187852189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:26.189633 containerd[1437]: time="2025-02-13T19:58:26.189385841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:26.189633 containerd[1437]: time="2025-02-13T19:58:26.189420920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:58:26.189633 containerd[1437]: time="2025-02-13T19:58:26.189437199Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:58:26.189633 containerd[1437]: time="2025-02-13T19:58:26.189572806Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:58:26.189633 containerd[1437]: time="2025-02-13T19:58:26.189589912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:26.189826 containerd[1437]: time="2025-02-13T19:58:26.189797412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:26.189856 containerd[1437]: time="2025-02-13T19:58:26.189825756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:26.190132 containerd[1437]: time="2025-02-13T19:58:26.190105481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:26.190159 containerd[1437]: time="2025-02-13T19:58:26.190131801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:26.190159 containerd[1437]: time="2025-02-13T19:58:26.190145560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:26.190159 containerd[1437]: time="2025-02-13T19:58:26.190156426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:26.190308 containerd[1437]: time="2025-02-13T19:58:26.190287116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:26.190656 containerd[1437]: time="2025-02-13T19:58:26.190564568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:26.190826 containerd[1437]: time="2025-02-13T19:58:26.190801239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:26.190861 containerd[1437]: time="2025-02-13T19:58:26.190825617Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:58:26.190986 containerd[1437]: time="2025-02-13T19:58:26.190907964Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:58:26.191049 containerd[1437]: time="2025-02-13T19:58:26.191032084Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:58:26.194814 containerd[1437]: time="2025-02-13T19:58:26.194738291Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:58:26.194814 containerd[1437]: time="2025-02-13T19:58:26.194798533Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:58:26.194869 containerd[1437]: time="2025-02-13T19:58:26.194815928Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:58:26.194869 containerd[1437]: time="2025-02-13T19:58:26.194832125Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:58:26.194869 containerd[1437]: time="2025-02-13T19:58:26.194845388Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:58:26.194993 containerd[1437]: time="2025-02-13T19:58:26.194971492Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195318028Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195460700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195479087Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195493507Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195508877Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195523091Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195536106Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195550485Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195565194Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195579821Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:58:26.195642 containerd[1437]: time="2025-02-13T19:58:26.195592175Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195673613Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195695677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195709601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195721873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195735095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195747325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195760877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195774843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195787734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195800295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195815045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195828019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195840043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195852149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.195907 containerd[1437]: time="2025-02-13T19:58:26.195872436Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.195894376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.195906648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.195919250Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.196036181Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.196053452Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.196064938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.196077251Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.196086589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.196098282Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.196118445Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:58:26.196184 containerd[1437]: time="2025-02-13T19:58:26.196132328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:58:26.196530 containerd[1437]: time="2025-02-13T19:58:26.196466221Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:58:26.196530 containerd[1437]: time="2025-02-13T19:58:26.196529025Z" level=info msg="Connect containerd service" Feb 13 19:58:26.196704 containerd[1437]: time="2025-02-13T19:58:26.196647113Z" level=info msg="using legacy CRI server" Feb 13 19:58:26.196704 containerd[1437]: time="2025-02-13T19:58:26.196656244Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:58:26.196742 containerd[1437]: time="2025-02-13T19:58:26.196728179Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:58:26.198692 containerd[1437]: time="2025-02-13T19:58:26.198569899Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:58:26.199090 
containerd[1437]: time="2025-02-13T19:58:26.199057538Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:58:26.200771 containerd[1437]: time="2025-02-13T19:58:26.199111499Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:58:26.200771 containerd[1437]: time="2025-02-13T19:58:26.199070801Z" level=info msg="Start subscribing containerd event" Feb 13 19:58:26.200771 containerd[1437]: time="2025-02-13T19:58:26.199169510Z" level=info msg="Start recovering state" Feb 13 19:58:26.200771 containerd[1437]: time="2025-02-13T19:58:26.199235950Z" level=info msg="Start event monitor" Feb 13 19:58:26.200771 containerd[1437]: time="2025-02-13T19:58:26.199247891Z" level=info msg="Start snapshots syncer" Feb 13 19:58:26.200771 containerd[1437]: time="2025-02-13T19:58:26.199256733Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:58:26.200771 containerd[1437]: time="2025-02-13T19:58:26.199264749Z" level=info msg="Start streaming server" Feb 13 19:58:26.200771 containerd[1437]: time="2025-02-13T19:58:26.199384035Z" level=info msg="containerd successfully booted in 0.038960s" Feb 13 19:58:26.199652 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:58:26.314016 tar[1435]: linux-arm64/LICENSE Feb 13 19:58:26.314016 tar[1435]: linux-arm64/README.md Feb 13 19:58:26.326929 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:58:26.333329 sshd_keygen[1429]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:58:26.352324 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:58:26.365939 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:58:26.371263 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:58:26.371484 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:58:26.374885 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:58:26.384589 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:58:26.389526 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:58:26.391909 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:58:26.392931 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:58:27.075718 systemd-networkd[1378]: eth0: Gained IPv6LL Feb 13 19:58:27.083349 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:58:27.084814 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:58:27.095869 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:58:27.097970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:58:27.099762 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:58:27.113477 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:58:27.113701 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:58:27.115219 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:58:27.123363 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:58:27.598976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:27.600213 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 19:58:27.601772 systemd[1]: Startup finished in 565ms (kernel) + 4.618s (initrd) + 3.313s (userspace) = 8.497s. Feb 13 19:58:27.602746 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:58:28.026767 kubelet[1524]: E0213 19:58:28.026616 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:58:28.028985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:58:28.029133 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:58:32.293585 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:58:32.294759 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:38038.service - OpenSSH per-connection server daemon (10.0.0.1:38038). Feb 13 19:58:32.348364 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 38038 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:32.349935 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:32.360493 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:58:32.370833 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:58:32.372201 systemd-logind[1419]: New session 1 of user core. Feb 13 19:58:32.378911 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:58:32.380914 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:58:32.388108 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:58:32.460513 systemd[1541]: Queued start job for default target default.target. Feb 13 19:58:32.472529 systemd[1541]: Created slice app.slice - User Application Slice. Feb 13 19:58:32.472570 systemd[1541]: Reached target paths.target - Paths. Feb 13 19:58:32.472581 systemd[1541]: Reached target timers.target - Timers. Feb 13 19:58:32.473722 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:58:32.482843 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:58:32.482893 systemd[1541]: Reached target sockets.target - Sockets. Feb 13 19:58:32.482905 systemd[1541]: Reached target basic.target - Basic System. Feb 13 19:58:32.482941 systemd[1541]: Reached target default.target - Main User Target. Feb 13 19:58:32.482965 systemd[1541]: Startup finished in 89ms. Feb 13 19:58:32.483190 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:58:32.484395 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:58:32.546721 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:52362.service - OpenSSH per-connection server daemon (10.0.0.1:52362). Feb 13 19:58:32.580611 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 52362 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:32.581844 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:32.585957 systemd-logind[1419]: New session 2 of user core. Feb 13 19:58:32.597740 systemd[1]: Started session-2.scope - Session 2 of User core. 
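The "Startup finished" entry above reports per-stage durations and a total. A quick sketch, with that line pasted in verbatim, that re-derives the total from the stages; the displayed components are rounded, so the sum can land a millisecond off the reported figure:

```python
import re

line = ("Startup finished in 565ms (kernel) + 4.618s (initrd) "
        "+ 3.313s (userspace) = 8.497s.")

def to_seconds(value: str) -> float:
    """Convert a systemd duration token like '565ms' or '4.618s' to seconds."""
    if value.endswith("ms"):
        return float(value[:-2]) / 1000.0
    return float(value.rstrip("s"))

parts = dict(re.findall(r"([\d.]+m?s) \((\w+)\)", line))   # {'565ms': 'kernel', ...}
total = re.search(r"= ([\d.]+s)", line).group(1)

summed = sum(to_seconds(v) for v in parts)
print({stage: to_seconds(v) for v, stage in parts.items()})
print(f"sum of stages: {summed:.3f}s, reported total: {to_seconds(total):.3f}s")
# Components are rounded for display, so the sum (8.496s) differs from the
# reported total (8.497s) by a millisecond.
```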
Feb 13 19:58:32.648928 sshd[1552]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:32.663865 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:52362.service: Deactivated successfully. Feb 13 19:58:32.665186 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:58:32.667678 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:58:32.668839 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:52372.service - OpenSSH per-connection server daemon (10.0.0.1:52372). Feb 13 19:58:32.669982 systemd-logind[1419]: Removed session 2. Feb 13 19:58:32.702749 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 52372 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:32.703902 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:32.707656 systemd-logind[1419]: New session 3 of user core. Feb 13 19:58:32.720746 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:58:32.769652 sshd[1559]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:32.780815 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:52372.service: Deactivated successfully. Feb 13 19:58:32.782786 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:58:32.784144 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:58:32.793011 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:52384.service - OpenSSH per-connection server daemon (10.0.0.1:52384). Feb 13 19:58:32.793719 systemd-logind[1419]: Removed session 3. Feb 13 19:58:32.823818 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 52384 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:32.825552 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:32.829145 systemd-logind[1419]: New session 4 of user core. Feb 13 19:58:32.833716 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:58:32.885567 sshd[1566]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:32.896647 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:52384.service: Deactivated successfully. Feb 13 19:58:32.897904 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:58:32.898959 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:58:32.900009 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:52390.service - OpenSSH per-connection server daemon (10.0.0.1:52390). Feb 13 19:58:32.900553 systemd-logind[1419]: Removed session 4. Feb 13 19:58:32.934299 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 52390 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:32.935394 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:32.938491 systemd-logind[1419]: New session 5 of user core. Feb 13 19:58:32.948712 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:58:33.007537 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:58:33.007878 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:58:33.022323 sudo[1576]: pam_unix(sudo:session): session closed for user root Feb 13 19:58:33.023962 sshd[1573]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:33.031992 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:52390.service: Deactivated successfully. 
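Every "Accepted publickey for core ..." entry in this stretch carries the same source address and key fingerprint. A small sketch for extracting the user, source port, key type, and fingerprint from lines in this exact format (the regex is written against this log; other sshd versions may phrase the message differently):

```python
import re

entry = ("Accepted publickey for core from 10.0.0.1 port 52362 ssh2: "
         "RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk")

pattern = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)"
)

m = pattern.search(entry)
if m:
    print(m.groupdict())
    # {'user': 'core', 'addr': '10.0.0.1', 'port': '52362',
    #  'keytype': 'RSA', 'fingerprint': 'SHA256:JGaeIb...'}
```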
Feb 13 19:58:33.034881 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:58:33.036896 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:58:33.037254 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:52396.service - OpenSSH per-connection server daemon (10.0.0.1:52396). Feb 13 19:58:33.038419 systemd-logind[1419]: Removed session 5. Feb 13 19:58:33.071955 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 52396 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:33.073101 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:33.076606 systemd-logind[1419]: New session 6 of user core. Feb 13 19:58:33.082724 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:58:33.133335 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:58:33.133637 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:58:33.136309 sudo[1585]: pam_unix(sudo:session): session closed for user root Feb 13 19:58:33.140463 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:58:33.140726 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:58:33.158806 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:58:33.159841 auditctl[1588]: No rules Feb 13 19:58:33.160592 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:58:33.162647 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:58:33.164054 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:58:33.185588 augenrules[1606]: No rules Feb 13 19:58:33.188673 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:58:33.189711 sudo[1584]: pam_unix(sudo:session): session closed for user root Feb 13 19:58:33.190984 sshd[1581]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:33.203793 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:52396.service: Deactivated successfully. Feb 13 19:58:33.204995 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:58:33.205498 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:58:33.207055 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:52402.service - OpenSSH per-connection server daemon (10.0.0.1:52402). Feb 13 19:58:33.207631 systemd-logind[1419]: Removed session 6. Feb 13 19:58:33.240959 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 52402 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:58:33.242080 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:33.245685 systemd-logind[1419]: New session 7 of user core. Feb 13 19:58:33.256724 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 19:58:33.307391 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:58:33.307679 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:58:33.607995 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:58:33.608093 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:58:33.860682 dockerd[1635]: time="2025-02-13T19:58:33.859921398Z" level=info msg="Starting up" Feb 13 19:58:33.997984 dockerd[1635]: time="2025-02-13T19:58:33.997941300Z" level=info msg="Loading containers: start." Feb 13 19:58:34.082632 kernel: Initializing XFRM netlink socket Feb 13 19:58:34.145568 systemd-networkd[1378]: docker0: Link UP Feb 13 19:58:34.165883 dockerd[1635]: time="2025-02-13T19:58:34.165795057Z" level=info msg="Loading containers: done." Feb 13 19:58:34.176194 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1266302903-merged.mount: Deactivated successfully. Feb 13 19:58:34.177967 dockerd[1635]: time="2025-02-13T19:58:34.177924596Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:58:34.178037 dockerd[1635]: time="2025-02-13T19:58:34.178012985Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:58:34.178152 dockerd[1635]: time="2025-02-13T19:58:34.178122856Z" level=info msg="Daemon has completed initialization" Feb 13 19:58:34.207740 dockerd[1635]: time="2025-02-13T19:58:34.207621726Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:58:34.207801 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:58:34.934135 containerd[1437]: time="2025-02-13T19:58:34.934097034Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:58:35.669248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1393348514.mount: Deactivated successfully. 
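dockerd reports "API listen on /run/docker.sock". A liveness-check sketch over that UNIX socket, assuming the Engine API's unversioned /_ping health endpoint (a healthy daemon answers 200 with body "OK"); it needs permission to open the socket:

```python
import socket

DOCKER_SOCK = "/run/docker.sock"   # "API listen on /run/docker.sock" in the log

def docker_ping(path: str = DOCKER_SOCK, timeout: float = 2.0) -> str:
    """Send a plain HTTP GET /_ping over the daemon's UNIX socket and
    return the raw response text."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        # HTTP/1.0 so the server closes the connection after responding,
        # letting the recv loop terminate cleanly.
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        return b"".join(chunks).decode(errors="replace")
    finally:
        s.close()

if __name__ == "__main__":
    print(docker_ping())
```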
Feb 13 19:58:37.505886 containerd[1437]: time="2025-02-13T19:58:37.505826209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:37.506244 containerd[1437]: time="2025-02-13T19:58:37.506202602Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 19:58:37.507191 containerd[1437]: time="2025-02-13T19:58:37.507158883Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:37.509951 containerd[1437]: time="2025-02-13T19:58:37.509920072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:37.511065 containerd[1437]: time="2025-02-13T19:58:37.511029989Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.576890359s" Feb 13 19:58:37.511102 containerd[1437]: time="2025-02-13T19:58:37.511070695Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:58:37.511744 containerd[1437]: time="2025-02-13T19:58:37.511715913Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:58:38.279543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:58:38.289757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:58:38.380226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:38.383748 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:58:38.414761 kubelet[1845]: E0213 19:58:38.414713 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:58:38.417788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:58:38.418049 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
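The pull record above gives both the image size in bytes and the wall-clock pull time, so the effective transfer rate can be read off directly. A two-line check using the numbers as logged:

```python
# Values copied from the kube-apiserver pull record above.
size_bytes = 25_617_175          # size "25617175"
duration_s = 2.576890359         # "in 2.576890359s"

rate = size_bytes / duration_s
print(f"{rate / 1024 / 1024:.2f} MiB/s")   # ~9.48 MiB/s effective pull rate
```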
Feb 13 19:58:39.409527 containerd[1437]: time="2025-02-13T19:58:39.409477407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:39.410428 containerd[1437]: time="2025-02-13T19:58:39.410167311Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 19:58:39.411170 containerd[1437]: time="2025-02-13T19:58:39.411132524Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:39.414355 containerd[1437]: time="2025-02-13T19:58:39.414325805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:39.415950 containerd[1437]: time="2025-02-13T19:58:39.415921057Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.904171308s" Feb 13 19:58:39.416057 containerd[1437]: time="2025-02-13T19:58:39.416040346Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:58:39.416514 containerd[1437]: time="2025-02-13T19:58:39.416475137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:58:41.095515 containerd[1437]: time="2025-02-13T19:58:41.095459283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:41.095926 containerd[1437]: time="2025-02-13T19:58:41.095896418Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 19:58:41.096946 containerd[1437]: time="2025-02-13T19:58:41.096917579Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:41.099659 containerd[1437]: time="2025-02-13T19:58:41.099611948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:41.100874 containerd[1437]: time="2025-02-13T19:58:41.100842918Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.684236798s" Feb 13 19:58:41.100925 containerd[1437]: time="2025-02-13T19:58:41.100876868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:58:41.101852 
containerd[1437]: time="2025-02-13T19:58:41.101827396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:58:42.078757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3397651133.mount: Deactivated successfully. Feb 13 19:58:42.596868 containerd[1437]: time="2025-02-13T19:58:42.596694575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:42.597578 containerd[1437]: time="2025-02-13T19:58:42.597365014Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 19:58:42.598365 containerd[1437]: time="2025-02-13T19:58:42.598297468Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:42.600061 containerd[1437]: time="2025-02-13T19:58:42.600031549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:42.600872 containerd[1437]: time="2025-02-13T19:58:42.600748127Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.498887669s" Feb 13 19:58:42.600872 containerd[1437]: time="2025-02-13T19:58:42.600781295Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:58:42.601349 containerd[1437]: time="2025-02-13T19:58:42.601199476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:58:43.381307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197202896.mount: Deactivated successfully. 
Feb 13 19:58:44.250276 containerd[1437]: time="2025-02-13T19:58:44.250217696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:44.251024 containerd[1437]: time="2025-02-13T19:58:44.250977553Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:58:44.251830 containerd[1437]: time="2025-02-13T19:58:44.251797869Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:44.254967 containerd[1437]: time="2025-02-13T19:58:44.254916974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:44.257811 containerd[1437]: time="2025-02-13T19:58:44.257720421Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.656494573s" Feb 13 19:58:44.257811 containerd[1437]: time="2025-02-13T19:58:44.257769045Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:58:44.258230 containerd[1437]: time="2025-02-13T19:58:44.258156717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:58:44.932649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185332819.mount: Deactivated successfully. 
Feb 13 19:58:44.937278 containerd[1437]: time="2025-02-13T19:58:44.937239576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:44.937918 containerd[1437]: time="2025-02-13T19:58:44.937803972Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 19:58:44.938874 containerd[1437]: time="2025-02-13T19:58:44.938840972Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:44.941174 containerd[1437]: time="2025-02-13T19:58:44.941116130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:44.941960 containerd[1437]: time="2025-02-13T19:58:44.941883650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 683.689061ms" Feb 13 19:58:44.941960 containerd[1437]: time="2025-02-13T19:58:44.941915304Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:58:44.942581 containerd[1437]: time="2025-02-13T19:58:44.942397937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:58:45.621671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288229963.mount: Deactivated successfully. Feb 13 19:58:48.322182 containerd[1437]: time="2025-02-13T19:58:48.322136686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:48.323140 containerd[1437]: time="2025-02-13T19:58:48.322882626Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 19:58:48.323881 containerd[1437]: time="2025-02-13T19:58:48.323812407Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:48.326968 containerd[1437]: time="2025-02-13T19:58:48.326918181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:58:48.328299 containerd[1437]: time="2025-02-13T19:58:48.328265490Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.385831538s" Feb 13 19:58:48.328348 containerd[1437]: time="2025-02-13T19:58:48.328302955Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 19:58:48.666631 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
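Between image pulls, systemd keeps rescheduling kubelet.service: the config file it needs still does not exist, so each start fails and the restart counter climbs. The failure and scheduled-restart timestamps logged so far are roughly ten seconds apart, which the sketch below confirms (the unit's actual RestartSec is not shown in this excerpt):

```python
from datetime import datetime

# First kubelet failure and the two scheduled restarts, as timestamped above.
events = ["19:58:28.029133", "19:58:38.279543", "19:58:48.666631"]

times = [datetime.strptime(t, "%H:%M:%S.%f") for t in events]
deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(deltas)   # ~10.3s apart: a restart delay of about 10s plus startup time
```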
Feb 13 19:58:48.676829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:58:48.771578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:48.774948 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:58:48.808138 kubelet[2003]: E0213 19:58:48.808055 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:58:48.810566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:58:48.810738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:58:54.555238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:54.565775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:58:54.583819 systemd[1]: Reloading requested from client PID 2019 ('systemctl') (unit session-7.scope)... Feb 13 19:58:54.583833 systemd[1]: Reloading... Feb 13 19:58:54.651608 zram_generator::config[2064]: No configuration found. Feb 13 19:58:54.752015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:58:54.804020 systemd[1]: Reloading finished in 219 ms. Feb 13 19:58:54.843117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:58:54.846244 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:58:54.846422 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:54.847789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:58:54.937974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:54.941780 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:58:54.977185 kubelet[2105]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:58:54.977185 kubelet[2105]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:58:54.977185 kubelet[2105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:58:54.977478 kubelet[2105]: I0213 19:58:54.977363 2105 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:58:55.630203 kubelet[2105]: I0213 19:58:55.630153 2105 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:58:55.630203 kubelet[2105]: I0213 19:58:55.630187 2105 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:58:55.630437 kubelet[2105]: I0213 19:58:55.630414 2105 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:58:55.662175 kubelet[2105]: E0213 19:58:55.662135 2105 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:58:55.664303 kubelet[2105]: I0213 19:58:55.664276 2105 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:58:55.672729 kubelet[2105]: E0213 19:58:55.672704 2105 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:58:55.672729 kubelet[2105]: I0213 19:58:55.672730 2105 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:58:55.676016 kubelet[2105]: I0213 19:58:55.675985 2105 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:58:55.676827 kubelet[2105]: I0213 19:58:55.676804 2105 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:58:55.676963 kubelet[2105]: I0213 19:58:55.676932 2105 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:58:55.677118 kubelet[2105]: I0213 19:58:55.676959 2105 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:58:55.677270 kubelet[2105]: I0213 19:58:55.677251 2105 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:58:55.677270 kubelet[2105]: I0213 19:58:55.677264 2105 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:58:55.677452 kubelet[2105]: I0213 19:58:55.677435 2105 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:58:55.679024 kubelet[2105]: I0213 19:58:55.678995 2105 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:58:55.679024 kubelet[2105]: I0213 19:58:55.679020 2105 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:58:55.679118 kubelet[2105]: I0213 19:58:55.679101 2105 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:58:55.679118 kubelet[2105]: I0213 19:58:55.679115 2105 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:58:55.680885 kubelet[2105]: I0213 19:58:55.680678 2105 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:58:55.682679 kubelet[2105]: I0213 19:58:55.682606 2105 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:58:55.683149 kubelet[2105]: W0213 19:58:55.683026 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial 
tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:58:55.683149 kubelet[2105]: E0213 19:58:55.683078 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:58:55.686871 kubelet[2105]: W0213 19:58:55.683997 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:58:55.686871 kubelet[2105]: E0213 19:58:55.684041 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:58:55.687192 kubelet[2105]: W0213 19:58:55.687162 2105 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:58:55.687949 kubelet[2105]: I0213 19:58:55.687924 2105 server.go:1269] "Started kubelet" Feb 13 19:58:55.689011 kubelet[2105]: I0213 19:58:55.688976 2105 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:58:55.689808 kubelet[2105]: I0213 19:58:55.689779 2105 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:58:55.689856 kubelet[2105]: I0213 19:58:55.689832 2105 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:58:55.690795 kubelet[2105]: I0213 19:58:55.690747 2105 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:58:55.690958 kubelet[2105]: I0213 19:58:55.690939 2105 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:58:55.695840 kubelet[2105]: I0213 19:58:55.693673 2105 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:58:55.698987 kubelet[2105]: I0213 19:58:55.698963 2105 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:58:55.699248 kubelet[2105]: E0213 19:58:55.699221 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:58:55.699322 kubelet[2105]: W0213 19:58:55.699281 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:58:55.699355 kubelet[2105]: E0213 19:58:55.699332 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:58:55.699814 kubelet[2105]: E0213 19:58:55.699782 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Feb 13 19:58:55.700022 kubelet[2105]: I0213 19:58:55.700001 2105 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:58:55.700107 kubelet[2105]: I0213 19:58:55.700088 2105 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:58:55.700438 kubelet[2105]: E0213 19:58:55.699318 2105 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dcdffdbbc5fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:58:55.687902714 +0000 UTC m=+0.743223777,LastTimestamp:2025-02-13 19:58:55.687902714 +0000 UTC m=+0.743223777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:58:55.701194 kubelet[2105]: E0213 19:58:55.701165 2105 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:58:55.701404 kubelet[2105]: I0213 19:58:55.701383 2105 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:58:55.701540 kubelet[2105]: I0213 19:58:55.701523 2105 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:58:55.701697 kubelet[2105]: I0213 19:58:55.701675 2105 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:58:55.706874 kubelet[2105]: I0213 19:58:55.706837 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:58:55.708159 kubelet[2105]: I0213 19:58:55.708132 2105 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:58:55.708268 kubelet[2105]: I0213 19:58:55.708257 2105 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:58:55.708333 kubelet[2105]: I0213 19:58:55.708325 2105 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:58:55.708573 kubelet[2105]: E0213 19:58:55.708546 2105 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:58:55.710128 kubelet[2105]: W0213 19:58:55.710089 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:58:55.710407 kubelet[2105]: E0213 19:58:55.710386 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:58:55.715200 kubelet[2105]: I0213 19:58:55.715177 2105 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:58:55.715329 kubelet[2105]: I0213 19:58:55.715309 2105 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:58:55.715442 kubelet[2105]: I0213 19:58:55.715432 2105 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:58:55.781778 kubelet[2105]: I0213 19:58:55.781748 2105 policy_none.go:49] "None policy: Start" Feb 13 19:58:55.782682 kubelet[2105]: I0213 19:58:55.782633 2105 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:58:55.782758 kubelet[2105]: I0213 19:58:55.782694 2105 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:58:55.788034 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:58:55.799286 kubelet[2105]: E0213 19:58:55.799261 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:58:55.802050 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:58:55.804407 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:58:55.809239 kubelet[2105]: E0213 19:58:55.809208 2105 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:58:55.812274 kubelet[2105]: I0213 19:58:55.812254 2105 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:58:55.812877 kubelet[2105]: I0213 19:58:55.812505 2105 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:58:55.812877 kubelet[2105]: I0213 19:58:55.812522 2105 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:58:55.812877 kubelet[2105]: I0213 19:58:55.812738 2105 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:58:55.813969 kubelet[2105]: E0213 19:58:55.813921 2105 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:58:55.901144 kubelet[2105]: E0213 19:58:55.901048 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Feb 13 19:58:55.914144 kubelet[2105]: I0213 19:58:55.914118 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:58:55.914539 kubelet[2105]: E0213 19:58:55.914505 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Feb 13 19:58:56.017386 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. Feb 13 19:58:56.032895 systemd[1]: Created slice kubepods-burstable-podf4757c7ce14a8b75c8414533e3ce81b1.slice - libcontainer container kubepods-burstable-podf4757c7ce14a8b75c8414533e3ce81b1.slice. Feb 13 19:58:56.044717 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. 
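All of the kubelet's list/watch and lease requests above fail with "dial tcp 10.0.0.137:6443: connect: connection refused", meaning nothing is listening on the API server port yet (the static pods are only now being created). A probe sketch that distinguishes that case from a timeout, using the endpoint taken from the log:

```python
import socket

API_SERVER = ("10.0.0.137", 6443)   # endpoint from the kubelet errors above

def probe(addr, timeout: float = 3.0) -> str:
    """Distinguish 'refused' (nothing listening yet) from 'timed out'
    (unreachable/filtered) for the given TCP endpoint."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "connection refused"   # matches the errors logged by the kubelet
    except socket.timeout:
        return "timed out"
    except OSError as exc:
        return f"error: {exc}"

print(probe(API_SERVER))
```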
Feb 13 19:58:56.102745 kubelet[2105]: I0213 19:58:56.102683 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4757c7ce14a8b75c8414533e3ce81b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f4757c7ce14a8b75c8414533e3ce81b1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:58:56.102745 kubelet[2105]: I0213 19:58:56.102743 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:58:56.103099 kubelet[2105]: I0213 19:58:56.102766 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:58:56.103099 kubelet[2105]: I0213 19:58:56.102784 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:58:56.103099 kubelet[2105]: I0213 19:58:56.102800 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4757c7ce14a8b75c8414533e3ce81b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f4757c7ce14a8b75c8414533e3ce81b1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:58:56.103099 kubelet[2105]: I0213 19:58:56.102816 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4757c7ce14a8b75c8414533e3ce81b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f4757c7ce14a8b75c8414533e3ce81b1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:58:56.103099 kubelet[2105]: I0213 19:58:56.102832 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:58:56.103201 kubelet[2105]: I0213 19:58:56.102845 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:58:56.103201 kubelet[2105]: I0213 19:58:56.102860 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:58:56.115835 kubelet[2105]: I0213 19:58:56.115803 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:58:56.116190 kubelet[2105]: E0213 19:58:56.116144 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Feb 13 19:58:56.301981 kubelet[2105]: E0213 19:58:56.301859 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Feb 13 19:58:56.331304 kubelet[2105]: E0213 19:58:56.331247 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:56.331961 containerd[1437]: time="2025-02-13T19:58:56.331809688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 19:58:56.343215 kubelet[2105]: E0213 19:58:56.342983 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:56.344242 containerd[1437]: time="2025-02-13T19:58:56.344125874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f4757c7ce14a8b75c8414533e3ce81b1,Namespace:kube-system,Attempt:0,}" Feb 13 19:58:56.346372 kubelet[2105]: E0213 19:58:56.346346 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:56.346734 containerd[1437]: time="2025-02-13T19:58:56.346679925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 19:58:56.517794 kubelet[2105]: I0213 19:58:56.517737 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:58:56.518127 kubelet[2105]: E0213 19:58:56.518067 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Feb 13 19:58:56.882430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885786402.mount: Deactivated successfully. 
Feb 13 19:58:56.885802 containerd[1437]: time="2025-02-13T19:58:56.885755041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:58:56.887680 containerd[1437]: time="2025-02-13T19:58:56.887644854Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:58:56.888415 containerd[1437]: time="2025-02-13T19:58:56.888351478Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:58:56.890533 containerd[1437]: time="2025-02-13T19:58:56.890502208Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:58:56.891188 containerd[1437]: time="2025-02-13T19:58:56.891134627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:58:56.891893 containerd[1437]: time="2025-02-13T19:58:56.891470068Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:58:56.892251 containerd[1437]: time="2025-02-13T19:58:56.892182816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:58:56.893997 containerd[1437]: time="2025-02-13T19:58:56.893962643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:58:56.897097 containerd[1437]: time="2025-02-13T19:58:56.897064944Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.182653ms" Feb 13 19:58:56.898997 containerd[1437]: time="2025-02-13T19:58:56.898796582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.579013ms" Feb 13 19:58:56.899615 containerd[1437]: time="2025-02-13T19:58:56.899309810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.570809ms" Feb 13 19:58:57.012802 kubelet[2105]: W0213 19:58:57.012708 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:58:57.012802 
kubelet[2105]: E0213 19:58:57.012759 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:58:57.028745 containerd[1437]: time="2025-02-13T19:58:57.028515720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:58:57.028745 containerd[1437]: time="2025-02-13T19:58:57.028561144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:58:57.028745 containerd[1437]: time="2025-02-13T19:58:57.028571830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:57.028745 containerd[1437]: time="2025-02-13T19:58:57.028658235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:57.029793 containerd[1437]: time="2025-02-13T19:58:57.029671647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:58:57.029793 containerd[1437]: time="2025-02-13T19:58:57.029720913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:58:57.029793 containerd[1437]: time="2025-02-13T19:58:57.029736441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:57.029901 containerd[1437]: time="2025-02-13T19:58:57.029808959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:57.031589 containerd[1437]: time="2025-02-13T19:58:57.031414521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:58:57.031589 containerd[1437]: time="2025-02-13T19:58:57.031463947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:58:57.031589 containerd[1437]: time="2025-02-13T19:58:57.031479075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:57.031850 containerd[1437]: time="2025-02-13T19:58:57.031787877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:57.052849 systemd[1]: Started cri-containerd-775473523a65b06b60cc94d1df8d9f1c230e5259c911f5458895cafae5cc5397.scope - libcontainer container 775473523a65b06b60cc94d1df8d9f1c230e5259c911f5458895cafae5cc5397. Feb 13 19:58:57.054166 systemd[1]: Started cri-containerd-7f1d3ca1477491b9bbbd3b69e195b72bc3198e3bcecdf98d012f04b14b3ac918.scope - libcontainer container 7f1d3ca1477491b9bbbd3b69e195b72bc3198e3bcecdf98d012f04b14b3ac918. Feb 13 19:58:57.057497 systemd[1]: Started cri-containerd-dc34d6f72cd4688ffca2c6cbf83ee011af8124cf7399babbad459e2cd1624add.scope - libcontainer container dc34d6f72cd4688ffca2c6cbf83ee011af8124cf7399babbad459e2cd1624add. 
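The ImageCreate/ImageUpdate and "Pulled image" entries a little earlier show containerd's CRI plugin fetching registry.k8s.io/pause:3.8 once per control-plane sandbox, each pull completing in roughly 550-565 ms. A short sketch of the equivalent pull through the containerd Go client; the socket path is an assumption (containerd's default), and CRI-managed images live in containerd's "k8s.io" namespace.

    // pull_pause.go - editorial sketch; roughly reproduces the pause:3.8 pulls that
    // containerd's CRI plugin performed for the three control-plane sandboxes above.
    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Assumption: containerd listening on its default socket path.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// Kubernetes-managed images live in the "k8s.io" containerd namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
    }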
Feb 13 19:58:57.085839 containerd[1437]: time="2025-02-13T19:58:57.085754837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f4757c7ce14a8b75c8414533e3ce81b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"775473523a65b06b60cc94d1df8d9f1c230e5259c911f5458895cafae5cc5397\"" Feb 13 19:58:57.087271 kubelet[2105]: E0213 19:58:57.087077 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:57.089127 containerd[1437]: time="2025-02-13T19:58:57.089090628Z" level=info msg="CreateContainer within sandbox \"775473523a65b06b60cc94d1df8d9f1c230e5259c911f5458895cafae5cc5397\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:58:57.089302 containerd[1437]: time="2025-02-13T19:58:57.089192922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f1d3ca1477491b9bbbd3b69e195b72bc3198e3bcecdf98d012f04b14b3ac918\"" Feb 13 19:58:57.089796 kubelet[2105]: E0213 19:58:57.089775 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:57.091289 containerd[1437]: time="2025-02-13T19:58:57.091251322Z" level=info msg="CreateContainer within sandbox \"7f1d3ca1477491b9bbbd3b69e195b72bc3198e3bcecdf98d012f04b14b3ac918\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:58:57.093027 containerd[1437]: time="2025-02-13T19:58:57.093000760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc34d6f72cd4688ffca2c6cbf83ee011af8124cf7399babbad459e2cd1624add\"" Feb 13 19:58:57.093742 kubelet[2105]: E0213 19:58:57.093721 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:57.095378 containerd[1437]: time="2025-02-13T19:58:57.095344069Z" level=info msg="CreateContainer within sandbox \"dc34d6f72cd4688ffca2c6cbf83ee011af8124cf7399babbad459e2cd1624add\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:58:57.103170 kubelet[2105]: E0213 19:58:57.103134 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s" Feb 13 19:58:57.106577 containerd[1437]: time="2025-02-13T19:58:57.106492400Z" level=info msg="CreateContainer within sandbox \"775473523a65b06b60cc94d1df8d9f1c230e5259c911f5458895cafae5cc5397\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"94f96c80cb609191c8fc3217279a3882e5f43440a698201f500d3cac49eceb1b\"" Feb 13 19:58:57.107108 containerd[1437]: time="2025-02-13T19:58:57.107077026Z" level=info msg="StartContainer for \"94f96c80cb609191c8fc3217279a3882e5f43440a698201f500d3cac49eceb1b\"" Feb 13 19:58:57.107897 containerd[1437]: time="2025-02-13T19:58:57.107868122Z" level=info msg="CreateContainer within sandbox \"7f1d3ca1477491b9bbbd3b69e195b72bc3198e3bcecdf98d012f04b14b3ac918\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef29286b1d38a8ede7d0bed444741f414c99d69c3e66501ef9f29ab59aae1b71\"" Feb 13 19:58:57.109243 containerd[1437]: time="2025-02-13T19:58:57.108243519Z" level=info msg="StartContainer for \"ef29286b1d38a8ede7d0bed444741f414c99d69c3e66501ef9f29ab59aae1b71\"" Feb 13 19:58:57.111355 containerd[1437]: time="2025-02-13T19:58:57.111321054Z" level=info msg="CreateContainer within sandbox \"dc34d6f72cd4688ffca2c6cbf83ee011af8124cf7399babbad459e2cd1624add\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee47ade3a61c35c56d89e54da2282acf72ca67d87b3638d72060fc28dc3844c6\"" Feb 13 19:58:57.111922 containerd[1437]: time="2025-02-13T19:58:57.111896315Z" level=info msg="StartContainer for \"ee47ade3a61c35c56d89e54da2282acf72ca67d87b3638d72060fc28dc3844c6\"" Feb 13 19:58:57.137819 systemd[1]: Started cri-containerd-94f96c80cb609191c8fc3217279a3882e5f43440a698201f500d3cac49eceb1b.scope - libcontainer container 94f96c80cb609191c8fc3217279a3882e5f43440a698201f500d3cac49eceb1b. Feb 13 19:58:57.138960 systemd[1]: Started cri-containerd-ef29286b1d38a8ede7d0bed444741f414c99d69c3e66501ef9f29ab59aae1b71.scope - libcontainer container ef29286b1d38a8ede7d0bed444741f414c99d69c3e66501ef9f29ab59aae1b71. Feb 13 19:58:57.142145 systemd[1]: Started cri-containerd-ee47ade3a61c35c56d89e54da2282acf72ca67d87b3638d72060fc28dc3844c6.scope - libcontainer container ee47ade3a61c35c56d89e54da2282acf72ca67d87b3638d72060fc28dc3844c6. Feb 13 19:58:57.144688 kubelet[2105]: W0213 19:58:57.144477 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:58:57.144688 kubelet[2105]: E0213 19:58:57.144648 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:58:57.150573 kubelet[2105]: W0213 19:58:57.150529 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:58:57.150573 kubelet[2105]: E0213 19:58:57.150590 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:58:57.218320 containerd[1437]: time="2025-02-13T19:58:57.218122739Z" level=info msg="StartContainer for \"ef29286b1d38a8ede7d0bed444741f414c99d69c3e66501ef9f29ab59aae1b71\" returns successfully" Feb 13 19:58:57.218320 containerd[1437]: time="2025-02-13T19:58:57.218143470Z" level=info msg="StartContainer for \"94f96c80cb609191c8fc3217279a3882e5f43440a698201f500d3cac49eceb1b\" returns successfully" Feb 13 19:58:57.218320 containerd[1437]: time="2025-02-13T19:58:57.218148593Z" level=info msg="StartContainer for \"ee47ade3a61c35c56d89e54da2282acf72ca67d87b3638d72060fc28dc3844c6\" returns 
successfully" Feb 13 19:58:57.299812 kubelet[2105]: W0213 19:58:57.299709 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:58:57.299812 kubelet[2105]: E0213 19:58:57.299781 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:58:57.319697 kubelet[2105]: I0213 19:58:57.319430 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:58:57.319848 kubelet[2105]: E0213 19:58:57.319814 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Feb 13 19:58:57.331264 kubelet[2105]: E0213 19:58:57.331174 2105 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dcdffdbbc5fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:58:55.687902714 +0000 UTC m=+0.743223777,LastTimestamp:2025-02-13 19:58:55.687902714 +0000 UTC m=+0.743223777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:58:57.719605 kubelet[2105]: E0213 19:58:57.718271 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:57.721677 kubelet[2105]: E0213 19:58:57.721608 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:57.725047 kubelet[2105]: E0213 19:58:57.725026 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:58.707247 kubelet[2105]: E0213 19:58:58.707189 2105 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:58:58.718496 kubelet[2105]: E0213 19:58:58.718462 2105 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 19:58:58.726101 kubelet[2105]: E0213 19:58:58.726064 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:58.726571 kubelet[2105]: E0213 19:58:58.726539 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:58.921370 kubelet[2105]: I0213 19:58:58.921325 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:58:58.926645 kubelet[2105]: I0213 19:58:58.926608 2105 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:58:59.682804 kubelet[2105]: I0213 19:58:59.681765 2105 apiserver.go:52] "Watching apiserver" Feb 13 19:58:59.699989 kubelet[2105]: I0213 19:58:59.699966 2105 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:59:00.388566 systemd[1]: Reloading requested from client PID 2384 ('systemctl') (unit session-7.scope)... Feb 13 19:59:00.388580 systemd[1]: Reloading... Feb 13 19:59:00.459625 zram_generator::config[2426]: No configuration found. Feb 13 19:59:00.597400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:59:00.658369 kubelet[2105]: E0213 19:59:00.657987 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:00.661220 systemd[1]: Reloading finished in 272 ms. Feb 13 19:59:00.694943 kubelet[2105]: I0213 19:59:00.694907 2105 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:59:00.695051 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:59:00.711938 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:59:00.712130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:59:00.712173 systemd[1]: kubelet.service: Consumed 1.075s CPU time, 118.6M memory peak, 0B memory swap peak. Feb 13 19:59:00.721877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:59:00.812181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:59:00.816292 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:59:00.852078 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:59:00.852078 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:59:00.852078 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:59:00.852405 kubelet[2465]: I0213 19:59:00.852132 2465 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:59:00.859055 kubelet[2465]: I0213 19:59:00.858841 2465 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:59:00.859055 kubelet[2465]: I0213 19:59:00.858866 2465 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:59:00.859172 kubelet[2465]: I0213 19:59:00.859113 2465 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:59:00.860422 kubelet[2465]: I0213 19:59:00.860405 2465 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:59:00.862569 kubelet[2465]: I0213 19:59:00.862422 2465 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:59:00.865449 kubelet[2465]: E0213 19:59:00.865398 2465 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:59:00.865449 kubelet[2465]: I0213 19:59:00.865450 2465 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:59:00.867410 kubelet[2465]: I0213 19:59:00.867381 2465 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:59:00.867525 kubelet[2465]: I0213 19:59:00.867502 2465 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:59:00.867659 kubelet[2465]: I0213 19:59:00.867627 2465 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:59:00.867799 kubelet[2465]: I0213 19:59:00.867653 2465 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:59:00.867875 kubelet[2465]: I0213 19:59:00.867805 2465 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:59:00.867875 kubelet[2465]: I0213 19:59:00.867814 2465 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:59:00.867875 kubelet[2465]: I0213 19:59:00.867841 2465 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:59:00.867974 kubelet[2465]: I0213 19:59:00.867959 2465 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:59:00.867974 kubelet[2465]: I0213 19:59:00.867972 2465 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:59:00.868043 kubelet[2465]: I0213 19:59:00.867990 2465 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:59:00.868043 kubelet[2465]: I0213 19:59:00.867999 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:59:00.868740 kubelet[2465]: I0213 19:59:00.868388 2465 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:59:00.868903 kubelet[2465]: I0213 19:59:00.868869 2465 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:59:00.869244 kubelet[2465]: I0213 19:59:00.869220 2465 server.go:1269] "Started kubelet" Feb 13 19:59:00.872589 kubelet[2465]: I0213 19:59:00.869844 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:59:00.872589 kubelet[2465]: I0213 19:59:00.870077 2465 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:59:00.872589 kubelet[2465]: I0213 19:59:00.870132 2465 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:59:00.872589 kubelet[2465]: I0213 19:59:00.870965 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:59:00.872589 kubelet[2465]: I0213 19:59:00.871349 2465 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:59:00.876110 kubelet[2465]: I0213 19:59:00.876086 2465 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:59:00.876266 kubelet[2465]: E0213 19:59:00.876243 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:59:00.876498 kubelet[2465]: I0213 19:59:00.876473 2465 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:59:00.876800 kubelet[2465]: I0213 19:59:00.876779 2465 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:59:00.877042 kubelet[2465]: I0213 19:59:00.877026 2465 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:59:00.881623 kubelet[2465]: I0213 19:59:00.880910 2465 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:59:00.881623 kubelet[2465]: I0213 19:59:00.881330 2465 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:59:00.888245 kubelet[2465]: I0213 19:59:00.888223 2465 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:59:00.893023 kubelet[2465]: I0213 19:59:00.892982 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:59:00.894502 kubelet[2465]: I0213 19:59:00.894425 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:59:00.894502 kubelet[2465]: I0213 19:59:00.894447 2465 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:59:00.894502 kubelet[2465]: I0213 19:59:00.894461 2465 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:59:00.894647 kubelet[2465]: E0213 19:59:00.894523 2465 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:59:00.923912 kubelet[2465]: I0213 19:59:00.923823 2465 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:59:00.923912 kubelet[2465]: I0213 19:59:00.923846 2465 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:59:00.923912 kubelet[2465]: I0213 19:59:00.923867 2465 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:59:00.924055 kubelet[2465]: I0213 19:59:00.924001 2465 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:59:00.924055 kubelet[2465]: I0213 19:59:00.924012 2465 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:59:00.924055 kubelet[2465]: I0213 19:59:00.924028 2465 policy_none.go:49] "None policy: Start" Feb 13 19:59:00.924739 kubelet[2465]: I0213 19:59:00.924666 2465 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:59:00.924739 kubelet[2465]: I0213 19:59:00.924693 2465 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:59:00.924839 kubelet[2465]: I0213 19:59:00.924822 2465 state_mem.go:75] "Updated machine memory state" Feb 13 19:59:00.928641 kubelet[2465]: I0213 19:59:00.928474 2465 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:59:00.928844 kubelet[2465]: I0213 19:59:00.928653 2465 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:59:00.928844 
kubelet[2465]: I0213 19:59:00.928666 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:59:00.928844 kubelet[2465]: I0213 19:59:00.928815 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:59:01.000879 kubelet[2465]: E0213 19:59:01.000842 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:59:01.033444 kubelet[2465]: I0213 19:59:01.033411 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:59:01.040131 kubelet[2465]: I0213 19:59:01.040109 2465 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 19:59:01.040759 kubelet[2465]: I0213 19:59:01.040286 2465 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:59:01.077423 kubelet[2465]: I0213 19:59:01.077362 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4757c7ce14a8b75c8414533e3ce81b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f4757c7ce14a8b75c8414533e3ce81b1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:59:01.177992 kubelet[2465]: I0213 19:59:01.177884 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4757c7ce14a8b75c8414533e3ce81b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f4757c7ce14a8b75c8414533e3ce81b1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:59:01.177992 kubelet[2465]: I0213 19:59:01.177920 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:59:01.177992 kubelet[2465]: I0213 19:59:01.177938 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:59:01.177992 kubelet[2465]: I0213 19:59:01.177960 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4757c7ce14a8b75c8414533e3ce81b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f4757c7ce14a8b75c8414533e3ce81b1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:59:01.177992 kubelet[2465]: I0213 19:59:01.177977 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:59:01.178173 kubelet[2465]: I0213 19:59:01.177992 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:59:01.178173 kubelet[2465]: I0213 19:59:01.178009 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:59:01.178173 kubelet[2465]: I0213 19:59:01.178051 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:59:01.301255 kubelet[2465]: E0213 19:59:01.301197 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:01.301377 kubelet[2465]: E0213 19:59:01.301283 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:01.301587 kubelet[2465]: E0213 19:59:01.301546 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:01.868745 kubelet[2465]: I0213 19:59:01.868697 2465 apiserver.go:52] "Watching apiserver" Feb 13 19:59:01.877469 kubelet[2465]: I0213 19:59:01.877436 2465 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:59:01.916124 kubelet[2465]: E0213 19:59:01.916084 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:01.916409 kubelet[2465]: E0213 19:59:01.916383 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:01.916690 kubelet[2465]: E0213 19:59:01.916671 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:01.967610 kubelet[2465]: I0213 19:59:01.967453 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9673658889999999 podStartE2EDuration="1.967365889s" podCreationTimestamp="2025-02-13 19:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:59:01.957467945 +0000 UTC m=+1.138231776" watchObservedRunningTime="2025-02-13 19:59:01.967365889 +0000 UTC m=+1.148129720" Feb 13 19:59:01.968159 kubelet[2465]: I0213 19:59:01.967950 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.967940845 podStartE2EDuration="1.967940845s" podCreationTimestamp="2025-02-13 19:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:59:01.966617451 +0000 UTC m=+1.147381282" watchObservedRunningTime="2025-02-13 19:59:01.967940845 +0000 UTC m=+1.148704676" Feb 13 19:59:01.989152 kubelet[2465]: I0213 19:59:01.989087 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.989070453 podStartE2EDuration="1.989070453s" podCreationTimestamp="2025-02-13 19:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:59:01.979010591 +0000 UTC m=+1.159774422" watchObservedRunningTime="2025-02-13 19:59:01.989070453 +0000 UTC m=+1.169834244" Feb 13 19:59:02.917859 kubelet[2465]: E0213 19:59:02.917714 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:05.146075 kubelet[2465]: I0213 19:59:05.146033 2465 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:59:05.146436 containerd[1437]: time="2025-02-13T19:59:05.146386455Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:59:05.146643 kubelet[2465]: I0213 19:59:05.146556 2465 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:59:05.378142 sudo[1617]: pam_unix(sudo:session): session closed for user root Feb 13 19:59:05.380516 sshd[1614]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:05.383855 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:52402.service: Deactivated successfully. Feb 13 19:59:05.386435 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:59:05.386671 systemd[1]: session-7.scope: Consumed 8.069s CPU time, 150.1M memory peak, 0B memory swap peak. Feb 13 19:59:05.387214 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:59:05.388090 systemd-logind[1419]: Removed session 7. 
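A few entries above, kuberuntime_manager and kubelet_network record the node's assigned PodCIDR (192.168.0.0/24) being pushed to the container runtime over CRI, after which the runtime waits for a CNI config to appear. A hedged sketch of that CRI call using the generated client from k8s.io/cri-api; the containerd socket path is an assumption, and in practice the kubelet issues this call itself, so the snippet is purely illustrative rather than something to run against a managed node.

    // update_podcidr.go - editorial sketch of the CRI UpdateRuntimeConfig call that the
    // kubelet log above describes ("Updating runtime config through cri with podcidr").
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumption: containerd's CRI endpoint at the default socket path.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	// Same payload as the kubelet's update: the node's assigned pod CIDR.
    	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
    		RuntimeConfig: &runtimeapi.RuntimeConfig{
    			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("runtime informed of pod CIDR 192.168.0.0/24")
    }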
Feb 13 19:59:05.811107 kubelet[2465]: I0213 19:59:05.811067 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ec1107f-a27c-41f1-8b25-94b8626fcac2-xtables-lock\") pod \"kube-proxy-rfphc\" (UID: \"9ec1107f-a27c-41f1-8b25-94b8626fcac2\") " pod="kube-system/kube-proxy-rfphc" Feb 13 19:59:05.811107 kubelet[2465]: I0213 19:59:05.811107 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ec1107f-a27c-41f1-8b25-94b8626fcac2-lib-modules\") pod \"kube-proxy-rfphc\" (UID: \"9ec1107f-a27c-41f1-8b25-94b8626fcac2\") " pod="kube-system/kube-proxy-rfphc" Feb 13 19:59:05.811251 kubelet[2465]: I0213 19:59:05.811136 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ec1107f-a27c-41f1-8b25-94b8626fcac2-kube-proxy\") pod \"kube-proxy-rfphc\" (UID: \"9ec1107f-a27c-41f1-8b25-94b8626fcac2\") " pod="kube-system/kube-proxy-rfphc" Feb 13 19:59:05.811251 kubelet[2465]: I0213 19:59:05.811158 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24brs\" (UniqueName: \"kubernetes.io/projected/9ec1107f-a27c-41f1-8b25-94b8626fcac2-kube-api-access-24brs\") pod \"kube-proxy-rfphc\" (UID: \"9ec1107f-a27c-41f1-8b25-94b8626fcac2\") " pod="kube-system/kube-proxy-rfphc" Feb 13 19:59:05.811675 systemd[1]: Created slice kubepods-besteffort-pod9ec1107f_a27c_41f1_8b25_94b8626fcac2.slice - libcontainer container kubepods-besteffort-pod9ec1107f_a27c_41f1_8b25_94b8626fcac2.slice. Feb 13 19:59:06.126489 kubelet[2465]: E0213 19:59:06.125888 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:06.126901 containerd[1437]: time="2025-02-13T19:59:06.126573854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfphc,Uid:9ec1107f-a27c-41f1-8b25-94b8626fcac2,Namespace:kube-system,Attempt:0,}" Feb 13 19:59:06.149262 containerd[1437]: time="2025-02-13T19:59:06.149159808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:06.149262 containerd[1437]: time="2025-02-13T19:59:06.149232915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:06.149262 containerd[1437]: time="2025-02-13T19:59:06.149312184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:06.150664 containerd[1437]: time="2025-02-13T19:59:06.150156772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:06.175747 systemd[1]: Started cri-containerd-e5b1e23bedeb6834ff646c20db0f7bdb33be4ea40090de80081ca5ff87fce7dc.scope - libcontainer container e5b1e23bedeb6834ff646c20db0f7bdb33be4ea40090de80081ca5ff87fce7dc. 
Feb 13 19:59:06.204723 containerd[1437]: time="2025-02-13T19:59:06.204602182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfphc,Uid:9ec1107f-a27c-41f1-8b25-94b8626fcac2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5b1e23bedeb6834ff646c20db0f7bdb33be4ea40090de80081ca5ff87fce7dc\"" Feb 13 19:59:06.207658 kubelet[2465]: E0213 19:59:06.207616 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:06.210664 containerd[1437]: time="2025-02-13T19:59:06.210470761Z" level=info msg="CreateContainer within sandbox \"e5b1e23bedeb6834ff646c20db0f7bdb33be4ea40090de80081ca5ff87fce7dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:59:06.236486 systemd[1]: Created slice kubepods-besteffort-podd9a804d5_315c_425a_96c3_ff34bd5b2343.slice - libcontainer container kubepods-besteffort-podd9a804d5_315c_425a_96c3_ff34bd5b2343.slice. Feb 13 19:59:06.244973 containerd[1437]: time="2025-02-13T19:59:06.244920961Z" level=info msg="CreateContainer within sandbox \"e5b1e23bedeb6834ff646c20db0f7bdb33be4ea40090de80081ca5ff87fce7dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd1973385adc5d99f245a94bfb8177c4b3bc078797f3aa04ec8bd000205fdaaa\"" Feb 13 19:59:06.246065 containerd[1437]: time="2025-02-13T19:59:06.246036928Z" level=info msg="StartContainer for \"fd1973385adc5d99f245a94bfb8177c4b3bc078797f3aa04ec8bd000205fdaaa\"" Feb 13 19:59:06.268721 systemd[1]: Started cri-containerd-fd1973385adc5d99f245a94bfb8177c4b3bc078797f3aa04ec8bd000205fdaaa.scope - libcontainer container fd1973385adc5d99f245a94bfb8177c4b3bc078797f3aa04ec8bd000205fdaaa. Feb 13 19:59:06.295266 containerd[1437]: time="2025-02-13T19:59:06.295217218Z" level=info msg="StartContainer for \"fd1973385adc5d99f245a94bfb8177c4b3bc078797f3aa04ec8bd000205fdaaa\" returns successfully" Feb 13 19:59:06.315095 kubelet[2465]: I0213 19:59:06.315039 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9a804d5-315c-425a-96c3-ff34bd5b2343-var-lib-calico\") pod \"tigera-operator-76c4976dd7-6tk25\" (UID: \"d9a804d5-315c-425a-96c3-ff34bd5b2343\") " pod="tigera-operator/tigera-operator-76c4976dd7-6tk25" Feb 13 19:59:06.315095 kubelet[2465]: I0213 19:59:06.315086 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9h8\" (UniqueName: \"kubernetes.io/projected/d9a804d5-315c-425a-96c3-ff34bd5b2343-kube-api-access-9v9h8\") pod \"tigera-operator-76c4976dd7-6tk25\" (UID: \"d9a804d5-315c-425a-96c3-ff34bd5b2343\") " pod="tigera-operator/tigera-operator-76c4976dd7-6tk25" Feb 13 19:59:06.375037 kubelet[2465]: E0213 19:59:06.374995 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:06.542190 containerd[1437]: time="2025-02-13T19:59:06.542071177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-6tk25,Uid:d9a804d5-315c-425a-96c3-ff34bd5b2343,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:59:06.563081 containerd[1437]: time="2025-02-13T19:59:06.562950990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:06.563081 containerd[1437]: time="2025-02-13T19:59:06.563012812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:06.563081 containerd[1437]: time="2025-02-13T19:59:06.563035500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:06.563345 containerd[1437]: time="2025-02-13T19:59:06.563129455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:06.584779 systemd[1]: Started cri-containerd-0e8d0ebd14e0c50451bdb191e2ac0cfd5266a9a9268cbde8e8f208d185b86182.scope - libcontainer container 0e8d0ebd14e0c50451bdb191e2ac0cfd5266a9a9268cbde8e8f208d185b86182. Feb 13 19:59:06.613846 containerd[1437]: time="2025-02-13T19:59:06.613804890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-6tk25,Uid:d9a804d5-315c-425a-96c3-ff34bd5b2343,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0e8d0ebd14e0c50451bdb191e2ac0cfd5266a9a9268cbde8e8f208d185b86182\"" Feb 13 19:59:06.615825 containerd[1437]: time="2025-02-13T19:59:06.615792295Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:59:06.926492 kubelet[2465]: E0213 19:59:06.925670 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:07.927522 kubelet[2465]: E0213 19:59:07.927494 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:09.610545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954039213.mount: Deactivated successfully. 
Feb 13 19:59:09.853562 containerd[1437]: time="2025-02-13T19:59:09.853511756Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:09.854561 containerd[1437]: time="2025-02-13T19:59:09.854341014Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 19:59:09.855771 containerd[1437]: time="2025-02-13T19:59:09.855727806Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:09.857947 containerd[1437]: time="2025-02-13T19:59:09.857907365Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:09.859135 containerd[1437]: time="2025-02-13T19:59:09.859099097Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.243269629s" Feb 13 19:59:09.859224 containerd[1437]: time="2025-02-13T19:59:09.859208411Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 19:59:09.866444 containerd[1437]: time="2025-02-13T19:59:09.866343834Z" level=info msg="CreateContainer within sandbox \"0e8d0ebd14e0c50451bdb191e2ac0cfd5266a9a9268cbde8e8f208d185b86182\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:59:09.876166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708024952.mount: Deactivated successfully. Feb 13 19:59:09.877901 containerd[1437]: time="2025-02-13T19:59:09.877804604Z" level=info msg="CreateContainer within sandbox \"0e8d0ebd14e0c50451bdb191e2ac0cfd5266a9a9268cbde8e8f208d185b86182\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"96f0f0b6b45aa7fdb0cc4da699cd4fd3d4779656c41c75156ce74ee2c1742d2d\"" Feb 13 19:59:09.878423 containerd[1437]: time="2025-02-13T19:59:09.878273590Z" level=info msg="StartContainer for \"96f0f0b6b45aa7fdb0cc4da699cd4fd3d4779656c41c75156ce74ee2c1742d2d\"" Feb 13 19:59:09.901786 systemd[1]: Started cri-containerd-96f0f0b6b45aa7fdb0cc4da699cd4fd3d4779656c41c75156ce74ee2c1742d2d.scope - libcontainer container 96f0f0b6b45aa7fdb0cc4da699cd4fd3d4779656c41c75156ce74ee2c1742d2d. 
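The "Pulled image" entry above reports quay.io/tigera/operator:v1.36.2 with repo digest sha256:fc9ea45f... and size 19120155 after a ~3.24 s pull. A brief containerd-client sketch that re-reads that image record from the store; the socket path is assumed as before, and the size returned is the total of the image's content blobs as containerd records it, which may not match the logged figure byte-for-byte.

    // inspect_operator_image.go - editorial sketch; re-reads the image record behind the
    // "Pulled image quay.io/tigera/operator:v1.36.2" entry above.
    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock") // assumed socket path
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	img, err := client.GetImage(ctx, "quay.io/tigera/operator:v1.36.2")
    	if err != nil {
    		panic(err)
    	}
    	size, err := img.Size(ctx) // total size of the image's content blobs
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("digest:", img.Target().Digest, "size:", size)
    }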
Feb 13 19:59:09.936976 containerd[1437]: time="2025-02-13T19:59:09.936884610Z" level=info msg="StartContainer for \"96f0f0b6b45aa7fdb0cc4da699cd4fd3d4779656c41c75156ce74ee2c1742d2d\" returns successfully" Feb 13 19:59:09.948356 kubelet[2465]: I0213 19:59:09.948212 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rfphc" podStartSLOduration=4.948077857 podStartE2EDuration="4.948077857s" podCreationTimestamp="2025-02-13 19:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:59:06.934382047 +0000 UTC m=+6.115145878" watchObservedRunningTime="2025-02-13 19:59:09.948077857 +0000 UTC m=+9.128841688" Feb 13 19:59:09.948895 kubelet[2465]: I0213 19:59:09.948847 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-6tk25" podStartSLOduration=0.699507977 podStartE2EDuration="3.948836653s" podCreationTimestamp="2025-02-13 19:59:06 +0000 UTC" firstStartedPulling="2025-02-13 19:59:06.61506627 +0000 UTC m=+5.795830061" lastFinishedPulling="2025-02-13 19:59:09.864394906 +0000 UTC m=+9.045158737" observedRunningTime="2025-02-13 19:59:09.947463145 +0000 UTC m=+9.128226976" watchObservedRunningTime="2025-02-13 19:59:09.948836653 +0000 UTC m=+9.129600524" Feb 13 19:59:10.402286 kubelet[2465]: E0213 19:59:10.402254 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:10.943053 kubelet[2465]: E0213 19:59:10.943017 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:11.085820 kubelet[2465]: E0213 19:59:11.085780 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:11.107489 update_engine[1424]: I20250213 19:59:11.107430 1424 update_attempter.cc:509] Updating boot flags... Feb 13 19:59:11.129622 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2856) Feb 13 19:59:11.169607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2856) Feb 13 19:59:11.192664 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2856) Feb 13 19:59:11.944956 kubelet[2465]: E0213 19:59:11.944879 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:13.288146 systemd[1]: Created slice kubepods-besteffort-pod5a79227a_66cb_4f8d_9ffe_99ad9a717416.slice - libcontainer container kubepods-besteffort-pod5a79227a_66cb_4f8d_9ffe_99ad9a717416.slice. Feb 13 19:59:13.354857 systemd[1]: Created slice kubepods-besteffort-pod16f1e8e0_3980_4ce6_9d67_b5d2fe600a82.slice - libcontainer container kubepods-besteffort-pod16f1e8e0_3980_4ce6_9d67_b5d2fe600a82.slice. 
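For tigera-operator-76c4976dd7-6tk25, pod_startup_latency_tracker above reports podStartE2EDuration=3.948836653s but podStartSLOduration=0.699507977s; the gap matches the image-pull window (lastFinishedPulling minus firstStartedPulling) to within a few tens of nanoseconds, i.e. pull time is excluded from the SLO figure. A small Go check of that arithmetic against the logged timestamps:

    // slo_arithmetic.go - editorial sketch checking the tigera-operator startup numbers
    // reported by pod_startup_latency_tracker above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    	// Timestamps copied from the log entry above.
    	firstStartedPulling, _ := time.Parse(layout, "2025-02-13 19:59:06.61506627 +0000 UTC")
    	lastFinishedPulling, _ := time.Parse(layout, "2025-02-13 19:59:09.864394906 +0000 UTC")

    	pullWindow := lastFinishedPulling.Sub(firstStartedPulling) // ~3.249328636s

    	e2e := 3948836653 * time.Nanosecond // podStartE2EDuration
    	slo := 699507977 * time.Nanosecond  // podStartSLOduration

    	fmt.Println("pull window:", pullWindow)
    	fmt.Println("e2e - slo:  ", e2e-slo) // ~3.249328676s, agreeing with the pull window to within ~40ns
    }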
Feb 13 19:59:13.361516 kubelet[2465]: I0213 19:59:13.361007 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbr6\" (UniqueName: \"kubernetes.io/projected/5a79227a-66cb-4f8d-9ffe-99ad9a717416-kube-api-access-ftbr6\") pod \"calico-typha-56469796cb-nkd8s\" (UID: \"5a79227a-66cb-4f8d-9ffe-99ad9a717416\") " pod="calico-system/calico-typha-56469796cb-nkd8s" Feb 13 19:59:13.361516 kubelet[2465]: I0213 19:59:13.361046 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-node-certs\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.361516 kubelet[2465]: I0213 19:59:13.361067 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-var-lib-calico\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.361516 kubelet[2465]: I0213 19:59:13.361084 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-cni-net-dir\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.361516 kubelet[2465]: I0213 19:59:13.361099 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-cni-log-dir\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.364276 kubelet[2465]: I0213 19:59:13.361116 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcmn5\" (UniqueName: \"kubernetes.io/projected/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-kube-api-access-xcmn5\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.364276 kubelet[2465]: I0213 19:59:13.361166 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-lib-modules\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.364276 kubelet[2465]: I0213 19:59:13.361204 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5a79227a-66cb-4f8d-9ffe-99ad9a717416-typha-certs\") pod \"calico-typha-56469796cb-nkd8s\" (UID: \"5a79227a-66cb-4f8d-9ffe-99ad9a717416\") " pod="calico-system/calico-typha-56469796cb-nkd8s" Feb 13 19:59:13.364276 kubelet[2465]: I0213 19:59:13.361251 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-policysync\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.364276 kubelet[2465]: 
I0213 19:59:13.361300 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-cni-bin-dir\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.364446 kubelet[2465]: I0213 19:59:13.361317 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-flexvol-driver-host\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.364446 kubelet[2465]: I0213 19:59:13.361400 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-tigera-ca-bundle\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.364446 kubelet[2465]: I0213 19:59:13.361429 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-var-run-calico\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.364446 kubelet[2465]: I0213 19:59:13.361452 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16f1e8e0-3980-4ce6-9d67-b5d2fe600a82-xtables-lock\") pod \"calico-node-6qx8c\" (UID: \"16f1e8e0-3980-4ce6-9d67-b5d2fe600a82\") " pod="calico-system/calico-node-6qx8c" Feb 13 19:59:13.364446 kubelet[2465]: I0213 19:59:13.362353 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a79227a-66cb-4f8d-9ffe-99ad9a717416-tigera-ca-bundle\") pod \"calico-typha-56469796cb-nkd8s\" (UID: \"5a79227a-66cb-4f8d-9ffe-99ad9a717416\") " pod="calico-system/calico-typha-56469796cb-nkd8s" Feb 13 19:59:13.450198 kubelet[2465]: E0213 19:59:13.450132 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7254x" podUID="968a0298-0e67-414e-9ce9-912c9a8051e6" Feb 13 19:59:13.463602 kubelet[2465]: I0213 19:59:13.463548 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/968a0298-0e67-414e-9ce9-912c9a8051e6-kubelet-dir\") pod \"csi-node-driver-7254x\" (UID: \"968a0298-0e67-414e-9ce9-912c9a8051e6\") " pod="calico-system/csi-node-driver-7254x" Feb 13 19:59:13.463602 kubelet[2465]: I0213 19:59:13.463594 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/968a0298-0e67-414e-9ce9-912c9a8051e6-varrun\") pod \"csi-node-driver-7254x\" (UID: \"968a0298-0e67-414e-9ce9-912c9a8051e6\") " pod="calico-system/csi-node-driver-7254x" Feb 13 19:59:13.463752 kubelet[2465]: I0213 19:59:13.463676 2465 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/968a0298-0e67-414e-9ce9-912c9a8051e6-socket-dir\") pod \"csi-node-driver-7254x\" (UID: \"968a0298-0e67-414e-9ce9-912c9a8051e6\") " pod="calico-system/csi-node-driver-7254x" Feb 13 19:59:13.463752 kubelet[2465]: I0213 19:59:13.463726 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/968a0298-0e67-414e-9ce9-912c9a8051e6-registration-dir\") pod \"csi-node-driver-7254x\" (UID: \"968a0298-0e67-414e-9ce9-912c9a8051e6\") " pod="calico-system/csi-node-driver-7254x" Feb 13 19:59:13.463752 kubelet[2465]: I0213 19:59:13.463750 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dmh5\" (UniqueName: \"kubernetes.io/projected/968a0298-0e67-414e-9ce9-912c9a8051e6-kube-api-access-6dmh5\") pod \"csi-node-driver-7254x\" (UID: \"968a0298-0e67-414e-9ce9-912c9a8051e6\") " pod="calico-system/csi-node-driver-7254x" Feb 13 19:59:13.473222 kubelet[2465]: E0213 19:59:13.472340 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.473222 kubelet[2465]: W0213 19:59:13.472373 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.473222 kubelet[2465]: E0213 19:59:13.472418 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.473222 kubelet[2465]: E0213 19:59:13.472699 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.473222 kubelet[2465]: W0213 19:59:13.472711 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.473222 kubelet[2465]: E0213 19:59:13.472722 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.473222 kubelet[2465]: E0213 19:59:13.473010 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.473222 kubelet[2465]: W0213 19:59:13.473027 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.473222 kubelet[2465]: E0213 19:59:13.473105 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
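The VerifyControllerAttachedVolume entries above are the kubelet's volume reconciler registering each volume declared by the calico-typha, calico-node and csi-node-driver pods (projected service-account tokens, the node-certs and typha-certs secrets, the tigera-ca-bundle ConfigMap, and a set of hostPath mounts) before mounting them. The following rough Go sketch shows how a few of the calico-node volumes would be declared in a pod spec using the real k8s.io/api/core/v1 types; only the volume names and plugin types are confirmed by the log, while the hostPath locations are assumptions inferred from the names.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Illustrative subset of the calico-node volumes seen above.
        volumes := []corev1.Volume{
            {Name: "var-lib-calico", VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/var/lib/calico"}}}, // assumed path
            {Name: "lib-modules", VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"}}}, // assumed path
            {Name: "node-certs", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "node-certs"}}},
            {Name: "tigera-ca-bundle", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"}}}},
        }
        for _, v := range volumes {
            fmt.Println(v.Name)
        }
    }
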
Error: unexpected end of JSON input" Feb 13 19:59:13.475810 kubelet[2465]: E0213 19:59:13.474351 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.475810 kubelet[2465]: W0213 19:59:13.474380 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.475810 kubelet[2465]: E0213 19:59:13.474469 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.475810 kubelet[2465]: E0213 19:59:13.474668 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.475810 kubelet[2465]: W0213 19:59:13.474681 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.475810 kubelet[2465]: E0213 19:59:13.474805 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.476181 kubelet[2465]: E0213 19:59:13.476154 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.476181 kubelet[2465]: W0213 19:59:13.476170 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.476254 kubelet[2465]: E0213 19:59:13.476211 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.476521 kubelet[2465]: E0213 19:59:13.476480 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.476521 kubelet[2465]: W0213 19:59:13.476503 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.477217 kubelet[2465]: E0213 19:59:13.477190 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.477440 kubelet[2465]: E0213 19:59:13.477414 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.477440 kubelet[2465]: W0213 19:59:13.477430 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.477524 kubelet[2465]: E0213 19:59:13.477457 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.478212 kubelet[2465]: E0213 19:59:13.478168 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.478212 kubelet[2465]: W0213 19:59:13.478184 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.478794 kubelet[2465]: E0213 19:59:13.478753 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.479045 kubelet[2465]: E0213 19:59:13.479016 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.479045 kubelet[2465]: W0213 19:59:13.479033 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.479377 kubelet[2465]: E0213 19:59:13.479301 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.479962 kubelet[2465]: E0213 19:59:13.479931 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.479962 kubelet[2465]: W0213 19:59:13.479951 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.480054 kubelet[2465]: E0213 19:59:13.479993 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.480734 kubelet[2465]: E0213 19:59:13.480709 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.480734 kubelet[2465]: W0213 19:59:13.480727 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.481042 kubelet[2465]: E0213 19:59:13.480978 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.482061 kubelet[2465]: E0213 19:59:13.482019 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.482061 kubelet[2465]: W0213 19:59:13.482043 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.482169 kubelet[2465]: E0213 19:59:13.482095 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.482823 kubelet[2465]: E0213 19:59:13.482749 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.482823 kubelet[2465]: W0213 19:59:13.482772 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.482940 kubelet[2465]: E0213 19:59:13.482915 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.483241 kubelet[2465]: E0213 19:59:13.483204 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.483241 kubelet[2465]: W0213 19:59:13.483226 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.483323 kubelet[2465]: E0213 19:59:13.483267 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.484032 kubelet[2465]: E0213 19:59:13.483995 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.484032 kubelet[2465]: W0213 19:59:13.484015 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.484125 kubelet[2465]: E0213 19:59:13.484067 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.485181 kubelet[2465]: E0213 19:59:13.485009 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.485181 kubelet[2465]: W0213 19:59:13.485028 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.485278 kubelet[2465]: E0213 19:59:13.485199 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.485278 kubelet[2465]: E0213 19:59:13.485253 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.485278 kubelet[2465]: W0213 19:59:13.485264 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.485515 kubelet[2465]: E0213 19:59:13.485403 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.486564 kubelet[2465]: E0213 19:59:13.485760 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.486564 kubelet[2465]: W0213 19:59:13.485777 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.486564 kubelet[2465]: E0213 19:59:13.485789 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.486733 kubelet[2465]: E0213 19:59:13.486623 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.486733 kubelet[2465]: W0213 19:59:13.486640 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.486733 kubelet[2465]: E0213 19:59:13.486657 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.486870 kubelet[2465]: E0213 19:59:13.486848 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.486870 kubelet[2465]: W0213 19:59:13.486860 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.486870 kubelet[2465]: E0213 19:59:13.486870 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.487051 kubelet[2465]: E0213 19:59:13.487033 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.487051 kubelet[2465]: W0213 19:59:13.487046 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.487110 kubelet[2465]: E0213 19:59:13.487073 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.487211 kubelet[2465]: E0213 19:59:13.487190 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.487211 kubelet[2465]: W0213 19:59:13.487202 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.487400 kubelet[2465]: E0213 19:59:13.487228 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.487400 kubelet[2465]: E0213 19:59:13.487363 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.487400 kubelet[2465]: W0213 19:59:13.487371 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.487464 kubelet[2465]: E0213 19:59:13.487418 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.488621 kubelet[2465]: E0213 19:59:13.487517 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.488621 kubelet[2465]: W0213 19:59:13.487527 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.488621 kubelet[2465]: E0213 19:59:13.487578 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.488621 kubelet[2465]: E0213 19:59:13.487840 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.488621 kubelet[2465]: W0213 19:59:13.487856 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.488621 kubelet[2465]: E0213 19:59:13.487918 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.488621 kubelet[2465]: E0213 19:59:13.488450 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.488621 kubelet[2465]: W0213 19:59:13.488465 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.488621 kubelet[2465]: E0213 19:59:13.488540 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.489459 kubelet[2465]: E0213 19:59:13.489432 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.489459 kubelet[2465]: W0213 19:59:13.489450 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.489556 kubelet[2465]: E0213 19:59:13.489516 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.489713 kubelet[2465]: E0213 19:59:13.489688 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.489713 kubelet[2465]: W0213 19:59:13.489704 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.489801 kubelet[2465]: E0213 19:59:13.489782 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.489870 kubelet[2465]: E0213 19:59:13.489850 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.489870 kubelet[2465]: W0213 19:59:13.489861 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.489950 kubelet[2465]: E0213 19:59:13.489930 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.490152 kubelet[2465]: E0213 19:59:13.489991 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.490152 kubelet[2465]: W0213 19:59:13.489999 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.490152 kubelet[2465]: E0213 19:59:13.490060 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.490152 kubelet[2465]: E0213 19:59:13.490121 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.490152 kubelet[2465]: W0213 19:59:13.490127 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.490279 kubelet[2465]: E0213 19:59:13.490167 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.490279 kubelet[2465]: E0213 19:59:13.490251 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.490279 kubelet[2465]: W0213 19:59:13.490259 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.490345 kubelet[2465]: E0213 19:59:13.490290 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.490596 kubelet[2465]: E0213 19:59:13.490398 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.490596 kubelet[2465]: W0213 19:59:13.490408 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.490596 kubelet[2465]: E0213 19:59:13.490437 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.490596 kubelet[2465]: E0213 19:59:13.490551 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.490596 kubelet[2465]: W0213 19:59:13.490561 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.490596 kubelet[2465]: E0213 19:59:13.490604 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.490985 kubelet[2465]: E0213 19:59:13.490723 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.490985 kubelet[2465]: W0213 19:59:13.490732 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.490985 kubelet[2465]: E0213 19:59:13.490762 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.490985 kubelet[2465]: E0213 19:59:13.490860 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.490985 kubelet[2465]: W0213 19:59:13.490868 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.490985 kubelet[2465]: E0213 19:59:13.490892 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.491109 kubelet[2465]: E0213 19:59:13.490994 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.491109 kubelet[2465]: W0213 19:59:13.491001 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.491109 kubelet[2465]: E0213 19:59:13.491072 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.491172 kubelet[2465]: E0213 19:59:13.491159 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.491195 kubelet[2465]: W0213 19:59:13.491172 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.491219 kubelet[2465]: E0213 19:59:13.491212 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.492390 kubelet[2465]: E0213 19:59:13.491412 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.492390 kubelet[2465]: W0213 19:59:13.491429 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.492390 kubelet[2465]: E0213 19:59:13.491464 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.492390 kubelet[2465]: E0213 19:59:13.491644 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.492390 kubelet[2465]: W0213 19:59:13.491654 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.492390 kubelet[2465]: E0213 19:59:13.491756 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.492390 kubelet[2465]: E0213 19:59:13.491936 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.492390 kubelet[2465]: W0213 19:59:13.491948 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.492390 kubelet[2465]: E0213 19:59:13.492061 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.492390 kubelet[2465]: E0213 19:59:13.492231 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.492982 kubelet[2465]: W0213 19:59:13.492243 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.492982 kubelet[2465]: E0213 19:59:13.492274 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.492982 kubelet[2465]: E0213 19:59:13.492402 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.492982 kubelet[2465]: W0213 19:59:13.492411 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.492982 kubelet[2465]: E0213 19:59:13.492440 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.492982 kubelet[2465]: E0213 19:59:13.492598 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.492982 kubelet[2465]: W0213 19:59:13.492608 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.492982 kubelet[2465]: E0213 19:59:13.492642 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.492982 kubelet[2465]: E0213 19:59:13.492788 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.492982 kubelet[2465]: W0213 19:59:13.492798 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.493235 kubelet[2465]: E0213 19:59:13.492825 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.494601 kubelet[2465]: E0213 19:59:13.494029 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.494601 kubelet[2465]: W0213 19:59:13.494058 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.494601 kubelet[2465]: E0213 19:59:13.494123 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.494601 kubelet[2465]: E0213 19:59:13.494286 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.494601 kubelet[2465]: W0213 19:59:13.494297 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.494601 kubelet[2465]: E0213 19:59:13.494329 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.494601 kubelet[2465]: E0213 19:59:13.494542 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.494601 kubelet[2465]: W0213 19:59:13.494554 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.494601 kubelet[2465]: E0213 19:59:13.494565 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.498437 kubelet[2465]: E0213 19:59:13.498150 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.498437 kubelet[2465]: W0213 19:59:13.498167 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.498437 kubelet[2465]: E0213 19:59:13.498181 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.506146 kubelet[2465]: E0213 19:59:13.506120 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.506146 kubelet[2465]: W0213 19:59:13.506140 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.506268 kubelet[2465]: E0213 19:59:13.506155 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.516146 kubelet[2465]: E0213 19:59:13.516114 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.516146 kubelet[2465]: W0213 19:59:13.516132 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.516146 kubelet[2465]: E0213 19:59:13.516146 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.564169 kubelet[2465]: E0213 19:59:13.564137 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.564169 kubelet[2465]: W0213 19:59:13.564157 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.564169 kubelet[2465]: E0213 19:59:13.564172 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.564416 kubelet[2465]: E0213 19:59:13.564385 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.564416 kubelet[2465]: W0213 19:59:13.564399 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.564416 kubelet[2465]: E0213 19:59:13.564417 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.564669 kubelet[2465]: E0213 19:59:13.564645 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.564669 kubelet[2465]: W0213 19:59:13.564660 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.564717 kubelet[2465]: E0213 19:59:13.564674 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.564866 kubelet[2465]: E0213 19:59:13.564845 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.564866 kubelet[2465]: W0213 19:59:13.564858 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.564922 kubelet[2465]: E0213 19:59:13.564870 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.565091 kubelet[2465]: E0213 19:59:13.565068 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.565091 kubelet[2465]: W0213 19:59:13.565082 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.565144 kubelet[2465]: E0213 19:59:13.565095 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.565275 kubelet[2465]: E0213 19:59:13.565261 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.565302 kubelet[2465]: W0213 19:59:13.565275 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.565326 kubelet[2465]: E0213 19:59:13.565308 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.565515 kubelet[2465]: E0213 19:59:13.565496 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.565539 kubelet[2465]: W0213 19:59:13.565516 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.565563 kubelet[2465]: E0213 19:59:13.565541 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.565748 kubelet[2465]: E0213 19:59:13.565735 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.565780 kubelet[2465]: W0213 19:59:13.565748 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.565802 kubelet[2465]: E0213 19:59:13.565773 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.565918 kubelet[2465]: E0213 19:59:13.565906 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.565939 kubelet[2465]: W0213 19:59:13.565917 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.565963 kubelet[2465]: E0213 19:59:13.565938 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.566072 kubelet[2465]: E0213 19:59:13.566062 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.566100 kubelet[2465]: W0213 19:59:13.566074 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.566100 kubelet[2465]: E0213 19:59:13.566095 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.566218 kubelet[2465]: E0213 19:59:13.566208 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.566239 kubelet[2465]: W0213 19:59:13.566218 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.566259 kubelet[2465]: E0213 19:59:13.566252 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.566372 kubelet[2465]: E0213 19:59:13.566361 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.566398 kubelet[2465]: W0213 19:59:13.566372 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.566450 kubelet[2465]: E0213 19:59:13.566439 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.566578 kubelet[2465]: E0213 19:59:13.566565 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.566613 kubelet[2465]: W0213 19:59:13.566578 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.566613 kubelet[2465]: E0213 19:59:13.566601 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.566766 kubelet[2465]: E0213 19:59:13.566754 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.566794 kubelet[2465]: W0213 19:59:13.566765 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.566794 kubelet[2465]: E0213 19:59:13.566778 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.566920 kubelet[2465]: E0213 19:59:13.566910 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.566942 kubelet[2465]: W0213 19:59:13.566920 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.566942 kubelet[2465]: E0213 19:59:13.566929 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.567132 kubelet[2465]: E0213 19:59:13.567117 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.567132 kubelet[2465]: W0213 19:59:13.567129 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.567232 kubelet[2465]: E0213 19:59:13.567142 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.567330 kubelet[2465]: E0213 19:59:13.567316 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.567330 kubelet[2465]: W0213 19:59:13.567328 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.567373 kubelet[2465]: E0213 19:59:13.567346 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.567543 kubelet[2465]: E0213 19:59:13.567530 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.567543 kubelet[2465]: W0213 19:59:13.567541 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.567605 kubelet[2465]: E0213 19:59:13.567562 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.567731 kubelet[2465]: E0213 19:59:13.567718 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.567753 kubelet[2465]: W0213 19:59:13.567730 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.567773 kubelet[2465]: E0213 19:59:13.567751 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.567895 kubelet[2465]: E0213 19:59:13.567883 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.567922 kubelet[2465]: W0213 19:59:13.567895 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.567922 kubelet[2465]: E0213 19:59:13.567913 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.568056 kubelet[2465]: E0213 19:59:13.568045 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.568077 kubelet[2465]: W0213 19:59:13.568056 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.568077 kubelet[2465]: E0213 19:59:13.568068 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:13.568217 kubelet[2465]: E0213 19:59:13.568206 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.568243 kubelet[2465]: W0213 19:59:13.568217 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.568243 kubelet[2465]: E0213 19:59:13.568235 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.568388 kubelet[2465]: E0213 19:59:13.568377 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.568409 kubelet[2465]: W0213 19:59:13.568388 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.568409 kubelet[2465]: E0213 19:59:13.568399 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.568691 kubelet[2465]: E0213 19:59:13.568672 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.568691 kubelet[2465]: W0213 19:59:13.568689 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.568756 kubelet[2465]: E0213 19:59:13.568701 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.569525 kubelet[2465]: E0213 19:59:13.568998 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.569525 kubelet[2465]: W0213 19:59:13.569032 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.569525 kubelet[2465]: E0213 19:59:13.569045 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:13.580566 kubelet[2465]: E0213 19:59:13.580536 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:13.580566 kubelet[2465]: W0213 19:59:13.580558 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:13.580566 kubelet[2465]: E0213 19:59:13.580572 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
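The long run of identical FlexVolume errors above has a single cause: the kubelet keeps probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with an init call, that binary is not installed on the node, the call therefore produces no output, and decoding empty output as JSON fails with "unexpected end of JSON input". A minimal Go sketch of that failure mode follows; driverStatus is a trimmed-down stand-in for the reply a flexvolume driver is expected to print (roughly {"status":"Success","capabilities":{"attach":false}} for init), not the kubelet's own type.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus illustrates the shape of a flexvolume driver reply;
    // only the field needed for this sketch is shown.
    type driverStatus struct {
        Status string `json:"status"`
    }

    func main() {
        output := "" // what the kubelet gets back when the uds binary is missing
        var st driverStatus
        if err := json.Unmarshal([]byte(output), &st); err != nil {
            fmt.Println(err) // prints: unexpected end of JSON input
        }
    }

The messages are noisy but harmless here: the plugin directory nodeagent~uds is simply skipped and pod startup continues.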
Error: unexpected end of JSON input" Feb 13 19:59:13.592633 kubelet[2465]: E0213 19:59:13.592605 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:13.593019 containerd[1437]: time="2025-02-13T19:59:13.592984328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56469796cb-nkd8s,Uid:5a79227a-66cb-4f8d-9ffe-99ad9a717416,Namespace:calico-system,Attempt:0,}" Feb 13 19:59:13.613665 containerd[1437]: time="2025-02-13T19:59:13.613505762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:13.613813 containerd[1437]: time="2025-02-13T19:59:13.613580421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:13.613813 containerd[1437]: time="2025-02-13T19:59:13.613656120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:13.613813 containerd[1437]: time="2025-02-13T19:59:13.613752425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:13.633784 systemd[1]: Started cri-containerd-42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2.scope - libcontainer container 42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2. Feb 13 19:59:13.658741 kubelet[2465]: E0213 19:59:13.658709 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:13.659252 containerd[1437]: time="2025-02-13T19:59:13.659133679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qx8c,Uid:16f1e8e0-3980-4ce6-9d67-b5d2fe600a82,Namespace:calico-system,Attempt:0,}" Feb 13 19:59:13.660904 containerd[1437]: time="2025-02-13T19:59:13.660874003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56469796cb-nkd8s,Uid:5a79227a-66cb-4f8d-9ffe-99ad9a717416,Namespace:calico-system,Attempt:0,} returns sandbox id \"42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2\"" Feb 13 19:59:13.682387 kubelet[2465]: E0213 19:59:13.682362 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:13.692423 containerd[1437]: time="2025-02-13T19:59:13.692385680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:59:13.698677 containerd[1437]: time="2025-02-13T19:59:13.698112380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:13.698677 containerd[1437]: time="2025-02-13T19:59:13.698168274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:13.698677 containerd[1437]: time="2025-02-13T19:59:13.698182838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:13.698677 containerd[1437]: time="2025-02-13T19:59:13.698265139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:13.719749 systemd[1]: Started cri-containerd-47802bd9a0fbdaed122f7989d03cdde15c1a4c841d803c33c2180718e2cef277.scope - libcontainer container 47802bd9a0fbdaed122f7989d03cdde15c1a4c841d803c33c2180718e2cef277. Feb 13 19:59:13.749774 containerd[1437]: time="2025-02-13T19:59:13.749729384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qx8c,Uid:16f1e8e0-3980-4ce6-9d67-b5d2fe600a82,Namespace:calico-system,Attempt:0,} returns sandbox id \"47802bd9a0fbdaed122f7989d03cdde15c1a4c841d803c33c2180718e2cef277\"" Feb 13 19:59:13.750607 kubelet[2465]: E0213 19:59:13.750382 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:14.819055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287810417.mount: Deactivated successfully. Feb 13 19:59:14.903412 kubelet[2465]: E0213 19:59:14.903342 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7254x" podUID="968a0298-0e67-414e-9ce9-912c9a8051e6" Feb 13 19:59:15.231858 containerd[1437]: time="2025-02-13T19:59:15.231755810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:15.232744 containerd[1437]: time="2025-02-13T19:59:15.232548473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 19:59:15.236918 containerd[1437]: time="2025-02-13T19:59:15.236880078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.544452387s" Feb 13 19:59:15.237000 containerd[1437]: time="2025-02-13T19:59:15.236920807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 19:59:15.239443 containerd[1437]: time="2025-02-13T19:59:15.239376136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:59:15.250846 containerd[1437]: time="2025-02-13T19:59:15.250809747Z" level=info msg="CreateContainer within sandbox \"42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:59:15.257133 containerd[1437]: time="2025-02-13T19:59:15.257091323Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:15.257850 containerd[1437]: time="2025-02-13T19:59:15.257799607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:15.262760 containerd[1437]: time="2025-02-13T19:59:15.262710945Z" level=info msg="CreateContainer within sandbox 
\"42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\"" Feb 13 19:59:15.263138 containerd[1437]: time="2025-02-13T19:59:15.263112758Z" level=info msg="StartContainer for \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\"" Feb 13 19:59:15.287762 systemd[1]: Started cri-containerd-895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e.scope - libcontainer container 895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e. Feb 13 19:59:15.327692 containerd[1437]: time="2025-02-13T19:59:15.325820655Z" level=info msg="StartContainer for \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\" returns successfully" Feb 13 19:59:15.992634 kubelet[2465]: E0213 19:59:15.992605 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:16.020806 kubelet[2465]: I0213 19:59:16.020640 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56469796cb-nkd8s" podStartSLOduration=1.47054015 podStartE2EDuration="3.020622944s" podCreationTimestamp="2025-02-13 19:59:13 +0000 UTC" firstStartedPulling="2025-02-13 19:59:13.688058616 +0000 UTC m=+12.868822447" lastFinishedPulling="2025-02-13 19:59:15.23814141 +0000 UTC m=+14.418905241" observedRunningTime="2025-02-13 19:59:16.020497916 +0000 UTC m=+15.201261747" watchObservedRunningTime="2025-02-13 19:59:16.020622944 +0000 UTC m=+15.201386775" Feb 13 19:59:16.077666 kubelet[2465]: E0213 19:59:16.077625 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.077666 kubelet[2465]: W0213 19:59:16.077653 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.077666 kubelet[2465]: E0213 19:59:16.077674 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.077858 kubelet[2465]: E0213 19:59:16.077841 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.077858 kubelet[2465]: W0213 19:59:16.077850 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.077907 kubelet[2465]: E0213 19:59:16.077858 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:16.078028 kubelet[2465]: E0213 19:59:16.078002 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.078028 kubelet[2465]: W0213 19:59:16.078014 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.078028 kubelet[2465]: E0213 19:59:16.078022 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.078189 kubelet[2465]: E0213 19:59:16.078170 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.078189 kubelet[2465]: W0213 19:59:16.078183 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.078243 kubelet[2465]: E0213 19:59:16.078191 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.078361 kubelet[2465]: E0213 19:59:16.078342 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.078361 kubelet[2465]: W0213 19:59:16.078355 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.078417 kubelet[2465]: E0213 19:59:16.078362 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.078494 kubelet[2465]: E0213 19:59:16.078484 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.078518 kubelet[2465]: W0213 19:59:16.078499 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.078518 kubelet[2465]: E0213 19:59:16.078508 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.078649 kubelet[2465]: E0213 19:59:16.078638 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.078679 kubelet[2465]: W0213 19:59:16.078648 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.078679 kubelet[2465]: E0213 19:59:16.078656 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:16.078848 kubelet[2465]: E0213 19:59:16.078825 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.078848 kubelet[2465]: W0213 19:59:16.078840 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.078848 kubelet[2465]: E0213 19:59:16.078849 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.079014 kubelet[2465]: E0213 19:59:16.079001 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.079044 kubelet[2465]: W0213 19:59:16.079013 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.079044 kubelet[2465]: E0213 19:59:16.079021 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.079170 kubelet[2465]: E0213 19:59:16.079158 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.079192 kubelet[2465]: W0213 19:59:16.079168 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.079192 kubelet[2465]: E0213 19:59:16.079178 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.079313 kubelet[2465]: E0213 19:59:16.079302 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.079338 kubelet[2465]: W0213 19:59:16.079317 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.079338 kubelet[2465]: E0213 19:59:16.079324 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.079466 kubelet[2465]: E0213 19:59:16.079455 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.079487 kubelet[2465]: W0213 19:59:16.079470 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.079487 kubelet[2465]: E0213 19:59:16.079477 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:16.079633 kubelet[2465]: E0213 19:59:16.079622 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.079663 kubelet[2465]: W0213 19:59:16.079633 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.079663 kubelet[2465]: E0213 19:59:16.079641 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.079779 kubelet[2465]: E0213 19:59:16.079767 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.079804 kubelet[2465]: W0213 19:59:16.079781 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.079804 kubelet[2465]: E0213 19:59:16.079790 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.079931 kubelet[2465]: E0213 19:59:16.079921 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.079956 kubelet[2465]: W0213 19:59:16.079935 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.079956 kubelet[2465]: E0213 19:59:16.079943 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.087319 kubelet[2465]: E0213 19:59:16.087294 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.087319 kubelet[2465]: W0213 19:59:16.087314 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.087377 kubelet[2465]: E0213 19:59:16.087328 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.087559 kubelet[2465]: E0213 19:59:16.087547 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.087559 kubelet[2465]: W0213 19:59:16.087558 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.087638 kubelet[2465]: E0213 19:59:16.087572 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:16.087793 kubelet[2465]: E0213 19:59:16.087775 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.087793 kubelet[2465]: W0213 19:59:16.087792 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.087857 kubelet[2465]: E0213 19:59:16.087809 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.088019 kubelet[2465]: E0213 19:59:16.087998 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.088019 kubelet[2465]: W0213 19:59:16.088010 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.088116 kubelet[2465]: E0213 19:59:16.088022 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.088191 kubelet[2465]: E0213 19:59:16.088178 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.088191 kubelet[2465]: W0213 19:59:16.088190 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.088249 kubelet[2465]: E0213 19:59:16.088209 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.088384 kubelet[2465]: E0213 19:59:16.088372 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.088384 kubelet[2465]: W0213 19:59:16.088384 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.088444 kubelet[2465]: E0213 19:59:16.088398 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.088607 kubelet[2465]: E0213 19:59:16.088571 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.088645 kubelet[2465]: W0213 19:59:16.088607 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.088645 kubelet[2465]: E0213 19:59:16.088626 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:16.088836 kubelet[2465]: E0213 19:59:16.088820 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.088836 kubelet[2465]: W0213 19:59:16.088831 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.088916 kubelet[2465]: E0213 19:59:16.088860 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.088988 kubelet[2465]: E0213 19:59:16.088976 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.088988 kubelet[2465]: W0213 19:59:16.088986 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.089044 kubelet[2465]: E0213 19:59:16.089011 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.089193 kubelet[2465]: E0213 19:59:16.089177 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.089193 kubelet[2465]: W0213 19:59:16.089192 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.089262 kubelet[2465]: E0213 19:59:16.089206 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.089371 kubelet[2465]: E0213 19:59:16.089359 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.089371 kubelet[2465]: W0213 19:59:16.089371 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.089423 kubelet[2465]: E0213 19:59:16.089382 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.089524 kubelet[2465]: E0213 19:59:16.089509 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.089524 kubelet[2465]: W0213 19:59:16.089519 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.089524 kubelet[2465]: E0213 19:59:16.089530 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:16.089805 kubelet[2465]: E0213 19:59:16.089787 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.089961 kubelet[2465]: W0213 19:59:16.089857 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.089961 kubelet[2465]: E0213 19:59:16.089883 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.090114 kubelet[2465]: E0213 19:59:16.090093 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.090178 kubelet[2465]: W0213 19:59:16.090165 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.090241 kubelet[2465]: E0213 19:59:16.090229 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.090481 kubelet[2465]: E0213 19:59:16.090467 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.090610 kubelet[2465]: W0213 19:59:16.090534 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.090610 kubelet[2465]: E0213 19:59:16.090556 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.091024 kubelet[2465]: E0213 19:59:16.090762 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.091024 kubelet[2465]: W0213 19:59:16.090777 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.091024 kubelet[2465]: E0213 19:59:16.090788 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.091225 kubelet[2465]: E0213 19:59:16.091209 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.091281 kubelet[2465]: W0213 19:59:16.091269 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.091349 kubelet[2465]: E0213 19:59:16.091338 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:59:16.091572 kubelet[2465]: E0213 19:59:16.091553 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:59:16.091572 kubelet[2465]: W0213 19:59:16.091566 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:59:16.091703 kubelet[2465]: E0213 19:59:16.091575 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:59:16.358576 containerd[1437]: time="2025-02-13T19:59:16.358392202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:16.359454 containerd[1437]: time="2025-02-13T19:59:16.359229307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 19:59:16.360293 containerd[1437]: time="2025-02-13T19:59:16.360259535Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:16.362678 containerd[1437]: time="2025-02-13T19:59:16.362629779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:16.363330 containerd[1437]: time="2025-02-13T19:59:16.363290446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.123836691s" Feb 13 19:59:16.363379 containerd[1437]: time="2025-02-13T19:59:16.363328214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:59:16.365652 containerd[1437]: time="2025-02-13T19:59:16.365614160Z" level=info msg="CreateContainer within sandbox \"47802bd9a0fbdaed122f7989d03cdde15c1a4c841d803c33c2180718e2cef277\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:59:16.378354 containerd[1437]: time="2025-02-13T19:59:16.378273441Z" level=info msg="CreateContainer within sandbox \"47802bd9a0fbdaed122f7989d03cdde15c1a4c841d803c33c2180718e2cef277\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9af1c21a81d09eddc88442746ef977ff28c846f7cec4b2182e41e361ad4e0f14\"" Feb 13 19:59:16.378759 containerd[1437]: time="2025-02-13T19:59:16.378727461Z" level=info msg="StartContainer for \"9af1c21a81d09eddc88442746ef977ff28c846f7cec4b2182e41e361ad4e0f14\"" Feb 13 19:59:16.391345 kubelet[2465]: E0213 19:59:16.391313 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:16.420750 systemd[1]: Started 
cri-containerd-9af1c21a81d09eddc88442746ef977ff28c846f7cec4b2182e41e361ad4e0f14.scope - libcontainer container 9af1c21a81d09eddc88442746ef977ff28c846f7cec4b2182e41e361ad4e0f14. Feb 13 19:59:16.462881 systemd[1]: cri-containerd-9af1c21a81d09eddc88442746ef977ff28c846f7cec4b2182e41e361ad4e0f14.scope: Deactivated successfully. Feb 13 19:59:16.494822 containerd[1437]: time="2025-02-13T19:59:16.494720847Z" level=info msg="StartContainer for \"9af1c21a81d09eddc88442746ef977ff28c846f7cec4b2182e41e361ad4e0f14\" returns successfully" Feb 13 19:59:16.521009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9af1c21a81d09eddc88442746ef977ff28c846f7cec4b2182e41e361ad4e0f14-rootfs.mount: Deactivated successfully. Feb 13 19:59:16.523872 containerd[1437]: time="2025-02-13T19:59:16.523815445Z" level=info msg="shim disconnected" id=9af1c21a81d09eddc88442746ef977ff28c846f7cec4b2182e41e361ad4e0f14 namespace=k8s.io Feb 13 19:59:16.523872 containerd[1437]: time="2025-02-13T19:59:16.523869137Z" level=warning msg="cleaning up after shim disconnected" id=9af1c21a81d09eddc88442746ef977ff28c846f7cec4b2182e41e361ad4e0f14 namespace=k8s.io Feb 13 19:59:16.524015 containerd[1437]: time="2025-02-13T19:59:16.523888621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:59:16.895293 kubelet[2465]: E0213 19:59:16.895172 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7254x" podUID="968a0298-0e67-414e-9ce9-912c9a8051e6" Feb 13 19:59:16.995512 kubelet[2465]: I0213 19:59:16.995035 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:59:16.995512 kubelet[2465]: E0213 19:59:16.995232 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:16.995512 kubelet[2465]: E0213 19:59:16.995325 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:16.995893 kubelet[2465]: E0213 19:59:16.995567 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:16.996740 containerd[1437]: time="2025-02-13T19:59:16.996654229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:59:18.657647 kubelet[2465]: I0213 19:59:18.656733 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:59:18.657647 kubelet[2465]: E0213 19:59:18.657164 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:18.895517 kubelet[2465]: E0213 19:59:18.895472 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7254x" podUID="968a0298-0e67-414e-9ce9-912c9a8051e6" Feb 13 19:59:18.998033 kubelet[2465]: E0213 19:59:18.997919 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:20.895767 kubelet[2465]: E0213 19:59:20.895723 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7254x" podUID="968a0298-0e67-414e-9ce9-912c9a8051e6" Feb 13 19:59:21.219637 containerd[1437]: time="2025-02-13T19:59:21.219403453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:21.228822 containerd[1437]: time="2025-02-13T19:59:21.228632733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 19:59:21.229893 containerd[1437]: time="2025-02-13T19:59:21.229865752Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:21.231970 containerd[1437]: time="2025-02-13T19:59:21.231905754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:21.232809 containerd[1437]: time="2025-02-13T19:59:21.232777989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.236080671s" Feb 13 19:59:21.232809 containerd[1437]: time="2025-02-13T19:59:21.232809195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 19:59:21.234881 containerd[1437]: time="2025-02-13T19:59:21.234846797Z" level=info msg="CreateContainer within sandbox \"47802bd9a0fbdaed122f7989d03cdde15c1a4c841d803c33c2180718e2cef277\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:59:21.244254 containerd[1437]: time="2025-02-13T19:59:21.244209860Z" level=info msg="CreateContainer within sandbox \"47802bd9a0fbdaed122f7989d03cdde15c1a4c841d803c33c2180718e2cef277\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f83721c1f5782addd3609f3d061d6a18efedb42f157c09525cbf9c4e23148172\"" Feb 13 19:59:21.244744 containerd[1437]: time="2025-02-13T19:59:21.244715630Z" level=info msg="StartContainer for \"f83721c1f5782addd3609f3d061d6a18efedb42f157c09525cbf9c4e23148172\"" Feb 13 19:59:21.283712 systemd[1]: Started cri-containerd-f83721c1f5782addd3609f3d061d6a18efedb42f157c09525cbf9c4e23148172.scope - libcontainer container f83721c1f5782addd3609f3d061d6a18efedb42f157c09525cbf9c4e23148172. Feb 13 19:59:21.304243 containerd[1437]: time="2025-02-13T19:59:21.304204439Z" level=info msg="StartContainer for \"f83721c1f5782addd3609f3d061d6a18efedb42f157c09525cbf9c4e23148172\" returns successfully" Feb 13 19:59:21.849106 systemd[1]: cri-containerd-f83721c1f5782addd3609f3d061d6a18efedb42f157c09525cbf9c4e23148172.scope: Deactivated successfully. 
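
The repeated "cni plugin not initialized" / "network is not ready" entries above, and the RunPodSandbox failures further below, are waiting on the same thing: Calico's install-cni container has to drop a CNI network config, and calico-node has to write /var/lib/calico/nodename before pod networking can be set up. A minimal Go sketch of that readiness check follows; it only restates the file checks these log entries reference, it is not the kubelet's or Calico's actual code, and the /etc/cni/net.d path is an assumed default that does not appear in this log.

// Illustrative readiness probe mirroring the two conditions this log waits on.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const cniConfDir = "/etc/cni/net.d"               // assumed default, not shown in this log
const calicoNodename = "/var/lib/calico/nodename" // reported missing in the sandbox errors below

func cniReady() error {
	// Until a CNI network config exists, the kubelet keeps reporting
	// "NetworkReady=false ... cni plugin not initialized".
	matches, err := filepath.Glob(filepath.Join(cniConfDir, "*.conf*"))
	if err != nil {
		return err
	}
	if len(matches) == 0 {
		return fmt.Errorf("no CNI network config in %s", cniConfDir)
	}
	// Calico's CNI plugin also stats /var/lib/calico/nodename; while it is
	// absent, every sandbox ADD fails exactly as in the RunPodSandbox errors below.
	if _, err := os.Stat(calicoNodename); err != nil {
		return fmt.Errorf("calico not ready: %w", err)
	}
	return nil
}

func main() {
	if err := cniReady(); err != nil {
		fmt.Println("network not ready:", err)
		return
	}
	fmt.Println("CNI config present and calico nodename written")
}
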
Feb 13 19:59:21.867636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f83721c1f5782addd3609f3d061d6a18efedb42f157c09525cbf9c4e23148172-rootfs.mount: Deactivated successfully. Feb 13 19:59:21.891374 kubelet[2465]: I0213 19:59:21.891344 2465 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:59:21.928528 containerd[1437]: time="2025-02-13T19:59:21.928468791Z" level=info msg="shim disconnected" id=f83721c1f5782addd3609f3d061d6a18efedb42f157c09525cbf9c4e23148172 namespace=k8s.io Feb 13 19:59:21.928528 containerd[1437]: time="2025-02-13T19:59:21.928525001Z" level=warning msg="cleaning up after shim disconnected" id=f83721c1f5782addd3609f3d061d6a18efedb42f157c09525cbf9c4e23148172 namespace=k8s.io Feb 13 19:59:21.928528 containerd[1437]: time="2025-02-13T19:59:21.928535083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:59:21.959125 systemd[1]: Created slice kubepods-burstable-poda5074f3c_5b40_44d9_8dc9_780c1febe27b.slice - libcontainer container kubepods-burstable-poda5074f3c_5b40_44d9_8dc9_780c1febe27b.slice. Feb 13 19:59:21.964459 systemd[1]: Created slice kubepods-besteffort-podbda86074_cc02_4a98_a41e_338364b60d5a.slice - libcontainer container kubepods-besteffort-podbda86074_cc02_4a98_a41e_338364b60d5a.slice. Feb 13 19:59:21.970985 systemd[1]: Created slice kubepods-burstable-podc2596e28_8cb1_4ed6_acba_877fc0496dcf.slice - libcontainer container kubepods-burstable-podc2596e28_8cb1_4ed6_acba_877fc0496dcf.slice. Feb 13 19:59:21.976921 systemd[1]: Created slice kubepods-besteffort-pod77cab5ba_c279_4f9c_8d8d_a9a61221294c.slice - libcontainer container kubepods-besteffort-pod77cab5ba_c279_4f9c_8d8d_a9a61221294c.slice. Feb 13 19:59:21.990065 systemd[1]: Created slice kubepods-besteffort-podc4b7000d_254d_41aa_be0a_008e4b815cae.slice - libcontainer container kubepods-besteffort-podc4b7000d_254d_41aa_be0a_008e4b815cae.slice. 
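
The "Created slice kubepods-burstable-pod…" and "kubepods-besteffort-pod…" units above show the systemd cgroup driver's naming pattern: the pod's QoS class plus its UID with dashes turned into underscores (compare pod UID a5074f3c-5b40-44d9-8dc9-780c1febe27b in the volume entries below with the slice name here). A small sketch of that observable mapping, not the kubelet's actual implementation:

package main

import (
	"fmt"
	"strings"
)

// sliceName rebuilds the systemd slice name seen in the log from a pod's
// QoS class ("burstable", "besteffort", or "" for guaranteed) and its UID.
func sliceName(qosClass, podUID string) string {
	// Dashes become underscores so the UID survives systemd's unit-name escaping.
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// UID of coredns-6f6b679f8f-qq6z6 as listed in the volume entries below.
	fmt.Println(sliceName("burstable", "a5074f3c-5b40-44d9-8dc9-780c1febe27b"))
	// UID of calico-kube-controllers-7bfc48c574-8m478.
	fmt.Println(sliceName("besteffort", "bda86074-cc02-4a98-a41e-338364b60d5a"))
}
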
Feb 13 19:59:22.006690 kubelet[2465]: E0213 19:59:22.006492 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:22.008493 containerd[1437]: time="2025-02-13T19:59:22.008108134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:59:22.016373 kubelet[2465]: I0213 19:59:22.016137 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2596e28-8cb1-4ed6-acba-877fc0496dcf-config-volume\") pod \"coredns-6f6b679f8f-wptkv\" (UID: \"c2596e28-8cb1-4ed6-acba-877fc0496dcf\") " pod="kube-system/coredns-6f6b679f8f-wptkv" Feb 13 19:59:22.016373 kubelet[2465]: I0213 19:59:22.016178 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda86074-cc02-4a98-a41e-338364b60d5a-tigera-ca-bundle\") pod \"calico-kube-controllers-7bfc48c574-8m478\" (UID: \"bda86074-cc02-4a98-a41e-338364b60d5a\") " pod="calico-system/calico-kube-controllers-7bfc48c574-8m478" Feb 13 19:59:22.016373 kubelet[2465]: I0213 19:59:22.016200 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8rhj\" (UniqueName: \"kubernetes.io/projected/a5074f3c-5b40-44d9-8dc9-780c1febe27b-kube-api-access-k8rhj\") pod \"coredns-6f6b679f8f-qq6z6\" (UID: \"a5074f3c-5b40-44d9-8dc9-780c1febe27b\") " pod="kube-system/coredns-6f6b679f8f-qq6z6" Feb 13 19:59:22.016373 kubelet[2465]: I0213 19:59:22.016221 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5074f3c-5b40-44d9-8dc9-780c1febe27b-config-volume\") pod \"coredns-6f6b679f8f-qq6z6\" (UID: \"a5074f3c-5b40-44d9-8dc9-780c1febe27b\") " pod="kube-system/coredns-6f6b679f8f-qq6z6" Feb 13 19:59:22.016373 kubelet[2465]: I0213 19:59:22.016240 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw47f\" (UniqueName: \"kubernetes.io/projected/bda86074-cc02-4a98-a41e-338364b60d5a-kube-api-access-jw47f\") pod \"calico-kube-controllers-7bfc48c574-8m478\" (UID: \"bda86074-cc02-4a98-a41e-338364b60d5a\") " pod="calico-system/calico-kube-controllers-7bfc48c574-8m478" Feb 13 19:59:22.016690 kubelet[2465]: I0213 19:59:22.016668 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c6tc\" (UniqueName: \"kubernetes.io/projected/77cab5ba-c279-4f9c-8d8d-a9a61221294c-kube-api-access-2c6tc\") pod \"calico-apiserver-797fd6d4c5-5rnqh\" (UID: \"77cab5ba-c279-4f9c-8d8d-a9a61221294c\") " pod="calico-apiserver/calico-apiserver-797fd6d4c5-5rnqh" Feb 13 19:59:22.016865 kubelet[2465]: I0213 19:59:22.016711 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjh64\" (UniqueName: \"kubernetes.io/projected/c4b7000d-254d-41aa-be0a-008e4b815cae-kube-api-access-qjh64\") pod \"calico-apiserver-797fd6d4c5-52snj\" (UID: \"c4b7000d-254d-41aa-be0a-008e4b815cae\") " pod="calico-apiserver/calico-apiserver-797fd6d4c5-52snj" Feb 13 19:59:22.016865 kubelet[2465]: I0213 19:59:22.016773 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/c4b7000d-254d-41aa-be0a-008e4b815cae-calico-apiserver-certs\") pod \"calico-apiserver-797fd6d4c5-52snj\" (UID: \"c4b7000d-254d-41aa-be0a-008e4b815cae\") " pod="calico-apiserver/calico-apiserver-797fd6d4c5-52snj" Feb 13 19:59:22.016918 kubelet[2465]: I0213 19:59:22.016892 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kpqg\" (UniqueName: \"kubernetes.io/projected/c2596e28-8cb1-4ed6-acba-877fc0496dcf-kube-api-access-5kpqg\") pod \"coredns-6f6b679f8f-wptkv\" (UID: \"c2596e28-8cb1-4ed6-acba-877fc0496dcf\") " pod="kube-system/coredns-6f6b679f8f-wptkv" Feb 13 19:59:22.017007 kubelet[2465]: I0213 19:59:22.016943 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77cab5ba-c279-4f9c-8d8d-a9a61221294c-calico-apiserver-certs\") pod \"calico-apiserver-797fd6d4c5-5rnqh\" (UID: \"77cab5ba-c279-4f9c-8d8d-a9a61221294c\") " pod="calico-apiserver/calico-apiserver-797fd6d4c5-5rnqh" Feb 13 19:59:22.262266 kubelet[2465]: E0213 19:59:22.262167 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:22.263728 containerd[1437]: time="2025-02-13T19:59:22.263684953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qq6z6,Uid:a5074f3c-5b40-44d9-8dc9-780c1febe27b,Namespace:kube-system,Attempt:0,}" Feb 13 19:59:22.269066 containerd[1437]: time="2025-02-13T19:59:22.268965613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc48c574-8m478,Uid:bda86074-cc02-4a98-a41e-338364b60d5a,Namespace:calico-system,Attempt:0,}" Feb 13 19:59:22.275563 kubelet[2465]: E0213 19:59:22.275529 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:22.276251 containerd[1437]: time="2025-02-13T19:59:22.275961326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wptkv,Uid:c2596e28-8cb1-4ed6-acba-877fc0496dcf,Namespace:kube-system,Attempt:0,}" Feb 13 19:59:22.281181 containerd[1437]: time="2025-02-13T19:59:22.280682251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797fd6d4c5-5rnqh,Uid:77cab5ba-c279-4f9c-8d8d-a9a61221294c,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:59:22.292711 containerd[1437]: time="2025-02-13T19:59:22.292678657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797fd6d4c5-52snj,Uid:c4b7000d-254d-41aa-be0a-008e4b815cae,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:59:22.781641 containerd[1437]: time="2025-02-13T19:59:22.781548975Z" level=error msg="Failed to destroy network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.782187 containerd[1437]: time="2025-02-13T19:59:22.781914437Z" level=error msg="encountered an error cleaning up failed sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.782187 containerd[1437]: time="2025-02-13T19:59:22.781975488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qq6z6,Uid:a5074f3c-5b40-44d9-8dc9-780c1febe27b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.783117 containerd[1437]: time="2025-02-13T19:59:22.783078236Z" level=error msg="Failed to destroy network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.783494 containerd[1437]: time="2025-02-13T19:59:22.783455620Z" level=error msg="encountered an error cleaning up failed sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.783547 containerd[1437]: time="2025-02-13T19:59:22.783522872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797fd6d4c5-5rnqh,Uid:77cab5ba-c279-4f9c-8d8d-a9a61221294c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.784086 kubelet[2465]: E0213 19:59:22.783911 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.784086 kubelet[2465]: E0213 19:59:22.783989 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797fd6d4c5-5rnqh" Feb 13 19:59:22.784086 kubelet[2465]: E0213 19:59:22.783910 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.784086 kubelet[2465]: E0213 19:59:22.784060 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qq6z6" Feb 13 19:59:22.787362 kubelet[2465]: E0213 19:59:22.787107 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qq6z6" Feb 13 19:59:22.787362 kubelet[2465]: E0213 19:59:22.787188 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qq6z6_kube-system(a5074f3c-5b40-44d9-8dc9-780c1febe27b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qq6z6_kube-system(a5074f3c-5b40-44d9-8dc9-780c1febe27b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qq6z6" podUID="a5074f3c-5b40-44d9-8dc9-780c1febe27b" Feb 13 19:59:22.787622 kubelet[2465]: E0213 19:59:22.787525 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797fd6d4c5-5rnqh" Feb 13 19:59:22.787780 containerd[1437]: time="2025-02-13T19:59:22.787579403Z" level=error msg="Failed to destroy network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.787824 kubelet[2465]: E0213 19:59:22.787731 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-797fd6d4c5-5rnqh_calico-apiserver(77cab5ba-c279-4f9c-8d8d-a9a61221294c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-797fd6d4c5-5rnqh_calico-apiserver(77cab5ba-c279-4f9c-8d8d-a9a61221294c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797fd6d4c5-5rnqh" podUID="77cab5ba-c279-4f9c-8d8d-a9a61221294c" Feb 13 19:59:22.788013 containerd[1437]: time="2025-02-13T19:59:22.787955547Z" level=error msg="encountered an error cleaning up failed sandbox 
\"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.788049 containerd[1437]: time="2025-02-13T19:59:22.788023159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc48c574-8m478,Uid:bda86074-cc02-4a98-a41e-338364b60d5a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.788229 kubelet[2465]: E0213 19:59:22.788197 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.788280 kubelet[2465]: E0213 19:59:22.788240 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfc48c574-8m478" Feb 13 19:59:22.788280 kubelet[2465]: E0213 19:59:22.788259 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfc48c574-8m478" Feb 13 19:59:22.788329 kubelet[2465]: E0213 19:59:22.788288 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bfc48c574-8m478_calico-system(bda86074-cc02-4a98-a41e-338364b60d5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bfc48c574-8m478_calico-system(bda86074-cc02-4a98-a41e-338364b60d5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bfc48c574-8m478" podUID="bda86074-cc02-4a98-a41e-338364b60d5a" Feb 13 19:59:22.790702 containerd[1437]: time="2025-02-13T19:59:22.790662049Z" level=error msg="Failed to destroy network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 19:59:22.791020 containerd[1437]: time="2025-02-13T19:59:22.790991145Z" level=error msg="encountered an error cleaning up failed sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.791052 containerd[1437]: time="2025-02-13T19:59:22.791035433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wptkv,Uid:c2596e28-8cb1-4ed6-acba-877fc0496dcf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.791201 kubelet[2465]: E0213 19:59:22.791174 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.791390 kubelet[2465]: E0213 19:59:22.791280 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wptkv" Feb 13 19:59:22.791390 kubelet[2465]: E0213 19:59:22.791303 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wptkv" Feb 13 19:59:22.791390 kubelet[2465]: E0213 19:59:22.791351 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wptkv_kube-system(c2596e28-8cb1-4ed6-acba-877fc0496dcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wptkv_kube-system(c2596e28-8cb1-4ed6-acba-877fc0496dcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wptkv" podUID="c2596e28-8cb1-4ed6-acba-877fc0496dcf" Feb 13 19:59:22.793122 containerd[1437]: time="2025-02-13T19:59:22.793076741Z" level=error msg="Failed to destroy network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.793380 containerd[1437]: time="2025-02-13T19:59:22.793341466Z" level=error msg="encountered an error cleaning up failed sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.793503 containerd[1437]: time="2025-02-13T19:59:22.793404997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797fd6d4c5-52snj,Uid:c4b7000d-254d-41aa-be0a-008e4b815cae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.793950 kubelet[2465]: E0213 19:59:22.793900 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.796088 kubelet[2465]: E0213 19:59:22.793947 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797fd6d4c5-52snj" Feb 13 19:59:22.796217 kubelet[2465]: E0213 19:59:22.796089 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797fd6d4c5-52snj" Feb 13 19:59:22.796217 kubelet[2465]: E0213 19:59:22.796134 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-797fd6d4c5-52snj_calico-apiserver(c4b7000d-254d-41aa-be0a-008e4b815cae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-797fd6d4c5-52snj_calico-apiserver(c4b7000d-254d-41aa-be0a-008e4b815cae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797fd6d4c5-52snj" podUID="c4b7000d-254d-41aa-be0a-008e4b815cae" Feb 13 19:59:22.900806 systemd[1]: Created slice kubepods-besteffort-pod968a0298_0e67_414e_9ce9_912c9a8051e6.slice - libcontainer container 
kubepods-besteffort-pod968a0298_0e67_414e_9ce9_912c9a8051e6.slice. Feb 13 19:59:22.902977 containerd[1437]: time="2025-02-13T19:59:22.902935073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7254x,Uid:968a0298-0e67-414e-9ce9-912c9a8051e6,Namespace:calico-system,Attempt:0,}" Feb 13 19:59:22.950094 containerd[1437]: time="2025-02-13T19:59:22.950040385Z" level=error msg="Failed to destroy network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.950388 containerd[1437]: time="2025-02-13T19:59:22.950360159Z" level=error msg="encountered an error cleaning up failed sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.950456 containerd[1437]: time="2025-02-13T19:59:22.950429331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7254x,Uid:968a0298-0e67-414e-9ce9-912c9a8051e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.954884 kubelet[2465]: E0213 19:59:22.954844 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:22.954972 kubelet[2465]: E0213 19:59:22.954905 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7254x" Feb 13 19:59:22.954972 kubelet[2465]: E0213 19:59:22.954924 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7254x" Feb 13 19:59:22.955026 kubelet[2465]: E0213 19:59:22.954970 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7254x_calico-system(968a0298-0e67-414e-9ce9-912c9a8051e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7254x_calico-system(968a0298-0e67-414e-9ce9-912c9a8051e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7254x" podUID="968a0298-0e67-414e-9ce9-912c9a8051e6" Feb 13 19:59:23.008278 kubelet[2465]: I0213 19:59:23.008247 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 19:59:23.009348 containerd[1437]: time="2025-02-13T19:59:23.008888373Z" level=info msg="StopPodSandbox for \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\"" Feb 13 19:59:23.009348 containerd[1437]: time="2025-02-13T19:59:23.009099688Z" level=info msg="Ensure that sandbox 102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7 in task-service has been cleanup successfully" Feb 13 19:59:23.010102 kubelet[2465]: I0213 19:59:23.010005 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 19:59:23.010537 containerd[1437]: time="2025-02-13T19:59:23.010404222Z" level=info msg="StopPodSandbox for \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\"" Feb 13 19:59:23.010631 containerd[1437]: time="2025-02-13T19:59:23.010543404Z" level=info msg="Ensure that sandbox 9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a in task-service has been cleanup successfully" Feb 13 19:59:23.012613 kubelet[2465]: I0213 19:59:23.012401 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 19:59:23.013033 containerd[1437]: time="2025-02-13T19:59:23.012877827Z" level=info msg="StopPodSandbox for \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\"" Feb 13 19:59:23.013033 containerd[1437]: time="2025-02-13T19:59:23.013015409Z" level=info msg="Ensure that sandbox 82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3 in task-service has been cleanup successfully" Feb 13 19:59:23.015167 kubelet[2465]: I0213 19:59:23.015140 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 19:59:23.015622 containerd[1437]: time="2025-02-13T19:59:23.015573468Z" level=info msg="StopPodSandbox for \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\"" Feb 13 19:59:23.016014 containerd[1437]: time="2025-02-13T19:59:23.015987136Z" level=info msg="Ensure that sandbox e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc in task-service has been cleanup successfully" Feb 13 19:59:23.016324 kubelet[2465]: I0213 19:59:23.016290 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 19:59:23.017142 containerd[1437]: time="2025-02-13T19:59:23.016714535Z" level=info msg="StopPodSandbox for \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\"" Feb 13 19:59:23.017142 containerd[1437]: time="2025-02-13T19:59:23.016851878Z" level=info msg="Ensure that sandbox 34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba in task-service has been cleanup successfully" Feb 13 19:59:23.019328 kubelet[2465]: I0213 19:59:23.019264 2465 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 19:59:23.020044 containerd[1437]: time="2025-02-13T19:59:23.019966548Z" level=info msg="StopPodSandbox for \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\"" Feb 13 19:59:23.023175 containerd[1437]: time="2025-02-13T19:59:23.023138227Z" level=info msg="Ensure that sandbox 2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8 in task-service has been cleanup successfully" Feb 13 19:59:23.067492 containerd[1437]: time="2025-02-13T19:59:23.066633392Z" level=error msg="StopPodSandbox for \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\" failed" error="failed to destroy network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:23.067632 kubelet[2465]: E0213 19:59:23.067286 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 19:59:23.067632 kubelet[2465]: E0213 19:59:23.067382 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7"} Feb 13 19:59:23.067632 kubelet[2465]: E0213 19:59:23.067573 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4b7000d-254d-41aa-be0a-008e4b815cae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:59:23.067632 kubelet[2465]: E0213 19:59:23.067608 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4b7000d-254d-41aa-be0a-008e4b815cae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797fd6d4c5-52snj" podUID="c4b7000d-254d-41aa-be0a-008e4b815cae" Feb 13 19:59:23.075901 containerd[1437]: time="2025-02-13T19:59:23.075840300Z" level=error msg="StopPodSandbox for \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\" failed" error="failed to destroy network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:23.076181 containerd[1437]: 
time="2025-02-13T19:59:23.076125947Z" level=error msg="StopPodSandbox for \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\" failed" error="failed to destroy network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:23.076251 kubelet[2465]: E0213 19:59:23.076225 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 19:59:23.076315 kubelet[2465]: E0213 19:59:23.076275 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc"} Feb 13 19:59:23.076315 kubelet[2465]: E0213 19:59:23.076306 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77cab5ba-c279-4f9c-8d8d-a9a61221294c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:59:23.076404 kubelet[2465]: E0213 19:59:23.076326 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77cab5ba-c279-4f9c-8d8d-a9a61221294c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797fd6d4c5-5rnqh" podUID="77cab5ba-c279-4f9c-8d8d-a9a61221294c" Feb 13 19:59:23.076788 kubelet[2465]: E0213 19:59:23.076756 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 19:59:23.076831 kubelet[2465]: E0213 19:59:23.076794 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3"} Feb 13 19:59:23.076831 kubelet[2465]: E0213 19:59:23.076823 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"968a0298-0e67-414e-9ce9-912c9a8051e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:59:23.076907 kubelet[2465]: E0213 19:59:23.076870 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"968a0298-0e67-414e-9ce9-912c9a8051e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7254x" podUID="968a0298-0e67-414e-9ce9-912c9a8051e6" Feb 13 19:59:23.078317 containerd[1437]: time="2025-02-13T19:59:23.078284021Z" level=error msg="StopPodSandbox for \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\" failed" error="failed to destroy network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:23.078690 containerd[1437]: time="2025-02-13T19:59:23.078363994Z" level=error msg="StopPodSandbox for \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\" failed" error="failed to destroy network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:23.078935 kubelet[2465]: E0213 19:59:23.078878 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 19:59:23.078998 kubelet[2465]: E0213 19:59:23.078936 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a"} Feb 13 19:59:23.078998 kubelet[2465]: E0213 19:59:23.078944 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 19:59:23.078998 kubelet[2465]: E0213 19:59:23.078988 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8"} Feb 13 19:59:23.079059 kubelet[2465]: E0213 19:59:23.079014 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5074f3c-5b40-44d9-8dc9-780c1febe27b\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:59:23.079059 kubelet[2465]: E0213 19:59:23.079035 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5074f3c-5b40-44d9-8dc9-780c1febe27b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qq6z6" podUID="a5074f3c-5b40-44d9-8dc9-780c1febe27b" Feb 13 19:59:23.079059 kubelet[2465]: E0213 19:59:23.078959 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c2596e28-8cb1-4ed6-acba-877fc0496dcf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:59:23.079165 kubelet[2465]: E0213 19:59:23.079062 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2596e28-8cb1-4ed6-acba-877fc0496dcf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wptkv" podUID="c2596e28-8cb1-4ed6-acba-877fc0496dcf" Feb 13 19:59:23.081270 containerd[1437]: time="2025-02-13T19:59:23.081226423Z" level=error msg="StopPodSandbox for \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\" failed" error="failed to destroy network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:23.081430 kubelet[2465]: E0213 19:59:23.081392 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 19:59:23.081477 kubelet[2465]: E0213 19:59:23.081437 2465 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba"} Feb 13 19:59:23.081477 kubelet[2465]: E0213 19:59:23.081462 2465 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"bda86074-cc02-4a98-a41e-338364b60d5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:59:23.081542 kubelet[2465]: E0213 19:59:23.081481 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bda86074-cc02-4a98-a41e-338364b60d5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bfc48c574-8m478" podUID="bda86074-cc02-4a98-a41e-338364b60d5a" Feb 13 19:59:23.243657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a-shm.mount: Deactivated successfully. Feb 13 19:59:23.243752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba-shm.mount: Deactivated successfully. Feb 13 19:59:23.243803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8-shm.mount: Deactivated successfully. Feb 13 19:59:25.947129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908507498.mount: Deactivated successfully. Feb 13 19:59:26.028742 containerd[1437]: time="2025-02-13T19:59:26.028692950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:26.029273 containerd[1437]: time="2025-02-13T19:59:26.029235949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:59:26.030138 containerd[1437]: time="2025-02-13T19:59:26.030099155Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:26.031966 containerd[1437]: time="2025-02-13T19:59:26.031932343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:26.032648 containerd[1437]: time="2025-02-13T19:59:26.032609802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.024446459s" Feb 13 19:59:26.032648 containerd[1437]: time="2025-02-13T19:59:26.032642527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:59:26.048322 containerd[1437]: time="2025-02-13T19:59:26.048283612Z" level=info msg="CreateContainer within sandbox 
\"47802bd9a0fbdaed122f7989d03cdde15c1a4c841d803c33c2180718e2cef277\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:59:26.067682 containerd[1437]: time="2025-02-13T19:59:26.067633639Z" level=info msg="CreateContainer within sandbox \"47802bd9a0fbdaed122f7989d03cdde15c1a4c841d803c33c2180718e2cef277\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ec49e263c97576fe845844cc9c079ead906dd17f2fcbf28df631941d0d53c5a0\"" Feb 13 19:59:26.068410 containerd[1437]: time="2025-02-13T19:59:26.068153915Z" level=info msg="StartContainer for \"ec49e263c97576fe845844cc9c079ead906dd17f2fcbf28df631941d0d53c5a0\"" Feb 13 19:59:26.118801 systemd[1]: Started cri-containerd-ec49e263c97576fe845844cc9c079ead906dd17f2fcbf28df631941d0d53c5a0.scope - libcontainer container ec49e263c97576fe845844cc9c079ead906dd17f2fcbf28df631941d0d53c5a0. Feb 13 19:59:26.148125 containerd[1437]: time="2025-02-13T19:59:26.147970655Z" level=info msg="StartContainer for \"ec49e263c97576fe845844cc9c079ead906dd17f2fcbf28df631941d0d53c5a0\" returns successfully" Feb 13 19:59:26.311974 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:59:26.312115 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:59:27.034643 kubelet[2465]: E0213 19:59:27.034445 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:27.048447 kubelet[2465]: I0213 19:59:27.048390 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6qx8c" podStartSLOduration=1.766075545 podStartE2EDuration="14.048375547s" podCreationTimestamp="2025-02-13 19:59:13 +0000 UTC" firstStartedPulling="2025-02-13 19:59:13.750932091 +0000 UTC m=+12.931695922" lastFinishedPulling="2025-02-13 19:59:26.033232093 +0000 UTC m=+25.213995924" observedRunningTime="2025-02-13 19:59:27.04797277 +0000 UTC m=+26.228736601" watchObservedRunningTime="2025-02-13 19:59:27.048375547 +0000 UTC m=+26.229139378" Feb 13 19:59:27.713621 kernel: bpftool[3839]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:59:27.866736 systemd-networkd[1378]: vxlan.calico: Link UP Feb 13 19:59:27.866747 systemd-networkd[1378]: vxlan.calico: Gained carrier Feb 13 19:59:28.036081 kubelet[2465]: I0213 19:59:28.035968 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:59:28.036544 kubelet[2465]: E0213 19:59:28.036383 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:29.343703 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Feb 13 19:59:30.914488 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:34834.service - OpenSSH per-connection server daemon (10.0.0.1:34834). Feb 13 19:59:30.955839 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 34834 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:30.957375 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:30.961639 systemd-logind[1419]: New session 8 of user core. Feb 13 19:59:30.970743 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 19:59:31.167904 sshd[3917]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:31.170853 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:34834.service: Deactivated successfully. Feb 13 19:59:31.172533 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:59:31.173843 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:59:31.174738 systemd-logind[1419]: Removed session 8. Feb 13 19:59:33.895466 containerd[1437]: time="2025-02-13T19:59:33.895425407Z" level=info msg="StopPodSandbox for \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\"" Feb 13 19:59:33.896022 containerd[1437]: time="2025-02-13T19:59:33.895444889Z" level=info msg="StopPodSandbox for \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\"" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.002 [INFO][3973] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.003 [INFO][3973] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" iface="eth0" netns="/var/run/netns/cni-be17fbc2-9575-513b-c90e-d0173328616f" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.003 [INFO][3973] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" iface="eth0" netns="/var/run/netns/cni-be17fbc2-9575-513b-c90e-d0173328616f" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.004 [INFO][3973] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" iface="eth0" netns="/var/run/netns/cni-be17fbc2-9575-513b-c90e-d0173328616f" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.004 [INFO][3973] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.004 [INFO][3973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.128 [INFO][3986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" HandleID="k8s-pod-network.e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.128 [INFO][3986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.128 [INFO][3986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.137 [WARNING][3986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" HandleID="k8s-pod-network.e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.137 [INFO][3986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" HandleID="k8s-pod-network.e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.138 [INFO][3986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:34.142411 containerd[1437]: 2025-02-13 19:59:34.140 [INFO][3973] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 19:59:34.143339 containerd[1437]: time="2025-02-13T19:59:34.143040287Z" level=info msg="TearDown network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\" successfully" Feb 13 19:59:34.143339 containerd[1437]: time="2025-02-13T19:59:34.143069811Z" level=info msg="StopPodSandbox for \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\" returns successfully" Feb 13 19:59:34.144490 containerd[1437]: time="2025-02-13T19:59:34.144467928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797fd6d4c5-5rnqh,Uid:77cab5ba-c279-4f9c-8d8d-a9a61221294c,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:59:34.145759 systemd[1]: run-netns-cni\x2dbe17fbc2\x2d9575\x2d513b\x2dc90e\x2dd0173328616f.mount: Deactivated successfully. Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.005 [INFO][3972] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.005 [INFO][3972] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" iface="eth0" netns="/var/run/netns/cni-2fc2c822-f3e2-de8b-2f26-20e65f07b5bb" Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.006 [INFO][3972] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" iface="eth0" netns="/var/run/netns/cni-2fc2c822-f3e2-de8b-2f26-20e65f07b5bb" Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.007 [INFO][3972] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" iface="eth0" netns="/var/run/netns/cni-2fc2c822-f3e2-de8b-2f26-20e65f07b5bb" Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.008 [INFO][3972] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.008 [INFO][3972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.128 [INFO][3987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" HandleID="k8s-pod-network.82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.128 [INFO][3987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.138 [INFO][3987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.148 [WARNING][3987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" HandleID="k8s-pod-network.82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.148 [INFO][3987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" HandleID="k8s-pod-network.82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.149 [INFO][3987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:34.152679 containerd[1437]: 2025-02-13 19:59:34.151 [INFO][3972] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 19:59:34.153033 containerd[1437]: time="2025-02-13T19:59:34.152905399Z" level=info msg="TearDown network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\" successfully" Feb 13 19:59:34.153033 containerd[1437]: time="2025-02-13T19:59:34.152927601Z" level=info msg="StopPodSandbox for \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\" returns successfully" Feb 13 19:59:34.153661 containerd[1437]: time="2025-02-13T19:59:34.153485624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7254x,Uid:968a0298-0e67-414e-9ce9-912c9a8051e6,Namespace:calico-system,Attempt:1,}" Feb 13 19:59:34.154470 systemd[1]: run-netns-cni\x2d2fc2c822\x2df3e2\x2dde8b\x2d2f26\x2d20e65f07b5bb.mount: Deactivated successfully. 
Feb 13 19:59:34.268007 systemd-networkd[1378]: calic44c2f8884b: Link UP Feb 13 19:59:34.268221 systemd-networkd[1378]: calic44c2f8884b: Gained carrier Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.194 [INFO][4003] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0 calico-apiserver-797fd6d4c5- calico-apiserver 77cab5ba-c279-4f9c-8d8d-a9a61221294c 835 0 2025-02-13 19:59:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:797fd6d4c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-797fd6d4c5-5rnqh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic44c2f8884b [] []}} ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-5rnqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.195 [INFO][4003] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-5rnqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.224 [INFO][4030] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" HandleID="k8s-pod-network.5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.241 [INFO][4030] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" HandleID="k8s-pod-network.5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d90b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-797fd6d4c5-5rnqh", "timestamp":"2025-02-13 19:59:34.224410854 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.241 [INFO][4030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.241 [INFO][4030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.241 [INFO][4030] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.243 [INFO][4030] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" host="localhost" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.247 [INFO][4030] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.251 [INFO][4030] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.252 [INFO][4030] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.254 [INFO][4030] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.254 [INFO][4030] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" host="localhost" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.256 [INFO][4030] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280 Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.259 [INFO][4030] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" host="localhost" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.264 [INFO][4030] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" host="localhost" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.264 [INFO][4030] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" host="localhost" Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.264 [INFO][4030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:59:34.280228 containerd[1437]: 2025-02-13 19:59:34.264 [INFO][4030] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" HandleID="k8s-pod-network.5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.280770 containerd[1437]: 2025-02-13 19:59:34.266 [INFO][4003] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-5rnqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0", GenerateName:"calico-apiserver-797fd6d4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"77cab5ba-c279-4f9c-8d8d-a9a61221294c", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797fd6d4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-797fd6d4c5-5rnqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic44c2f8884b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:34.280770 containerd[1437]: 2025-02-13 19:59:34.266 [INFO][4003] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-5rnqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.280770 containerd[1437]: 2025-02-13 19:59:34.266 [INFO][4003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic44c2f8884b ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-5rnqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.280770 containerd[1437]: 2025-02-13 19:59:34.268 [INFO][4003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-5rnqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.280770 containerd[1437]: 2025-02-13 19:59:34.268 [INFO][4003] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-5rnqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0", GenerateName:"calico-apiserver-797fd6d4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"77cab5ba-c279-4f9c-8d8d-a9a61221294c", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797fd6d4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280", Pod:"calico-apiserver-797fd6d4c5-5rnqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic44c2f8884b", MAC:"d2:f6:b5:eb:6d:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:34.280770 containerd[1437]: 2025-02-13 19:59:34.276 [INFO][4003] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-5rnqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 19:59:34.302812 containerd[1437]: time="2025-02-13T19:59:34.302684791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:34.303631 containerd[1437]: time="2025-02-13T19:59:34.303564890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:34.303631 containerd[1437]: time="2025-02-13T19:59:34.303594414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:34.303751 containerd[1437]: time="2025-02-13T19:59:34.303677103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:34.327745 systemd[1]: Started cri-containerd-5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280.scope - libcontainer container 5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280. 
Feb 13 19:59:34.337168 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:59:34.372045 containerd[1437]: time="2025-02-13T19:59:34.371997239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797fd6d4c5-5rnqh,Uid:77cab5ba-c279-4f9c-8d8d-a9a61221294c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280\"" Feb 13 19:59:34.374024 containerd[1437]: time="2025-02-13T19:59:34.373982783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:59:34.382131 systemd-networkd[1378]: cali95db8e0e3f1: Link UP Feb 13 19:59:34.382789 systemd-networkd[1378]: cali95db8e0e3f1: Gained carrier Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.199 [INFO][4014] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7254x-eth0 csi-node-driver- calico-system 968a0298-0e67-414e-9ce9-912c9a8051e6 836 0 2025-02-13 19:59:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7254x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali95db8e0e3f1 [] []}} ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Namespace="calico-system" Pod="csi-node-driver-7254x" WorkloadEndpoint="localhost-k8s-csi--node--driver--7254x-" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.199 [INFO][4014] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Namespace="calico-system" Pod="csi-node-driver-7254x" WorkloadEndpoint="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.225 [INFO][4036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" HandleID="k8s-pod-network.29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.241 [INFO][4036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" HandleID="k8s-pod-network.29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Workload="localhost-k8s-csi--node--driver--7254x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e6fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7254x", "timestamp":"2025-02-13 19:59:34.225169499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.241 [INFO][4036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.264 [INFO][4036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.264 [INFO][4036] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.344 [INFO][4036] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" host="localhost" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.350 [INFO][4036] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.357 [INFO][4036] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.359 [INFO][4036] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.361 [INFO][4036] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.361 [INFO][4036] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" host="localhost" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.364 [INFO][4036] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.369 [INFO][4036] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" host="localhost" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.377 [INFO][4036] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" host="localhost" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.377 [INFO][4036] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" host="localhost" Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.377 [INFO][4036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:59:34.393372 containerd[1437]: 2025-02-13 19:59:34.377 [INFO][4036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" HandleID="k8s-pod-network.29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.393894 containerd[1437]: 2025-02-13 19:59:34.380 [INFO][4014] cni-plugin/k8s.go 386: Populated endpoint ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Namespace="calico-system" Pod="csi-node-driver-7254x" WorkloadEndpoint="localhost-k8s-csi--node--driver--7254x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7254x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"968a0298-0e67-414e-9ce9-912c9a8051e6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7254x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali95db8e0e3f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:34.393894 containerd[1437]: 2025-02-13 19:59:34.380 [INFO][4014] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Namespace="calico-system" Pod="csi-node-driver-7254x" WorkloadEndpoint="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.393894 containerd[1437]: 2025-02-13 19:59:34.380 [INFO][4014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95db8e0e3f1 ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Namespace="calico-system" Pod="csi-node-driver-7254x" WorkloadEndpoint="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.393894 containerd[1437]: 2025-02-13 19:59:34.381 [INFO][4014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Namespace="calico-system" Pod="csi-node-driver-7254x" WorkloadEndpoint="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.393894 containerd[1437]: 2025-02-13 19:59:34.381 [INFO][4014] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Namespace="calico-system" Pod="csi-node-driver-7254x" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7254x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7254x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"968a0298-0e67-414e-9ce9-912c9a8051e6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e", Pod:"csi-node-driver-7254x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali95db8e0e3f1", MAC:"c6:3c:e3:76:53:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:34.393894 containerd[1437]: 2025-02-13 19:59:34.391 [INFO][4014] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e" Namespace="calico-system" Pod="csi-node-driver-7254x" WorkloadEndpoint="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 19:59:34.408664 containerd[1437]: time="2025-02-13T19:59:34.408498351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:34.408664 containerd[1437]: time="2025-02-13T19:59:34.408571679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:34.408664 containerd[1437]: time="2025-02-13T19:59:34.408602363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:34.408785 containerd[1437]: time="2025-02-13T19:59:34.408709655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:34.428774 systemd[1]: Started cri-containerd-29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e.scope - libcontainer container 29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e. 
Feb 13 19:59:34.437809 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:59:34.448041 containerd[1437]: time="2025-02-13T19:59:34.447990200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7254x,Uid:968a0298-0e67-414e-9ce9-912c9a8051e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e\"" Feb 13 19:59:34.896148 containerd[1437]: time="2025-02-13T19:59:34.896077436Z" level=info msg="StopPodSandbox for \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\"" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.939 [INFO][4172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.940 [INFO][4172] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" iface="eth0" netns="/var/run/netns/cni-aca925a3-37a4-90fc-07fa-bce60fee26b9" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.940 [INFO][4172] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" iface="eth0" netns="/var/run/netns/cni-aca925a3-37a4-90fc-07fa-bce60fee26b9" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.940 [INFO][4172] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" iface="eth0" netns="/var/run/netns/cni-aca925a3-37a4-90fc-07fa-bce60fee26b9" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.940 [INFO][4172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.940 [INFO][4172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.958 [INFO][4179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" HandleID="k8s-pod-network.102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.958 [INFO][4179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.958 [INFO][4179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.966 [WARNING][4179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" HandleID="k8s-pod-network.102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.966 [INFO][4179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" HandleID="k8s-pod-network.102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.967 [INFO][4179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:34.970109 containerd[1437]: 2025-02-13 19:59:34.968 [INFO][4172] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 19:59:34.970632 containerd[1437]: time="2025-02-13T19:59:34.970599711Z" level=info msg="TearDown network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\" successfully" Feb 13 19:59:34.970632 containerd[1437]: time="2025-02-13T19:59:34.970628754Z" level=info msg="StopPodSandbox for \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\" returns successfully" Feb 13 19:59:34.971268 containerd[1437]: time="2025-02-13T19:59:34.971235623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797fd6d4c5-52snj,Uid:c4b7000d-254d-41aa-be0a-008e4b815cae,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:59:35.075788 systemd-networkd[1378]: cali6c7df8b2baa: Link UP Feb 13 19:59:35.076386 systemd-networkd[1378]: cali6c7df8b2baa: Gained carrier Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.011 [INFO][4188] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0 calico-apiserver-797fd6d4c5- calico-apiserver c4b7000d-254d-41aa-be0a-008e4b815cae 853 0 2025-02-13 19:59:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:797fd6d4c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-797fd6d4c5-52snj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6c7df8b2baa [] []}} ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-52snj" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.011 [INFO][4188] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-52snj" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.036 [INFO][4201] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" HandleID="k8s-pod-network.317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:35.086660 
containerd[1437]: 2025-02-13 19:59:35.046 [INFO][4201] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" HandleID="k8s-pod-network.317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137b90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-797fd6d4c5-52snj", "timestamp":"2025-02-13 19:59:35.036415534 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.046 [INFO][4201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.046 [INFO][4201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.046 [INFO][4201] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.048 [INFO][4201] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" host="localhost" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.051 [INFO][4201] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.055 [INFO][4201] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.057 [INFO][4201] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.059 [INFO][4201] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.059 [INFO][4201] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" host="localhost" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.060 [INFO][4201] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190 Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.065 [INFO][4201] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" host="localhost" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.070 [INFO][4201] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" host="localhost" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.070 [INFO][4201] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" host="localhost" Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.070 [INFO][4201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:59:35.086660 containerd[1437]: 2025-02-13 19:59:35.070 [INFO][4201] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" HandleID="k8s-pod-network.317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:35.087158 containerd[1437]: 2025-02-13 19:59:35.072 [INFO][4188] cni-plugin/k8s.go 386: Populated endpoint ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-52snj" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0", GenerateName:"calico-apiserver-797fd6d4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4b7000d-254d-41aa-be0a-008e4b815cae", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797fd6d4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-797fd6d4c5-52snj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c7df8b2baa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:35.087158 containerd[1437]: 2025-02-13 19:59:35.073 [INFO][4188] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-52snj" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:35.087158 containerd[1437]: 2025-02-13 19:59:35.073 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c7df8b2baa ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-52snj" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:35.087158 containerd[1437]: 2025-02-13 19:59:35.076 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-52snj" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:35.087158 containerd[1437]: 2025-02-13 19:59:35.076 [INFO][4188] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-52snj" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0", GenerateName:"calico-apiserver-797fd6d4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4b7000d-254d-41aa-be0a-008e4b815cae", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797fd6d4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190", Pod:"calico-apiserver-797fd6d4c5-52snj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c7df8b2baa", MAC:"f2:8e:bb:f9:6d:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:35.087158 containerd[1437]: 2025-02-13 19:59:35.084 [INFO][4188] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190" Namespace="calico-apiserver" Pod="calico-apiserver-797fd6d4c5-52snj" WorkloadEndpoint="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 19:59:35.109322 containerd[1437]: time="2025-02-13T19:59:35.108846030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:35.109322 containerd[1437]: time="2025-02-13T19:59:35.109221191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:35.109322 containerd[1437]: time="2025-02-13T19:59:35.109234512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:35.109480 containerd[1437]: time="2025-02-13T19:59:35.109315401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:35.125758 systemd[1]: Started cri-containerd-317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190.scope - libcontainer container 317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190. Feb 13 19:59:35.135011 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:59:35.150445 systemd[1]: run-netns-cni\x2daca925a3\x2d37a4\x2d90fc\x2d07fa\x2dbce60fee26b9.mount: Deactivated successfully. 
Feb 13 19:59:35.154179 containerd[1437]: time="2025-02-13T19:59:35.154140272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797fd6d4c5-52snj,Uid:c4b7000d-254d-41aa-be0a-008e4b815cae,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190\"" Feb 13 19:59:35.552012 systemd-networkd[1378]: calic44c2f8884b: Gained IPv6LL Feb 13 19:59:36.069465 containerd[1437]: time="2025-02-13T19:59:36.069412746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:36.069864 containerd[1437]: time="2025-02-13T19:59:36.069792547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 19:59:36.070737 containerd[1437]: time="2025-02-13T19:59:36.070699764Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:36.073207 containerd[1437]: time="2025-02-13T19:59:36.073165867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:36.073654 containerd[1437]: time="2025-02-13T19:59:36.073620315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.699411547s" Feb 13 19:59:36.073696 containerd[1437]: time="2025-02-13T19:59:36.073652799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 19:59:36.075278 containerd[1437]: time="2025-02-13T19:59:36.075249209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:59:36.076190 containerd[1437]: time="2025-02-13T19:59:36.076147705Z" level=info msg="CreateContainer within sandbox \"5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:59:36.089068 containerd[1437]: time="2025-02-13T19:59:36.089012477Z" level=info msg="CreateContainer within sandbox \"5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6a9834211f1a908ade8117f5ab6397d27e786b094cd8ec56ef1b8f4fef1b6b23\"" Feb 13 19:59:36.089710 containerd[1437]: time="2025-02-13T19:59:36.089667107Z" level=info msg="StartContainer for \"6a9834211f1a908ade8117f5ab6397d27e786b094cd8ec56ef1b8f4fef1b6b23\"" Feb 13 19:59:36.141757 systemd[1]: Started cri-containerd-6a9834211f1a908ade8117f5ab6397d27e786b094cd8ec56ef1b8f4fef1b6b23.scope - libcontainer container 6a9834211f1a908ade8117f5ab6397d27e786b094cd8ec56ef1b8f4fef1b6b23. Feb 13 19:59:36.145232 systemd[1]: run-containerd-runc-k8s.io-6a9834211f1a908ade8117f5ab6397d27e786b094cd8ec56ef1b8f4fef1b6b23-runc.5jIzRf.mount: Deactivated successfully. 
Feb 13 19:59:36.169482 containerd[1437]: time="2025-02-13T19:59:36.169416373Z" level=info msg="StartContainer for \"6a9834211f1a908ade8117f5ab6397d27e786b094cd8ec56ef1b8f4fef1b6b23\" returns successfully" Feb 13 19:59:36.178419 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:57622.service - OpenSSH per-connection server daemon (10.0.0.1:57622). Feb 13 19:59:36.193812 systemd-networkd[1378]: cali95db8e0e3f1: Gained IPv6LL Feb 13 19:59:36.235700 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 57622 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:36.236286 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:36.243162 systemd-logind[1419]: New session 9 of user core. Feb 13 19:59:36.249603 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:59:36.454631 sshd[4307]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:36.457701 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:57622.service: Deactivated successfully. Feb 13 19:59:36.459331 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:59:36.460232 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:59:36.461615 systemd-logind[1419]: Removed session 9. Feb 13 19:59:36.639824 systemd-networkd[1378]: cali6c7df8b2baa: Gained IPv6LL Feb 13 19:59:36.898831 containerd[1437]: time="2025-02-13T19:59:36.898788325Z" level=info msg="StopPodSandbox for \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\"" Feb 13 19:59:36.904392 containerd[1437]: time="2025-02-13T19:59:36.904336677Z" level=info msg="StopPodSandbox for \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\"" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.953 [INFO][4365] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.954 [INFO][4365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" iface="eth0" netns="/var/run/netns/cni-3ea4ba76-2e89-8903-98b1-38696f653aa1" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.954 [INFO][4365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" iface="eth0" netns="/var/run/netns/cni-3ea4ba76-2e89-8903-98b1-38696f653aa1" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.954 [INFO][4365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" iface="eth0" netns="/var/run/netns/cni-3ea4ba76-2e89-8903-98b1-38696f653aa1" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.954 [INFO][4365] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.954 [INFO][4365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.976 [INFO][4376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" HandleID="k8s-pod-network.9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.976 [INFO][4376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.976 [INFO][4376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.985 [WARNING][4376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" HandleID="k8s-pod-network.9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.985 [INFO][4376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" HandleID="k8s-pod-network.9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.986 [INFO][4376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:36.991303 containerd[1437]: 2025-02-13 19:59:36.988 [INFO][4365] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 19:59:36.994781 containerd[1437]: time="2025-02-13T19:59:36.991380041Z" level=info msg="TearDown network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\" successfully" Feb 13 19:59:36.994781 containerd[1437]: time="2025-02-13T19:59:36.991832209Z" level=info msg="StopPodSandbox for \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\" returns successfully" Feb 13 19:59:36.994887 kubelet[2465]: E0213 19:59:36.993289 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:36.995135 containerd[1437]: time="2025-02-13T19:59:36.994954902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wptkv,Uid:c2596e28-8cb1-4ed6-acba-877fc0496dcf,Namespace:kube-system,Attempt:1,}" Feb 13 19:59:36.995259 systemd[1]: run-netns-cni\x2d3ea4ba76\x2d2e89\x2d8903\x2d98b1\x2d38696f653aa1.mount: Deactivated successfully. 
Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:36.954 [INFO][4351] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:36.954 [INFO][4351] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" iface="eth0" netns="/var/run/netns/cni-db361ce9-d07d-5696-594d-0b4f7afde7cc" Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:36.955 [INFO][4351] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" iface="eth0" netns="/var/run/netns/cni-db361ce9-d07d-5696-594d-0b4f7afde7cc" Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:36.955 [INFO][4351] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" iface="eth0" netns="/var/run/netns/cni-db361ce9-d07d-5696-594d-0b4f7afde7cc" Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:36.955 [INFO][4351] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:36.955 [INFO][4351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:36.980 [INFO][4377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" HandleID="k8s-pod-network.2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:36.981 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:36.987 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:37.011 [WARNING][4377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" HandleID="k8s-pod-network.2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:37.011 [INFO][4377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" HandleID="k8s-pod-network.2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:37.017 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:37.021900 containerd[1437]: 2025-02-13 19:59:37.019 [INFO][4351] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 19:59:37.022461 containerd[1437]: time="2025-02-13T19:59:37.022002651Z" level=info msg="TearDown network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\" successfully" Feb 13 19:59:37.022461 containerd[1437]: time="2025-02-13T19:59:37.022026734Z" level=info msg="StopPodSandbox for \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\" returns successfully" Feb 13 19:59:37.023898 kubelet[2465]: E0213 19:59:37.023872 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:37.024596 containerd[1437]: time="2025-02-13T19:59:37.024559557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qq6z6,Uid:a5074f3c-5b40-44d9-8dc9-780c1febe27b,Namespace:kube-system,Attempt:1,}" Feb 13 19:59:37.024992 systemd[1]: run-netns-cni\x2ddb361ce9\x2dd07d\x2d5696\x2d594d\x2d0b4f7afde7cc.mount: Deactivated successfully. Feb 13 19:59:37.080165 kubelet[2465]: I0213 19:59:37.080110 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-797fd6d4c5-5rnqh" podStartSLOduration=22.379333436 podStartE2EDuration="24.07991215s" podCreationTimestamp="2025-02-13 19:59:13 +0000 UTC" firstStartedPulling="2025-02-13 19:59:34.373723394 +0000 UTC m=+33.554487225" lastFinishedPulling="2025-02-13 19:59:36.074302148 +0000 UTC m=+35.255065939" observedRunningTime="2025-02-13 19:59:37.077958027 +0000 UTC m=+36.258721898" watchObservedRunningTime="2025-02-13 19:59:37.07991215 +0000 UTC m=+36.260675981" Feb 13 19:59:37.192697 systemd-networkd[1378]: cali8e9f0aa3866: Link UP Feb 13 19:59:37.193276 systemd-networkd[1378]: cali8e9f0aa3866: Gained carrier Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.067 [INFO][4394] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--wptkv-eth0 coredns-6f6b679f8f- kube-system c2596e28-8cb1-4ed6-acba-877fc0496dcf 880 0 2025-02-13 19:59:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-wptkv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8e9f0aa3866 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Namespace="kube-system" Pod="coredns-6f6b679f8f-wptkv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wptkv-" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.067 [INFO][4394] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Namespace="kube-system" Pod="coredns-6f6b679f8f-wptkv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.117 [INFO][4419] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" HandleID="k8s-pod-network.c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.137 
[INFO][4419] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" HandleID="k8s-pod-network.c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f5580), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-wptkv", "timestamp":"2025-02-13 19:59:37.117945144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.137 [INFO][4419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.137 [INFO][4419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.137 [INFO][4419] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.139 [INFO][4419] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" host="localhost" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.148 [INFO][4419] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.153 [INFO][4419] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.155 [INFO][4419] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.158 [INFO][4419] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.158 [INFO][4419] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" host="localhost" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.160 [INFO][4419] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95 Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.176 [INFO][4419] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" host="localhost" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.188 [INFO][4419] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" host="localhost" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.188 [INFO][4419] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" host="localhost" Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.188 [INFO][4419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:59:37.207048 containerd[1437]: 2025-02-13 19:59:37.188 [INFO][4419] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" HandleID="k8s-pod-network.c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:37.208332 containerd[1437]: 2025-02-13 19:59:37.190 [INFO][4394] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Namespace="kube-system" Pod="coredns-6f6b679f8f-wptkv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wptkv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2596e28-8cb1-4ed6-acba-877fc0496dcf", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-wptkv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e9f0aa3866", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:37.208332 containerd[1437]: 2025-02-13 19:59:37.190 [INFO][4394] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Namespace="kube-system" Pod="coredns-6f6b679f8f-wptkv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:37.208332 containerd[1437]: 2025-02-13 19:59:37.190 [INFO][4394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e9f0aa3866 ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Namespace="kube-system" Pod="coredns-6f6b679f8f-wptkv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:37.208332 containerd[1437]: 2025-02-13 19:59:37.193 [INFO][4394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Namespace="kube-system" Pod="coredns-6f6b679f8f-wptkv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:37.208332 containerd[1437]: 2025-02-13 19:59:37.194 
[INFO][4394] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Namespace="kube-system" Pod="coredns-6f6b679f8f-wptkv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wptkv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2596e28-8cb1-4ed6-acba-877fc0496dcf", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95", Pod:"coredns-6f6b679f8f-wptkv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e9f0aa3866", MAC:"02:fd:85:80:d3:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:37.208332 containerd[1437]: 2025-02-13 19:59:37.205 [INFO][4394] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95" Namespace="kube-system" Pod="coredns-6f6b679f8f-wptkv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 19:59:37.230101 containerd[1437]: time="2025-02-13T19:59:37.230039234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:37.230238 containerd[1437]: time="2025-02-13T19:59:37.230084439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:37.230238 containerd[1437]: time="2025-02-13T19:59:37.230094840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:37.230238 containerd[1437]: time="2025-02-13T19:59:37.230156447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:37.268740 systemd[1]: Started cri-containerd-c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95.scope - libcontainer container c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95. 
Feb 13 19:59:37.285774 systemd-networkd[1378]: cali39f6b2136e5: Link UP Feb 13 19:59:37.285922 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:59:37.286007 systemd-networkd[1378]: cali39f6b2136e5: Gained carrier Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.106 [INFO][4407] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0 coredns-6f6b679f8f- kube-system a5074f3c-5b40-44d9-8dc9-780c1febe27b 879 0 2025-02-13 19:59:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-qq6z6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali39f6b2136e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Namespace="kube-system" Pod="coredns-6f6b679f8f-qq6z6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qq6z6-" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.106 [INFO][4407] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Namespace="kube-system" Pod="coredns-6f6b679f8f-qq6z6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.155 [INFO][4430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" HandleID="k8s-pod-network.4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.239 [INFO][4430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" HandleID="k8s-pod-network.4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8c20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-qq6z6", "timestamp":"2025-02-13 19:59:37.155574575 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.239 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.240 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.240 [INFO][4430] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.242 [INFO][4430] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" host="localhost" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.248 [INFO][4430] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.254 [INFO][4430] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.256 [INFO][4430] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.259 [INFO][4430] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.259 [INFO][4430] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" host="localhost" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.261 [INFO][4430] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.266 [INFO][4430] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" host="localhost" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.273 [INFO][4430] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" host="localhost" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.273 [INFO][4430] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" host="localhost" Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.273 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
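The IPAM records above show Calico taking the host-wide IPAM lock, confirming the block affinity for 192.168.88.128/26 on host "localhost", and claiming 192.168.88.133 from that block. A minimal Go sketch, using only the standard net/netip package rather than Calico's IPAM code, that checks the claimed address really lies inside that /26 block of 64 addresses:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // affine block from the log
        addr := netip.MustParseAddr("192.168.88.133")       // address claimed in the records above

        fmt.Println("block size:", 1<<(32-block.Bits()))            // 64 addresses (.128 through .191)
        fmt.Println("contains claimed addr:", block.Contains(addr)) // true
    }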
Feb 13 19:59:37.313002 containerd[1437]: 2025-02-13 19:59:37.273 [INFO][4430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" HandleID="k8s-pod-network.4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.313810 containerd[1437]: 2025-02-13 19:59:37.281 [INFO][4407] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Namespace="kube-system" Pod="coredns-6f6b679f8f-qq6z6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a5074f3c-5b40-44d9-8dc9-780c1febe27b", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-qq6z6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39f6b2136e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:37.313810 containerd[1437]: 2025-02-13 19:59:37.281 [INFO][4407] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Namespace="kube-system" Pod="coredns-6f6b679f8f-qq6z6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.313810 containerd[1437]: 2025-02-13 19:59:37.281 [INFO][4407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39f6b2136e5 ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Namespace="kube-system" Pod="coredns-6f6b679f8f-qq6z6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.313810 containerd[1437]: 2025-02-13 19:59:37.284 [INFO][4407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Namespace="kube-system" Pod="coredns-6f6b679f8f-qq6z6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.313810 containerd[1437]: 2025-02-13 19:59:37.285 
[INFO][4407] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Namespace="kube-system" Pod="coredns-6f6b679f8f-qq6z6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a5074f3c-5b40-44d9-8dc9-780c1febe27b", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee", Pod:"coredns-6f6b679f8f-qq6z6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39f6b2136e5", MAC:"92:49:37:e1:65:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:37.313810 containerd[1437]: 2025-02-13 19:59:37.306 [INFO][4407] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee" Namespace="kube-system" Pod="coredns-6f6b679f8f-qq6z6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 19:59:37.335094 containerd[1437]: time="2025-02-13T19:59:37.335041428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wptkv,Uid:c2596e28-8cb1-4ed6-acba-877fc0496dcf,Namespace:kube-system,Attempt:1,} returns sandbox id \"c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95\"" Feb 13 19:59:37.336134 kubelet[2465]: E0213 19:59:37.335900 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:37.341774 containerd[1437]: time="2025-02-13T19:59:37.338197876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:37.341774 containerd[1437]: time="2025-02-13T19:59:37.341688839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:37.341872 containerd[1437]: time="2025-02-13T19:59:37.341769928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:37.341944 containerd[1437]: time="2025-02-13T19:59:37.341911102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:37.344048 containerd[1437]: time="2025-02-13T19:59:37.343936833Z" level=info msg="CreateContainer within sandbox \"c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:59:37.364278 containerd[1437]: time="2025-02-13T19:59:37.364235343Z" level=info msg="CreateContainer within sandbox \"c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb94dfcb709ab88029707d756d81188e16072b6b063f2414aaf5ef512014a40c\"" Feb 13 19:59:37.365033 containerd[1437]: time="2025-02-13T19:59:37.365001342Z" level=info msg="StartContainer for \"bb94dfcb709ab88029707d756d81188e16072b6b063f2414aaf5ef512014a40c\"" Feb 13 19:59:37.367756 systemd[1]: Started cri-containerd-4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee.scope - libcontainer container 4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee. Feb 13 19:59:37.370673 containerd[1437]: time="2025-02-13T19:59:37.370641368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:37.371392 containerd[1437]: time="2025-02-13T19:59:37.371351562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:59:37.372038 containerd[1437]: time="2025-02-13T19:59:37.372003950Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:37.376104 containerd[1437]: time="2025-02-13T19:59:37.376071293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:37.379541 containerd[1437]: time="2025-02-13T19:59:37.379502049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.304217757s" Feb 13 19:59:37.379620 containerd[1437]: time="2025-02-13T19:59:37.379541894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:59:37.380649 containerd[1437]: time="2025-02-13T19:59:37.380556519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:59:37.381565 containerd[1437]: time="2025-02-13T19:59:37.381449132Z" level=info msg="CreateContainer within sandbox \"29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:59:37.383833 systemd-resolved[1316]: Failed to 
determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:59:37.397983 containerd[1437]: time="2025-02-13T19:59:37.397942206Z" level=info msg="CreateContainer within sandbox \"29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4654da1de3f1ccd345b71aa27384d948c905d86ea174edc6e82f6b36fd713610\"" Feb 13 19:59:37.400494 containerd[1437]: time="2025-02-13T19:59:37.400464668Z" level=info msg="StartContainer for \"4654da1de3f1ccd345b71aa27384d948c905d86ea174edc6e82f6b36fd713610\"" Feb 13 19:59:37.402772 systemd[1]: Started cri-containerd-bb94dfcb709ab88029707d756d81188e16072b6b063f2414aaf5ef512014a40c.scope - libcontainer container bb94dfcb709ab88029707d756d81188e16072b6b063f2414aaf5ef512014a40c. Feb 13 19:59:37.409686 containerd[1437]: time="2025-02-13T19:59:37.409645983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qq6z6,Uid:a5074f3c-5b40-44d9-8dc9-780c1febe27b,Namespace:kube-system,Attempt:1,} returns sandbox id \"4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee\"" Feb 13 19:59:37.410505 kubelet[2465]: E0213 19:59:37.410400 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:37.412219 kubelet[2465]: I0213 19:59:37.411656 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:59:37.412291 containerd[1437]: time="2025-02-13T19:59:37.412074155Z" level=info msg="CreateContainer within sandbox \"4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:59:37.413324 kubelet[2465]: E0213 19:59:37.413269 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:37.450776 systemd[1]: Started cri-containerd-4654da1de3f1ccd345b71aa27384d948c905d86ea174edc6e82f6b36fd713610.scope - libcontainer container 4654da1de3f1ccd345b71aa27384d948c905d86ea174edc6e82f6b36fd713610. Feb 13 19:59:37.459200 containerd[1437]: time="2025-02-13T19:59:37.459112044Z" level=info msg="StartContainer for \"bb94dfcb709ab88029707d756d81188e16072b6b063f2414aaf5ef512014a40c\" returns successfully" Feb 13 19:59:37.509135 containerd[1437]: time="2025-02-13T19:59:37.509089479Z" level=info msg="StartContainer for \"4654da1de3f1ccd345b71aa27384d948c905d86ea174edc6e82f6b36fd713610\" returns successfully" Feb 13 19:59:37.522501 containerd[1437]: time="2025-02-13T19:59:37.522344416Z" level=info msg="CreateContainer within sandbox \"4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a534dddd4b6105baf8eca231db1c961bb98e3f0cbe85cd1e2a7560ba90188ef2\"" Feb 13 19:59:37.523469 containerd[1437]: time="2025-02-13T19:59:37.523430169Z" level=info msg="StartContainer for \"a534dddd4b6105baf8eca231db1c961bb98e3f0cbe85cd1e2a7560ba90188ef2\"" Feb 13 19:59:37.563065 systemd[1]: Started cri-containerd-a534dddd4b6105baf8eca231db1c961bb98e3f0cbe85cd1e2a7560ba90188ef2.scope - libcontainer container a534dddd4b6105baf8eca231db1c961bb98e3f0cbe85cd1e2a7560ba90188ef2. 
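The repeated kubelet dns.go warnings above report that the node's resolv.conf lists more nameservers than a pod can use, so only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. Below is a minimal sketch of that truncation, assuming the classic three-nameserver resolv.conf limit; it is an illustration, not the kubelet's actual code:

    package main

    import "fmt"

    // maxNameservers is assumed here to be the classic resolv.conf limit of 3.
    const maxNameservers = 3

    func applyNameserverLimit(servers []string) (applied []string, truncated bool) {
        if len(servers) <= maxNameservers {
            return servers, false
        }
        return servers[:maxNameservers], true
    }

    func main() {
        // Hypothetical host nameserver list with one entry over the limit.
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"}
        applied, truncated := applyNameserverLimit(host)
        if truncated {
            fmt.Println("Nameserver limits exceeded, applied nameserver line is:", applied)
        }
    }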
Feb 13 19:59:37.605156 containerd[1437]: time="2025-02-13T19:59:37.605104498Z" level=info msg="StartContainer for \"a534dddd4b6105baf8eca231db1c961bb98e3f0cbe85cd1e2a7560ba90188ef2\" returns successfully" Feb 13 19:59:37.689688 containerd[1437]: time="2025-02-13T19:59:37.689643525Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:37.691000 containerd[1437]: time="2025-02-13T19:59:37.690971303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:59:37.695590 containerd[1437]: time="2025-02-13T19:59:37.692485701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 311.898299ms" Feb 13 19:59:37.695590 containerd[1437]: time="2025-02-13T19:59:37.692522024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 19:59:37.698699 containerd[1437]: time="2025-02-13T19:59:37.698671984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:59:37.700069 containerd[1437]: time="2025-02-13T19:59:37.700040126Z" level=info msg="CreateContainer within sandbox \"317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:59:37.714756 containerd[1437]: time="2025-02-13T19:59:37.713932770Z" level=info msg="CreateContainer within sandbox \"317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4d2f21c8a6f0bfc2a536f4ea5d5d35d2b1792f2b32b4c0aab4cfb17162bbe84a\"" Feb 13 19:59:37.714853 containerd[1437]: time="2025-02-13T19:59:37.714807461Z" level=info msg="StartContainer for \"4d2f21c8a6f0bfc2a536f4ea5d5d35d2b1792f2b32b4c0aab4cfb17162bbe84a\"" Feb 13 19:59:37.743756 systemd[1]: Started cri-containerd-4d2f21c8a6f0bfc2a536f4ea5d5d35d2b1792f2b32b4c0aab4cfb17162bbe84a.scope - libcontainer container 4d2f21c8a6f0bfc2a536f4ea5d5d35d2b1792f2b32b4c0aab4cfb17162bbe84a. Feb 13 19:59:37.787956 containerd[1437]: time="2025-02-13T19:59:37.787912579Z" level=info msg="StartContainer for \"4d2f21c8a6f0bfc2a536f4ea5d5d35d2b1792f2b32b4c0aab4cfb17162bbe84a\" returns successfully" Feb 13 19:59:37.895903 containerd[1437]: time="2025-02-13T19:59:37.895863479Z" level=info msg="StopPodSandbox for \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\"" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.963 [INFO][4764] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.963 [INFO][4764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" iface="eth0" netns="/var/run/netns/cni-4f68ee83-8c7f-77af-5607-10034f68250e" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.964 [INFO][4764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" iface="eth0" netns="/var/run/netns/cni-4f68ee83-8c7f-77af-5607-10034f68250e" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.965 [INFO][4764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" iface="eth0" netns="/var/run/netns/cni-4f68ee83-8c7f-77af-5607-10034f68250e" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.965 [INFO][4764] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.965 [INFO][4764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.990 [INFO][4772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" HandleID="k8s-pod-network.34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.990 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.990 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.998 [WARNING][4772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" HandleID="k8s-pod-network.34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:37.998 [INFO][4772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" HandleID="k8s-pod-network.34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:38.000 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:38.003389 containerd[1437]: 2025-02-13 19:59:38.001 [INFO][4764] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 19:59:38.004221 containerd[1437]: time="2025-02-13T19:59:38.003471339Z" level=info msg="TearDown network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\" successfully" Feb 13 19:59:38.004221 containerd[1437]: time="2025-02-13T19:59:38.003496742Z" level=info msg="StopPodSandbox for \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\" returns successfully" Feb 13 19:59:38.004221 containerd[1437]: time="2025-02-13T19:59:38.004086762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc48c574-8m478,Uid:bda86074-cc02-4a98-a41e-338364b60d5a,Namespace:calico-system,Attempt:1,}" Feb 13 19:59:38.079405 kubelet[2465]: E0213 19:59:38.079343 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:38.094315 kubelet[2465]: I0213 19:59:38.094241 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qq6z6" podStartSLOduration=32.094226141 podStartE2EDuration="32.094226141s" podCreationTimestamp="2025-02-13 19:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:59:38.093803218 +0000 UTC m=+37.274567049" watchObservedRunningTime="2025-02-13 19:59:38.094226141 +0000 UTC m=+37.274989972" Feb 13 19:59:38.106417 kubelet[2465]: I0213 19:59:38.106037 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:59:38.106417 kubelet[2465]: E0213 19:59:38.106390 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:38.108670 kubelet[2465]: E0213 19:59:38.108313 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:38.109094 kubelet[2465]: I0213 19:59:38.108929 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-797fd6d4c5-52snj" podStartSLOduration=22.566282686 podStartE2EDuration="25.108919031s" podCreationTimestamp="2025-02-13 19:59:13 +0000 UTC" firstStartedPulling="2025-02-13 19:59:35.1558587 +0000 UTC m=+34.336622531" lastFinishedPulling="2025-02-13 19:59:37.698495045 +0000 UTC m=+36.879258876" observedRunningTime="2025-02-13 19:59:38.108638042 +0000 UTC m=+37.289401873" watchObservedRunningTime="2025-02-13 19:59:38.108919031 +0000 UTC m=+37.289682862" Feb 13 19:59:38.138195 kubelet[2465]: I0213 19:59:38.137771 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wptkv" podStartSLOduration=32.137754114 podStartE2EDuration="32.137754114s" podCreationTimestamp="2025-02-13 19:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:59:38.13742072 +0000 UTC m=+37.318184551" watchObservedRunningTime="2025-02-13 19:59:38.137754114 +0000 UTC m=+37.318517945" Feb 13 19:59:38.151529 systemd[1]: run-netns-cni\x2d4f68ee83\x2d8c7f\x2d77af\x2d5607\x2d10034f68250e.mount: Deactivated successfully. 
Feb 13 19:59:38.257484 systemd-networkd[1378]: calicc9259a73af: Link UP Feb 13 19:59:38.258515 systemd-networkd[1378]: calicc9259a73af: Gained carrier Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.057 [INFO][4781] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0 calico-kube-controllers-7bfc48c574- calico-system bda86074-cc02-4a98-a41e-338364b60d5a 912 0 2025-02-13 19:59:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bfc48c574 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7bfc48c574-8m478 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicc9259a73af [] []}} ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc48c574-8m478" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.057 [INFO][4781] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc48c574-8m478" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.090 [INFO][4796] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" HandleID="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.203 [INFO][4796] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" HandleID="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027ad40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7bfc48c574-8m478", "timestamp":"2025-02-13 19:59:38.090615935 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.203 [INFO][4796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.203 [INFO][4796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.203 [INFO][4796] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.206 [INFO][4796] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" host="localhost" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.209 [INFO][4796] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.213 [INFO][4796] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.214 [INFO][4796] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.216 [INFO][4796] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.216 [INFO][4796] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" host="localhost" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.218 [INFO][4796] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.244 [INFO][4796] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" host="localhost" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.251 [INFO][4796] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" host="localhost" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.251 [INFO][4796] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" host="localhost" Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.251 [INFO][4796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
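This second IPAM walk, for calico-kube-controllers-7bfc48c574-8m478, claims the next free address in the same block: 192.168.88.134, directly after the .132 and .133 handed to the two coredns pods earlier. A small standard-library sketch, not Calico code, steps through the block in the same order:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // First of the addresses assigned from 192.168.88.128/26 in the records above.
        addr := netip.MustParseAddr("192.168.88.132")
        pods := []string{
            "coredns-6f6b679f8f-wptkv",
            "coredns-6f6b679f8f-qq6z6",
            "calico-kube-controllers-7bfc48c574-8m478",
        }
        for _, pod := range pods {
            fmt.Println(pod, "->", addr)
            addr = addr.Next() // next candidate address in the block
        }
    }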
Feb 13 19:59:38.278030 containerd[1437]: 2025-02-13 19:59:38.251 [INFO][4796] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" HandleID="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.279739 containerd[1437]: 2025-02-13 19:59:38.254 [INFO][4781] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc48c574-8m478" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0", GenerateName:"calico-kube-controllers-7bfc48c574-", Namespace:"calico-system", SelfLink:"", UID:"bda86074-cc02-4a98-a41e-338364b60d5a", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc48c574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7bfc48c574-8m478", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc9259a73af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:38.279739 containerd[1437]: 2025-02-13 19:59:38.254 [INFO][4781] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc48c574-8m478" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.279739 containerd[1437]: 2025-02-13 19:59:38.254 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc9259a73af ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc48c574-8m478" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.279739 containerd[1437]: 2025-02-13 19:59:38.257 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc48c574-8m478" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.279739 containerd[1437]: 2025-02-13 19:59:38.258 [INFO][4781] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc48c574-8m478" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0", GenerateName:"calico-kube-controllers-7bfc48c574-", Namespace:"calico-system", SelfLink:"", UID:"bda86074-cc02-4a98-a41e-338364b60d5a", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc48c574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e", Pod:"calico-kube-controllers-7bfc48c574-8m478", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc9259a73af", MAC:"46:28:78:93:b6:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:38.279739 containerd[1437]: 2025-02-13 19:59:38.272 [INFO][4781] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc48c574-8m478" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 19:59:38.295057 containerd[1437]: time="2025-02-13T19:59:38.294775355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:38.295057 containerd[1437]: time="2025-02-13T19:59:38.294839601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:38.295057 containerd[1437]: time="2025-02-13T19:59:38.294854963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:38.295057 containerd[1437]: time="2025-02-13T19:59:38.294938611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:38.322797 systemd[1]: Started cri-containerd-20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e.scope - libcontainer container 20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e. 
Feb 13 19:59:38.335774 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:59:38.360438 containerd[1437]: time="2025-02-13T19:59:38.360400289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc48c574-8m478,Uid:bda86074-cc02-4a98-a41e-338364b60d5a,Namespace:calico-system,Attempt:1,} returns sandbox id \"20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e\"" Feb 13 19:59:38.879701 systemd-networkd[1378]: cali8e9f0aa3866: Gained IPv6LL Feb 13 19:59:38.928641 containerd[1437]: time="2025-02-13T19:59:38.928539092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:38.929240 containerd[1437]: time="2025-02-13T19:59:38.929213241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:59:38.932096 containerd[1437]: time="2025-02-13T19:59:38.932039327Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:38.934639 containerd[1437]: time="2025-02-13T19:59:38.934602707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:38.935352 containerd[1437]: time="2025-02-13T19:59:38.935317060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.236610633s" Feb 13 19:59:38.935406 containerd[1437]: time="2025-02-13T19:59:38.935360064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:59:38.937036 containerd[1437]: time="2025-02-13T19:59:38.936450815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:59:38.939447 containerd[1437]: time="2025-02-13T19:59:38.939099323Z" level=info msg="CreateContainer within sandbox \"29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:59:38.963356 containerd[1437]: time="2025-02-13T19:59:38.963305977Z" level=info msg="CreateContainer within sandbox \"29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"96ca7f7433abb4419b1ed1f2b14da0af34617eb7b1cd17e1cde6a1a028545f7b\"" Feb 13 19:59:38.963879 containerd[1437]: time="2025-02-13T19:59:38.963813069Z" level=info msg="StartContainer for \"96ca7f7433abb4419b1ed1f2b14da0af34617eb7b1cd17e1cde6a1a028545f7b\"" Feb 13 19:59:39.000555 systemd[1]: Started cri-containerd-96ca7f7433abb4419b1ed1f2b14da0af34617eb7b1cd17e1cde6a1a028545f7b.scope - libcontainer container 96ca7f7433abb4419b1ed1f2b14da0af34617eb7b1cd17e1cde6a1a028545f7b. 
Feb 13 19:59:39.054577 containerd[1437]: time="2025-02-13T19:59:39.054536821Z" level=info msg="StartContainer for \"96ca7f7433abb4419b1ed1f2b14da0af34617eb7b1cd17e1cde6a1a028545f7b\" returns successfully" Feb 13 19:59:39.071772 systemd-networkd[1378]: cali39f6b2136e5: Gained IPv6LL Feb 13 19:59:39.111285 kubelet[2465]: E0213 19:59:39.111257 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:39.111626 kubelet[2465]: E0213 19:59:39.111553 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:39.972179 kubelet[2465]: I0213 19:59:39.972071 2465 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:59:39.973936 kubelet[2465]: I0213 19:59:39.973648 2465 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:59:40.115331 kubelet[2465]: E0213 19:59:40.114975 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:40.288843 systemd-networkd[1378]: calicc9259a73af: Gained IPv6LL Feb 13 19:59:40.485226 containerd[1437]: time="2025-02-13T19:59:40.485176533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:40.486278 containerd[1437]: time="2025-02-13T19:59:40.485945928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 19:59:40.487436 containerd[1437]: time="2025-02-13T19:59:40.487005390Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:40.489621 containerd[1437]: time="2025-02-13T19:59:40.489068110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:40.490395 containerd[1437]: time="2025-02-13T19:59:40.490364115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.553882737s" Feb 13 19:59:40.490573 containerd[1437]: time="2025-02-13T19:59:40.490462405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 19:59:40.504084 containerd[1437]: time="2025-02-13T19:59:40.503966231Z" level=info msg="CreateContainer within sandbox \"20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:59:40.515624 containerd[1437]: 
time="2025-02-13T19:59:40.515519069Z" level=info msg="CreateContainer within sandbox \"20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd\"" Feb 13 19:59:40.516150 containerd[1437]: time="2025-02-13T19:59:40.516116327Z" level=info msg="StartContainer for \"c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd\"" Feb 13 19:59:40.547800 systemd[1]: Started cri-containerd-c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd.scope - libcontainer container c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd. Feb 13 19:59:40.580761 containerd[1437]: time="2025-02-13T19:59:40.580712137Z" level=info msg="StartContainer for \"c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd\" returns successfully" Feb 13 19:59:41.129225 kubelet[2465]: I0213 19:59:41.128781 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bfc48c574-8m478" podStartSLOduration=25.99923584 podStartE2EDuration="28.128763819s" podCreationTimestamp="2025-02-13 19:59:13 +0000 UTC" firstStartedPulling="2025-02-13 19:59:38.361630653 +0000 UTC m=+37.542394484" lastFinishedPulling="2025-02-13 19:59:40.491158632 +0000 UTC m=+39.671922463" observedRunningTime="2025-02-13 19:59:41.127921259 +0000 UTC m=+40.308685090" watchObservedRunningTime="2025-02-13 19:59:41.128763819 +0000 UTC m=+40.309527650" Feb 13 19:59:41.129995 kubelet[2465]: I0213 19:59:41.129928 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7254x" podStartSLOduration=23.642799022 podStartE2EDuration="28.129914888s" podCreationTimestamp="2025-02-13 19:59:13 +0000 UTC" firstStartedPulling="2025-02-13 19:59:34.449111286 +0000 UTC m=+33.629875077" lastFinishedPulling="2025-02-13 19:59:38.936227112 +0000 UTC m=+38.116990943" observedRunningTime="2025-02-13 19:59:39.121183939 +0000 UTC m=+38.301947770" watchObservedRunningTime="2025-02-13 19:59:41.129914888 +0000 UTC m=+40.310678719" Feb 13 19:59:41.468429 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:57630.service - OpenSSH per-connection server daemon (10.0.0.1:57630). Feb 13 19:59:41.521295 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 57630 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:41.522896 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:41.526703 systemd-logind[1419]: New session 10 of user core. Feb 13 19:59:41.538755 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:59:41.733387 sshd[4982]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:41.745932 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:57630.service: Deactivated successfully. Feb 13 19:59:41.747452 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:59:41.748206 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:59:41.754838 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:57642.service - OpenSSH per-connection server daemon (10.0.0.1:57642). Feb 13 19:59:41.755723 systemd-logind[1419]: Removed session 10. 
Feb 13 19:59:41.786861 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 57642 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:41.788142 sshd[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:41.792330 systemd-logind[1419]: New session 11 of user core. Feb 13 19:59:41.803740 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:59:42.010012 sshd[5006]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:42.019259 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:57642.service: Deactivated successfully. Feb 13 19:59:42.022990 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:59:42.025679 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:59:42.037671 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:57654.service - OpenSSH per-connection server daemon (10.0.0.1:57654). Feb 13 19:59:42.038958 systemd-logind[1419]: Removed session 11. Feb 13 19:59:42.072160 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 57654 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:42.073518 sshd[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:42.078430 systemd-logind[1419]: New session 12 of user core. Feb 13 19:59:42.091775 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:59:42.276004 sshd[5018]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:42.279580 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:57654.service: Deactivated successfully. Feb 13 19:59:42.283726 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:59:42.286166 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:59:42.287956 systemd-logind[1419]: Removed session 12. Feb 13 19:59:47.288262 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:51762.service - OpenSSH per-connection server daemon (10.0.0.1:51762). Feb 13 19:59:47.323844 sshd[5039]: Accepted publickey for core from 10.0.0.1 port 51762 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:47.325016 sshd[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:47.328825 systemd-logind[1419]: New session 13 of user core. Feb 13 19:59:47.336824 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:59:47.495182 sshd[5039]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:47.499921 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:59:47.500211 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:51762.service: Deactivated successfully. Feb 13 19:59:47.502550 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:59:47.505714 systemd-logind[1419]: Removed session 13. Feb 13 19:59:52.505201 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:53500.service - OpenSSH per-connection server daemon (10.0.0.1:53500). Feb 13 19:59:52.546724 sshd[5082]: Accepted publickey for core from 10.0.0.1 port 53500 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:52.547986 sshd[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:52.551665 systemd-logind[1419]: New session 14 of user core. Feb 13 19:59:52.562735 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 19:59:52.735799 sshd[5082]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:52.748139 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:53500.service: Deactivated successfully. Feb 13 19:59:52.751029 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:59:52.752187 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:59:52.759892 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:53504.service - OpenSSH per-connection server daemon (10.0.0.1:53504). Feb 13 19:59:52.761392 systemd-logind[1419]: Removed session 14. Feb 13 19:59:52.792704 sshd[5096]: Accepted publickey for core from 10.0.0.1 port 53504 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:52.793975 sshd[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:52.798383 systemd-logind[1419]: New session 15 of user core. Feb 13 19:59:52.807718 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:59:53.008440 sshd[5096]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:53.018207 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:53504.service: Deactivated successfully. Feb 13 19:59:53.019906 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:59:53.021779 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:59:53.026919 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:53506.service - OpenSSH per-connection server daemon (10.0.0.1:53506). Feb 13 19:59:53.028341 systemd-logind[1419]: Removed session 15. Feb 13 19:59:53.063501 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 53506 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:53.064778 sshd[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:53.069088 systemd-logind[1419]: New session 16 of user core. Feb 13 19:59:53.080806 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:59:54.394639 sshd[5109]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:54.404335 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:53506.service: Deactivated successfully. Feb 13 19:59:54.408797 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:59:54.412859 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:59:54.421262 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:53516.service - OpenSSH per-connection server daemon (10.0.0.1:53516). Feb 13 19:59:54.422192 systemd-logind[1419]: Removed session 16. Feb 13 19:59:54.454870 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 53516 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:54.456079 sshd[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:54.459759 systemd-logind[1419]: New session 17 of user core. Feb 13 19:59:54.471739 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:59:54.813553 sshd[5129]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:54.823990 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:53516.service: Deactivated successfully. Feb 13 19:59:54.826217 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:59:54.828370 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:59:54.830315 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:53530.service - OpenSSH per-connection server daemon (10.0.0.1:53530). 
Feb 13 19:59:54.832219 systemd-logind[1419]: Removed session 17. Feb 13 19:59:54.867334 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 53530 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:59:54.868550 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:59:54.872548 systemd-logind[1419]: New session 18 of user core. Feb 13 19:59:54.883733 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:59:55.032795 sshd[5142]: pam_unix(sshd:session): session closed for user core Feb 13 19:59:55.035296 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:53530.service: Deactivated successfully. Feb 13 19:59:55.037054 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:59:55.038356 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:59:55.039892 systemd-logind[1419]: Removed session 18. Feb 13 20:00:00.043151 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:53536.service - OpenSSH per-connection server daemon (10.0.0.1:53536). Feb 13 20:00:00.078309 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 53536 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:00.079554 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:00.083186 systemd-logind[1419]: New session 19 of user core. Feb 13 20:00:00.092705 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:00:00.215654 sshd[5161]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:00.218773 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:53536.service: Deactivated successfully. Feb 13 20:00:00.220466 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:00:00.221656 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:00:00.222525 systemd-logind[1419]: Removed session 19. Feb 13 20:00:00.899995 containerd[1437]: time="2025-02-13T20:00:00.899921042Z" level=info msg="StopPodSandbox for \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\"" Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.934 [WARNING][5192] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0", GenerateName:"calico-kube-controllers-7bfc48c574-", Namespace:"calico-system", SelfLink:"", UID:"bda86074-cc02-4a98-a41e-338364b60d5a", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc48c574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e", Pod:"calico-kube-controllers-7bfc48c574-8m478", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc9259a73af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.935 [INFO][5192] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.935 [INFO][5192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" iface="eth0" netns="" Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.935 [INFO][5192] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.935 [INFO][5192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.952 [INFO][5199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" HandleID="k8s-pod-network.34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.953 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.953 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.960 [WARNING][5199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" HandleID="k8s-pod-network.34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.960 [INFO][5199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" HandleID="k8s-pod-network.34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.962 [INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:00.964949 containerd[1437]: 2025-02-13 20:00:00.963 [INFO][5192] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 20:00:00.965345 containerd[1437]: time="2025-02-13T20:00:00.964979793Z" level=info msg="TearDown network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\" successfully" Feb 13 20:00:00.965345 containerd[1437]: time="2025-02-13T20:00:00.965004275Z" level=info msg="StopPodSandbox for \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\" returns successfully" Feb 13 20:00:00.965490 containerd[1437]: time="2025-02-13T20:00:00.965457747Z" level=info msg="RemovePodSandbox for \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\"" Feb 13 20:00:00.968473 containerd[1437]: time="2025-02-13T20:00:00.968431523Z" level=info msg="Forcibly stopping sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\"" Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.000 [WARNING][5221] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0", GenerateName:"calico-kube-controllers-7bfc48c574-", Namespace:"calico-system", SelfLink:"", UID:"bda86074-cc02-4a98-a41e-338364b60d5a", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc48c574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e", Pod:"calico-kube-controllers-7bfc48c574-8m478", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc9259a73af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.000 [INFO][5221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.000 [INFO][5221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" iface="eth0" netns="" Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.000 [INFO][5221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.000 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.017 [INFO][5229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" HandleID="k8s-pod-network.34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.018 [INFO][5229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.018 [INFO][5229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.025 [WARNING][5229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" HandleID="k8s-pod-network.34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.025 [INFO][5229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" HandleID="k8s-pod-network.34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.027 [INFO][5229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.029893 containerd[1437]: 2025-02-13 20:00:01.028 [INFO][5221] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba" Feb 13 20:00:01.030277 containerd[1437]: time="2025-02-13T20:00:01.029927439Z" level=info msg="TearDown network for sandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\" successfully" Feb 13 20:00:01.090475 containerd[1437]: time="2025-02-13T20:00:01.090412383Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:00:01.090609 containerd[1437]: time="2025-02-13T20:00:01.090515311Z" level=info msg="RemovePodSandbox \"34fe52d9b8084a04512957b3c3e1b40550d29f03c27d2471d684f2946d6d3cba\" returns successfully" Feb 13 20:00:01.091057 containerd[1437]: time="2025-02-13T20:00:01.091032228Z" level=info msg="StopPodSandbox for \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\"" Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.121 [WARNING][5252] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7254x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"968a0298-0e67-414e-9ce9-912c9a8051e6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e", Pod:"csi-node-driver-7254x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali95db8e0e3f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.122 [INFO][5252] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.122 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" iface="eth0" netns="" Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.122 [INFO][5252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.122 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.139 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" HandleID="k8s-pod-network.82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.139 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.139 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.147 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" HandleID="k8s-pod-network.82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.147 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" HandleID="k8s-pod-network.82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.148 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.151643 containerd[1437]: 2025-02-13 20:00:01.150 [INFO][5252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 20:00:01.151643 containerd[1437]: time="2025-02-13T20:00:01.151600018Z" level=info msg="TearDown network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\" successfully" Feb 13 20:00:01.151643 containerd[1437]: time="2025-02-13T20:00:01.151625420Z" level=info msg="StopPodSandbox for \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\" returns successfully" Feb 13 20:00:01.152076 containerd[1437]: time="2025-02-13T20:00:01.152034690Z" level=info msg="RemovePodSandbox for \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\"" Feb 13 20:00:01.152076 containerd[1437]: time="2025-02-13T20:00:01.152060692Z" level=info msg="Forcibly stopping sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\"" Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.184 [WARNING][5282] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7254x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"968a0298-0e67-414e-9ce9-912c9a8051e6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29631b1ad78cb4f77cd2e1f31fdb350e29a260ef075a6f5aa88e0c58ff48e42e", Pod:"csi-node-driver-7254x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali95db8e0e3f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.184 [INFO][5282] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.184 [INFO][5282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" iface="eth0" netns="" Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.184 [INFO][5282] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.184 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.201 [INFO][5289] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" HandleID="k8s-pod-network.82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.201 [INFO][5289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.201 [INFO][5289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.209 [WARNING][5289] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" HandleID="k8s-pod-network.82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.209 [INFO][5289] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" HandleID="k8s-pod-network.82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Workload="localhost-k8s-csi--node--driver--7254x-eth0" Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.210 [INFO][5289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.213091 containerd[1437]: 2025-02-13 20:00:01.211 [INFO][5282] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3" Feb 13 20:00:01.213504 containerd[1437]: time="2025-02-13T20:00:01.213121677Z" level=info msg="TearDown network for sandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\" successfully" Feb 13 20:00:01.215709 containerd[1437]: time="2025-02-13T20:00:01.215673621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:00:01.215762 containerd[1437]: time="2025-02-13T20:00:01.215747706Z" level=info msg="RemovePodSandbox \"82e8f346aa25374924968bd6179ca8ee4c1a7a2d8e69caab32d17927620e1fd3\" returns successfully" Feb 13 20:00:01.216422 containerd[1437]: time="2025-02-13T20:00:01.216175017Z" level=info msg="StopPodSandbox for \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\"" Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.246 [WARNING][5312] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0", GenerateName:"calico-apiserver-797fd6d4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"77cab5ba-c279-4f9c-8d8d-a9a61221294c", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797fd6d4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280", Pod:"calico-apiserver-797fd6d4c5-5rnqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic44c2f8884b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.246 [INFO][5312] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.246 [INFO][5312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" iface="eth0" netns="" Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.246 [INFO][5312] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.246 [INFO][5312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.263 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" HandleID="k8s-pod-network.e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.264 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.264 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.271 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" HandleID="k8s-pod-network.e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.271 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" HandleID="k8s-pod-network.e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.272 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.275620 containerd[1437]: 2025-02-13 20:00:01.274 [INFO][5312] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 20:00:01.276733 containerd[1437]: time="2025-02-13T20:00:01.275659249Z" level=info msg="TearDown network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\" successfully" Feb 13 20:00:01.276733 containerd[1437]: time="2025-02-13T20:00:01.275683291Z" level=info msg="StopPodSandbox for \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\" returns successfully" Feb 13 20:00:01.276733 containerd[1437]: time="2025-02-13T20:00:01.276190368Z" level=info msg="RemovePodSandbox for \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\"" Feb 13 20:00:01.276733 containerd[1437]: time="2025-02-13T20:00:01.276222330Z" level=info msg="Forcibly stopping sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\"" Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.307 [WARNING][5344] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0", GenerateName:"calico-apiserver-797fd6d4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"77cab5ba-c279-4f9c-8d8d-a9a61221294c", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797fd6d4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e6223e2fa2464051c6eba14f96c0d8b5f37ccb51cce1a873108b1f57143a280", Pod:"calico-apiserver-797fd6d4c5-5rnqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic44c2f8884b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.307 [INFO][5344] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.307 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" iface="eth0" netns="" Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.307 [INFO][5344] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.307 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.324 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" HandleID="k8s-pod-network.e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.324 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.324 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.332 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" HandleID="k8s-pod-network.e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.332 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" HandleID="k8s-pod-network.e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--5rnqh-eth0" Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.333 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.336503 containerd[1437]: 2025-02-13 20:00:01.335 [INFO][5344] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc" Feb 13 20:00:01.336897 containerd[1437]: time="2025-02-13T20:00:01.336609267Z" level=info msg="TearDown network for sandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\" successfully" Feb 13 20:00:01.340443 containerd[1437]: time="2025-02-13T20:00:01.340409820Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:00:01.340553 containerd[1437]: time="2025-02-13T20:00:01.340467024Z" level=info msg="RemovePodSandbox \"e9d6e390bc22809abb17f847c41fbdae051af67a41e76520b5ec092cded447bc\" returns successfully" Feb 13 20:00:01.341219 containerd[1437]: time="2025-02-13T20:00:01.340939258Z" level=info msg="StopPodSandbox for \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\"" Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.372 [WARNING][5373] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0", GenerateName:"calico-apiserver-797fd6d4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4b7000d-254d-41aa-be0a-008e4b815cae", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797fd6d4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190", Pod:"calico-apiserver-797fd6d4c5-52snj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c7df8b2baa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.372 [INFO][5373] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.372 [INFO][5373] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" iface="eth0" netns="" Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.372 [INFO][5373] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.372 [INFO][5373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.390 [INFO][5380] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" HandleID="k8s-pod-network.102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.390 [INFO][5380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.390 [INFO][5380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.398 [WARNING][5380] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" HandleID="k8s-pod-network.102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.398 [INFO][5380] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" HandleID="k8s-pod-network.102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.399 [INFO][5380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.402699 containerd[1437]: 2025-02-13 20:00:01.401 [INFO][5373] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 20:00:01.403351 containerd[1437]: time="2025-02-13T20:00:01.403206331Z" level=info msg="TearDown network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\" successfully" Feb 13 20:00:01.403351 containerd[1437]: time="2025-02-13T20:00:01.403242493Z" level=info msg="StopPodSandbox for \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\" returns successfully" Feb 13 20:00:01.403793 containerd[1437]: time="2025-02-13T20:00:01.403714447Z" level=info msg="RemovePodSandbox for \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\"" Feb 13 20:00:01.403793 containerd[1437]: time="2025-02-13T20:00:01.403742289Z" level=info msg="Forcibly stopping sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\"" Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.435 [WARNING][5403] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0", GenerateName:"calico-apiserver-797fd6d4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4b7000d-254d-41aa-be0a-008e4b815cae", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797fd6d4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"317cafe87261a9b04b236c3c3ceebdde6ee35546ee6b344f01ccef849bd97190", Pod:"calico-apiserver-797fd6d4c5-52snj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c7df8b2baa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.435 [INFO][5403] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.435 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" iface="eth0" netns="" Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.435 [INFO][5403] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.435 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.452 [INFO][5412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" HandleID="k8s-pod-network.102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.452 [INFO][5412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.452 [INFO][5412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.460 [WARNING][5412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" HandleID="k8s-pod-network.102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.460 [INFO][5412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" HandleID="k8s-pod-network.102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Workload="localhost-k8s-calico--apiserver--797fd6d4c5--52snj-eth0" Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.461 [INFO][5412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.464637 containerd[1437]: 2025-02-13 20:00:01.463 [INFO][5403] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7" Feb 13 20:00:01.465024 containerd[1437]: time="2025-02-13T20:00:01.464652464Z" level=info msg="TearDown network for sandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\" successfully" Feb 13 20:00:01.467243 containerd[1437]: time="2025-02-13T20:00:01.467207488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:00:01.467318 containerd[1437]: time="2025-02-13T20:00:01.467270652Z" level=info msg="RemovePodSandbox \"102f1d29810c34b56e860df2278f1f97c9c0da48917f6fbca5bd7f915fa3b2d7\" returns successfully" Feb 13 20:00:01.467722 containerd[1437]: time="2025-02-13T20:00:01.467697163Z" level=info msg="StopPodSandbox for \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\"" Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.499 [WARNING][5435] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wptkv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2596e28-8cb1-4ed6-acba-877fc0496dcf", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95", Pod:"coredns-6f6b679f8f-wptkv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e9f0aa3866", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.499 [INFO][5435] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.499 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" iface="eth0" netns="" Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.499 [INFO][5435] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.499 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.516 [INFO][5443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" HandleID="k8s-pod-network.9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.516 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.516 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.524 [WARNING][5443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" HandleID="k8s-pod-network.9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.524 [INFO][5443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" HandleID="k8s-pod-network.9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.526 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.528943 containerd[1437]: 2025-02-13 20:00:01.527 [INFO][5435] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 20:00:01.529336 containerd[1437]: time="2025-02-13T20:00:01.528979845Z" level=info msg="TearDown network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\" successfully" Feb 13 20:00:01.529336 containerd[1437]: time="2025-02-13T20:00:01.529003807Z" level=info msg="StopPodSandbox for \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\" returns successfully" Feb 13 20:00:01.529473 containerd[1437]: time="2025-02-13T20:00:01.529443038Z" level=info msg="RemovePodSandbox for \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\"" Feb 13 20:00:01.529510 containerd[1437]: time="2025-02-13T20:00:01.529474040Z" level=info msg="Forcibly stopping sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\"" Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.560 [WARNING][5467] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wptkv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2596e28-8cb1-4ed6-acba-877fc0496dcf", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6408ca276988086673aa6797f6904b84de018f55827d511c29ba52fb9e65c95", Pod:"coredns-6f6b679f8f-wptkv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e9f0aa3866", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.561 [INFO][5467] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.561 [INFO][5467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" iface="eth0" netns="" Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.561 [INFO][5467] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.561 [INFO][5467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.578 [INFO][5474] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" HandleID="k8s-pod-network.9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.578 [INFO][5474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.578 [INFO][5474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.586 [WARNING][5474] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" HandleID="k8s-pod-network.9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.586 [INFO][5474] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" HandleID="k8s-pod-network.9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Workload="localhost-k8s-coredns--6f6b679f8f--wptkv-eth0" Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.587 [INFO][5474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.590256 containerd[1437]: 2025-02-13 20:00:01.589 [INFO][5467] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a" Feb 13 20:00:01.590685 containerd[1437]: time="2025-02-13T20:00:01.590303610Z" level=info msg="TearDown network for sandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\" successfully" Feb 13 20:00:01.592961 containerd[1437]: time="2025-02-13T20:00:01.592929038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:00:01.593036 containerd[1437]: time="2025-02-13T20:00:01.592991363Z" level=info msg="RemovePodSandbox \"9b9e4cbc856759638e424943420cf6059a8dc4cd63537b1152b8497f1006384a\" returns successfully" Feb 13 20:00:01.593605 containerd[1437]: time="2025-02-13T20:00:01.593464277Z" level=info msg="StopPodSandbox for \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\"" Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.625 [WARNING][5496] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a5074f3c-5b40-44d9-8dc9-780c1febe27b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee", Pod:"coredns-6f6b679f8f-qq6z6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39f6b2136e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.625 [INFO][5496] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.625 [INFO][5496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" iface="eth0" netns="" Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.625 [INFO][5496] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.625 [INFO][5496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.643 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" HandleID="k8s-pod-network.2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.643 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.643 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.650 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" HandleID="k8s-pod-network.2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.650 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" HandleID="k8s-pod-network.2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.651 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.654812 containerd[1437]: 2025-02-13 20:00:01.653 [INFO][5496] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 20:00:01.654812 containerd[1437]: time="2025-02-13T20:00:01.654795562Z" level=info msg="TearDown network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\" successfully" Feb 13 20:00:01.655185 containerd[1437]: time="2025-02-13T20:00:01.654819804Z" level=info msg="StopPodSandbox for \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\" returns successfully" Feb 13 20:00:01.656120 containerd[1437]: time="2025-02-13T20:00:01.655907202Z" level=info msg="RemovePodSandbox for \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\"" Feb 13 20:00:01.656152 containerd[1437]: time="2025-02-13T20:00:01.656130138Z" level=info msg="Forcibly stopping sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\"" Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.687 [WARNING][5527] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a5074f3c-5b40-44d9-8dc9-780c1febe27b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bc6976ae87e0436d079f9c63fc5f94a122c0ab02c4a387a394d4c13beeffbee", Pod:"coredns-6f6b679f8f-qq6z6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39f6b2136e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.687 [INFO][5527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.687 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" iface="eth0" netns="" Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.687 [INFO][5527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.687 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.705 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" HandleID="k8s-pod-network.2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.705 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.705 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.713 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" HandleID="k8s-pod-network.2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.713 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" HandleID="k8s-pod-network.2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Workload="localhost-k8s-coredns--6f6b679f8f--qq6z6-eth0" Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.714 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:01.717723 containerd[1437]: 2025-02-13 20:00:01.716 [INFO][5527] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8" Feb 13 20:00:01.718097 containerd[1437]: time="2025-02-13T20:00:01.717780646Z" level=info msg="TearDown network for sandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\" successfully" Feb 13 20:00:01.720542 containerd[1437]: time="2025-02-13T20:00:01.720506362Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:00:01.720600 containerd[1437]: time="2025-02-13T20:00:01.720564206Z" level=info msg="RemovePodSandbox \"2629995f9683ee1ea8e1bf96905dc73cbf3d591640921e6230d6979acb3beee8\" returns successfully" Feb 13 20:00:03.001867 systemd[1]: run-containerd-runc-k8s.io-c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd-runc.upJzEC.mount: Deactivated successfully. Feb 13 20:00:04.830056 containerd[1437]: time="2025-02-13T20:00:04.829886169Z" level=info msg="StopContainer for \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\" with timeout 300 (s)" Feb 13 20:00:04.830921 containerd[1437]: time="2025-02-13T20:00:04.830396657Z" level=info msg="Stop container \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\" with signal terminated" Feb 13 20:00:04.912700 containerd[1437]: time="2025-02-13T20:00:04.912660797Z" level=info msg="StopContainer for \"c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd\" with timeout 30 (s)" Feb 13 20:00:04.914097 containerd[1437]: time="2025-02-13T20:00:04.914037589Z" level=info msg="Stop container \"c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd\" with signal terminated" Feb 13 20:00:04.936946 systemd[1]: cri-containerd-c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd.scope: Deactivated successfully. Feb 13 20:00:04.964384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd-rootfs.mount: Deactivated successfully. 
Feb 13 20:00:04.983126 containerd[1437]: time="2025-02-13T20:00:04.978087845Z" level=info msg="shim disconnected" id=c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd namespace=k8s.io Feb 13 20:00:04.983126 containerd[1437]: time="2025-02-13T20:00:04.982928338Z" level=warning msg="cleaning up after shim disconnected" id=c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd namespace=k8s.io Feb 13 20:00:04.983126 containerd[1437]: time="2025-02-13T20:00:04.982943217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:00:05.013552 containerd[1437]: time="2025-02-13T20:00:05.013510359Z" level=info msg="StopContainer for \"c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd\" returns successfully" Feb 13 20:00:05.014739 containerd[1437]: time="2025-02-13T20:00:05.014715287Z" level=info msg="StopPodSandbox for \"20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e\"" Feb 13 20:00:05.014894 containerd[1437]: time="2025-02-13T20:00:05.014856198Z" level=info msg="Container to stop \"c37fc5b5ec47b087d92d584f106f547ff75cddb592e60be3b0aa8975ab66aafd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:00:05.018608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e-shm.mount: Deactivated successfully. Feb 13 20:00:05.025951 systemd[1]: cri-containerd-20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e.scope: Deactivated successfully. Feb 13 20:00:05.053372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e-rootfs.mount: Deactivated successfully. Feb 13 20:00:05.071037 containerd[1437]: time="2025-02-13T20:00:05.070980393Z" level=info msg="shim disconnected" id=20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e namespace=k8s.io Feb 13 20:00:05.071037 containerd[1437]: time="2025-02-13T20:00:05.071033390Z" level=warning msg="cleaning up after shim disconnected" id=20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e namespace=k8s.io Feb 13 20:00:05.071037 containerd[1437]: time="2025-02-13T20:00:05.071043230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:00:05.153018 systemd-networkd[1378]: calicc9259a73af: Link DOWN Feb 13 20:00:05.153025 systemd-networkd[1378]: calicc9259a73af: Lost carrier Feb 13 20:00:05.173700 kubelet[2465]: I0213 20:00:05.173661 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Feb 13 20:00:05.224708 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:40910.service - OpenSSH per-connection server daemon (10.0.0.1:40910). Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.151 [INFO][5648] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.151 [INFO][5648] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" iface="eth0" netns="/var/run/netns/cni-a4d2a3ad-accf-ff50-7505-587030e81f6d" Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.152 [INFO][5648] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" iface="eth0" netns="/var/run/netns/cni-a4d2a3ad-accf-ff50-7505-587030e81f6d" Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.164 [INFO][5648] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" after=12.715678ms iface="eth0" netns="/var/run/netns/cni-a4d2a3ad-accf-ff50-7505-587030e81f6d" Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.164 [INFO][5648] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.164 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.188 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" HandleID="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.188 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.188 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.228 [INFO][5662] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" HandleID="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.229 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" HandleID="k8s-pod-network.20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Workload="localhost-k8s-calico--kube--controllers--7bfc48c574--8m478-eth0" Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.232 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:05.239850 containerd[1437]: 2025-02-13 20:00:05.236 [INFO][5648] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e" Feb 13 20:00:05.242544 containerd[1437]: time="2025-02-13T20:00:05.242307802Z" level=info msg="TearDown network for sandbox \"20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e\" successfully" Feb 13 20:00:05.242544 containerd[1437]: time="2025-02-13T20:00:05.242542268Z" level=info msg="StopPodSandbox for \"20072b0b7c4a90edd4ccafe86f10b045d2440b6ca7f5982fd9808f0e344ab16e\" returns successfully" Feb 13 20:00:05.243884 systemd[1]: run-netns-cni\x2da4d2a3ad\x2daccf\x2dff50\x2d7505\x2d587030e81f6d.mount: Deactivated successfully. Feb 13 20:00:05.262740 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 40910 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:05.264162 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:05.273773 systemd-logind[1419]: New session 20 of user core. 
Feb 13 20:00:05.278776 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:00:05.387196 kubelet[2465]: I0213 20:00:05.387153 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda86074-cc02-4a98-a41e-338364b60d5a-tigera-ca-bundle\") pod \"bda86074-cc02-4a98-a41e-338364b60d5a\" (UID: \"bda86074-cc02-4a98-a41e-338364b60d5a\") " Feb 13 20:00:05.387324 kubelet[2465]: I0213 20:00:05.387213 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw47f\" (UniqueName: \"kubernetes.io/projected/bda86074-cc02-4a98-a41e-338364b60d5a-kube-api-access-jw47f\") pod \"bda86074-cc02-4a98-a41e-338364b60d5a\" (UID: \"bda86074-cc02-4a98-a41e-338364b60d5a\") " Feb 13 20:00:05.397888 systemd[1]: var-lib-kubelet-pods-bda86074\x2dcc02\x2d4a98\x2da41e\x2d338364b60d5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djw47f.mount: Deactivated successfully. Feb 13 20:00:05.401437 kubelet[2465]: I0213 20:00:05.401343 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda86074-cc02-4a98-a41e-338364b60d5a-kube-api-access-jw47f" (OuterVolumeSpecName: "kube-api-access-jw47f") pod "bda86074-cc02-4a98-a41e-338364b60d5a" (UID: "bda86074-cc02-4a98-a41e-338364b60d5a"). InnerVolumeSpecName "kube-api-access-jw47f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:00:05.401437 kubelet[2465]: I0213 20:00:05.401343 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda86074-cc02-4a98-a41e-338364b60d5a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "bda86074-cc02-4a98-a41e-338364b60d5a" (UID: "bda86074-cc02-4a98-a41e-338364b60d5a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:00:05.410625 sshd[5672]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:05.415369 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:40910.service: Deactivated successfully. Feb 13 20:00:05.419491 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:00:05.421456 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:00:05.422605 systemd-logind[1419]: Removed session 20. Feb 13 20:00:05.488401 kubelet[2465]: I0213 20:00:05.488355 2465 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jw47f\" (UniqueName: \"kubernetes.io/projected/bda86074-cc02-4a98-a41e-338364b60d5a-kube-api-access-jw47f\") on node \"localhost\" DevicePath \"\"" Feb 13 20:00:05.488570 kubelet[2465]: I0213 20:00:05.488549 2465 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda86074-cc02-4a98-a41e-338364b60d5a-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Feb 13 20:00:05.962628 systemd[1]: var-lib-kubelet-pods-bda86074\x2dcc02\x2d4a98\x2da41e\x2d338364b60d5a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Feb 13 20:00:06.179642 systemd[1]: Removed slice kubepods-besteffort-podbda86074_cc02_4a98_a41e_338364b60d5a.slice - libcontainer container kubepods-besteffort-podbda86074_cc02_4a98_a41e_338364b60d5a.slice. 
Feb 13 20:00:06.214794 kubelet[2465]: E0213 20:00:06.212932 2465 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bda86074-cc02-4a98-a41e-338364b60d5a" containerName="calico-kube-controllers" Feb 13 20:00:06.214794 kubelet[2465]: I0213 20:00:06.213001 2465 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda86074-cc02-4a98-a41e-338364b60d5a" containerName="calico-kube-controllers" Feb 13 20:00:06.227027 systemd[1]: Created slice kubepods-besteffort-pod84ad6e0c_7334_4084_832e_ec9bda26d933.slice - libcontainer container kubepods-besteffort-pod84ad6e0c_7334_4084_832e_ec9bda26d933.slice. Feb 13 20:00:06.393744 kubelet[2465]: I0213 20:00:06.393691 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84ad6e0c-7334-4084-832e-ec9bda26d933-tigera-ca-bundle\") pod \"calico-kube-controllers-667d8cbdd6-b7skv\" (UID: \"84ad6e0c-7334-4084-832e-ec9bda26d933\") " pod="calico-system/calico-kube-controllers-667d8cbdd6-b7skv" Feb 13 20:00:06.393744 kubelet[2465]: I0213 20:00:06.393741 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlvfg\" (UniqueName: \"kubernetes.io/projected/84ad6e0c-7334-4084-832e-ec9bda26d933-kube-api-access-zlvfg\") pod \"calico-kube-controllers-667d8cbdd6-b7skv\" (UID: \"84ad6e0c-7334-4084-832e-ec9bda26d933\") " pod="calico-system/calico-kube-controllers-667d8cbdd6-b7skv" Feb 13 20:00:06.533697 containerd[1437]: time="2025-02-13T20:00:06.533396318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667d8cbdd6-b7skv,Uid:84ad6e0c-7334-4084-832e-ec9bda26d933,Namespace:calico-system,Attempt:0,}" Feb 13 20:00:06.638706 systemd-networkd[1378]: cali4011afdb7d6: Link UP Feb 13 20:00:06.638909 systemd-networkd[1378]: cali4011afdb7d6: Gained carrier Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.572 [INFO][5701] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0 calico-kube-controllers-667d8cbdd6- calico-system 84ad6e0c-7334-4084-832e-ec9bda26d933 1234 0 2025-02-13 20:00:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:667d8cbdd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-667d8cbdd6-b7skv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4011afdb7d6 [] []}} ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Namespace="calico-system" Pod="calico-kube-controllers-667d8cbdd6-b7skv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.572 [INFO][5701] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Namespace="calico-system" Pod="calico-kube-controllers-667d8cbdd6-b7skv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.598 [INFO][5715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" HandleID="k8s-pod-network.c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Workload="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.608 [INFO][5715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" HandleID="k8s-pod-network.c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Workload="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ff8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-667d8cbdd6-b7skv", "timestamp":"2025-02-13 20:00:06.598136297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.608 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.608 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.608 [INFO][5715] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.610 [INFO][5715] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" host="localhost" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.615 [INFO][5715] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.619 [INFO][5715] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.620 [INFO][5715] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.623 [INFO][5715] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.623 [INFO][5715] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" host="localhost" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.626 [INFO][5715] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4 Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.629 [INFO][5715] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" host="localhost" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.634 [INFO][5715] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" host="localhost" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.634 [INFO][5715] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.135/26] handle="k8s-pod-network.c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" host="localhost" Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.635 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:00:06.651196 containerd[1437]: 2025-02-13 20:00:06.635 [INFO][5715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" HandleID="k8s-pod-network.c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Workload="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" Feb 13 20:00:06.651723 containerd[1437]: 2025-02-13 20:00:06.637 [INFO][5701] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Namespace="calico-system" Pod="calico-kube-controllers-667d8cbdd6-b7skv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0", GenerateName:"calico-kube-controllers-667d8cbdd6-", Namespace:"calico-system", SelfLink:"", UID:"84ad6e0c-7334-4084-832e-ec9bda26d933", ResourceVersion:"1234", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"667d8cbdd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-667d8cbdd6-b7skv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4011afdb7d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:06.651723 containerd[1437]: 2025-02-13 20:00:06.637 [INFO][5701] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Namespace="calico-system" Pod="calico-kube-controllers-667d8cbdd6-b7skv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" Feb 13 20:00:06.651723 containerd[1437]: 2025-02-13 20:00:06.637 [INFO][5701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4011afdb7d6 ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Namespace="calico-system" Pod="calico-kube-controllers-667d8cbdd6-b7skv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" Feb 13 20:00:06.651723 containerd[1437]: 2025-02-13 20:00:06.639 [INFO][5701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Namespace="calico-system" 
Pod="calico-kube-controllers-667d8cbdd6-b7skv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" Feb 13 20:00:06.651723 containerd[1437]: 2025-02-13 20:00:06.639 [INFO][5701] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Namespace="calico-system" Pod="calico-kube-controllers-667d8cbdd6-b7skv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0", GenerateName:"calico-kube-controllers-667d8cbdd6-", Namespace:"calico-system", SelfLink:"", UID:"84ad6e0c-7334-4084-832e-ec9bda26d933", ResourceVersion:"1234", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"667d8cbdd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4", Pod:"calico-kube-controllers-667d8cbdd6-b7skv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4011afdb7d6", MAC:"b6:3c:ab:01:d0:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:06.651723 containerd[1437]: 2025-02-13 20:00:06.647 [INFO][5701] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4" Namespace="calico-system" Pod="calico-kube-controllers-667d8cbdd6-b7skv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667d8cbdd6--b7skv-eth0" Feb 13 20:00:06.672924 containerd[1437]: time="2025-02-13T20:00:06.671239162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:00:06.672924 containerd[1437]: time="2025-02-13T20:00:06.671288159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:00:06.672924 containerd[1437]: time="2025-02-13T20:00:06.671307078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:00:06.672924 containerd[1437]: time="2025-02-13T20:00:06.671394433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:00:06.710761 systemd[1]: Started cri-containerd-c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4.scope - libcontainer container c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4. 
Feb 13 20:00:06.722852 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:00:06.741561 containerd[1437]: time="2025-02-13T20:00:06.741520427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667d8cbdd6-b7skv,Uid:84ad6e0c-7334-4084-832e-ec9bda26d933,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4\"" Feb 13 20:00:06.749760 containerd[1437]: time="2025-02-13T20:00:06.749725883Z" level=info msg="CreateContainer within sandbox \"c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:00:06.758235 containerd[1437]: time="2025-02-13T20:00:06.758197004Z" level=info msg="CreateContainer within sandbox \"c7b08802f7da1162f1dfb50e0940c1a891c3f8340e0a3153e4f83d7e7069fef4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1bdcb0999f08e4c337b82da807bec0bf3a91800f13f5c8cbebd5e077b36fb915\"" Feb 13 20:00:06.758810 containerd[1437]: time="2025-02-13T20:00:06.758780611Z" level=info msg="StartContainer for \"1bdcb0999f08e4c337b82da807bec0bf3a91800f13f5c8cbebd5e077b36fb915\"" Feb 13 20:00:06.781751 systemd[1]: Started cri-containerd-1bdcb0999f08e4c337b82da807bec0bf3a91800f13f5c8cbebd5e077b36fb915.scope - libcontainer container 1bdcb0999f08e4c337b82da807bec0bf3a91800f13f5c8cbebd5e077b36fb915. Feb 13 20:00:06.810348 containerd[1437]: time="2025-02-13T20:00:06.810306537Z" level=info msg="StartContainer for \"1bdcb0999f08e4c337b82da807bec0bf3a91800f13f5c8cbebd5e077b36fb915\" returns successfully" Feb 13 20:00:06.902831 kubelet[2465]: I0213 20:00:06.902505 2465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda86074-cc02-4a98-a41e-338364b60d5a" path="/var/lib/kubelet/pods/bda86074-cc02-4a98-a41e-338364b60d5a/volumes" Feb 13 20:00:07.190714 kubelet[2465]: I0213 20:00:07.190230 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-667d8cbdd6-b7skv" podStartSLOduration=1.190212112 podStartE2EDuration="1.190212112s" podCreationTimestamp="2025-02-13 20:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:00:07.189198046 +0000 UTC m=+66.369961877" watchObservedRunningTime="2025-02-13 20:00:07.190212112 +0000 UTC m=+66.370975943" Feb 13 20:00:07.808779 systemd-networkd[1378]: cali4011afdb7d6: Gained IPv6LL Feb 13 20:00:08.979129 systemd[1]: cri-containerd-895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e.scope: Deactivated successfully. Feb 13 20:00:09.009747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e-rootfs.mount: Deactivated successfully. 
Feb 13 20:00:09.011361 containerd[1437]: time="2025-02-13T20:00:09.011125292Z" level=info msg="shim disconnected" id=895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e namespace=k8s.io Feb 13 20:00:09.011361 containerd[1437]: time="2025-02-13T20:00:09.011271125Z" level=warning msg="cleaning up after shim disconnected" id=895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e namespace=k8s.io Feb 13 20:00:09.011361 containerd[1437]: time="2025-02-13T20:00:09.011281124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:00:09.038727 containerd[1437]: time="2025-02-13T20:00:09.038681997Z" level=info msg="StopContainer for \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\" returns successfully" Feb 13 20:00:09.040245 containerd[1437]: time="2025-02-13T20:00:09.040212605Z" level=info msg="StopPodSandbox for \"42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2\"" Feb 13 20:00:09.040351 containerd[1437]: time="2025-02-13T20:00:09.040327439Z" level=info msg="Container to stop \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:00:09.042720 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2-shm.mount: Deactivated successfully. Feb 13 20:00:09.049721 systemd[1]: cri-containerd-42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2.scope: Deactivated successfully. Feb 13 20:00:09.078195 containerd[1437]: time="2025-02-13T20:00:09.078072426Z" level=info msg="shim disconnected" id=42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2 namespace=k8s.io Feb 13 20:00:09.078195 containerd[1437]: time="2025-02-13T20:00:09.078124023Z" level=warning msg="cleaning up after shim disconnected" id=42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2 namespace=k8s.io Feb 13 20:00:09.078195 containerd[1437]: time="2025-02-13T20:00:09.078133063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:00:09.081110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2-rootfs.mount: Deactivated successfully. 
Feb 13 20:00:09.096317 containerd[1437]: time="2025-02-13T20:00:09.096239532Z" level=info msg="TearDown network for sandbox \"42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2\" successfully" Feb 13 20:00:09.096317 containerd[1437]: time="2025-02-13T20:00:09.096270291Z" level=info msg="StopPodSandbox for \"42f3a0cf440448f268694f89de8d28227fbc2f4402ddc7abc69bc918502851a2\" returns successfully" Feb 13 20:00:09.191828 kubelet[2465]: I0213 20:00:09.191791 2465 scope.go:117] "RemoveContainer" containerID="895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e" Feb 13 20:00:09.193025 containerd[1437]: time="2025-02-13T20:00:09.192994905Z" level=info msg="RemoveContainer for \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\"" Feb 13 20:00:09.196147 containerd[1437]: time="2025-02-13T20:00:09.196102679Z" level=info msg="RemoveContainer for \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\" returns successfully" Feb 13 20:00:09.196376 kubelet[2465]: I0213 20:00:09.196331 2465 scope.go:117] "RemoveContainer" containerID="895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e" Feb 13 20:00:09.203041 containerd[1437]: time="2025-02-13T20:00:09.202939518Z" level=error msg="ContainerStatus for \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\": not found" Feb 13 20:00:09.205341 kubelet[2465]: E0213 20:00:09.205298 2465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\": not found" containerID="895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e" Feb 13 20:00:09.205423 kubelet[2465]: I0213 20:00:09.205341 2465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e"} err="failed to get container status \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"895a0ca0c257e3ab4dbf33e405a2ef609a76f7f80e331ebb7f7c953ae2204c2e\": not found" Feb 13 20:00:09.213616 kubelet[2465]: I0213 20:00:09.213439 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a79227a-66cb-4f8d-9ffe-99ad9a717416-tigera-ca-bundle\") pod \"5a79227a-66cb-4f8d-9ffe-99ad9a717416\" (UID: \"5a79227a-66cb-4f8d-9ffe-99ad9a717416\") " Feb 13 20:00:09.213616 kubelet[2465]: I0213 20:00:09.213493 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftbr6\" (UniqueName: \"kubernetes.io/projected/5a79227a-66cb-4f8d-9ffe-99ad9a717416-kube-api-access-ftbr6\") pod \"5a79227a-66cb-4f8d-9ffe-99ad9a717416\" (UID: \"5a79227a-66cb-4f8d-9ffe-99ad9a717416\") " Feb 13 20:00:09.213616 kubelet[2465]: I0213 20:00:09.213515 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5a79227a-66cb-4f8d-9ffe-99ad9a717416-typha-certs\") pod \"5a79227a-66cb-4f8d-9ffe-99ad9a717416\" (UID: \"5a79227a-66cb-4f8d-9ffe-99ad9a717416\") " Feb 13 20:00:09.217660 systemd[1]: 
var-lib-kubelet-pods-5a79227a\x2d66cb\x2d4f8d\x2d9ffe\x2d99ad9a717416-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dftbr6.mount: Deactivated successfully. Feb 13 20:00:09.217773 systemd[1]: var-lib-kubelet-pods-5a79227a\x2d66cb\x2d4f8d\x2d9ffe\x2d99ad9a717416-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Feb 13 20:00:09.218459 kubelet[2465]: I0213 20:00:09.218401 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a79227a-66cb-4f8d-9ffe-99ad9a717416-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "5a79227a-66cb-4f8d-9ffe-99ad9a717416" (UID: "5a79227a-66cb-4f8d-9ffe-99ad9a717416"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:00:09.219210 kubelet[2465]: I0213 20:00:09.219038 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a79227a-66cb-4f8d-9ffe-99ad9a717416-kube-api-access-ftbr6" (OuterVolumeSpecName: "kube-api-access-ftbr6") pod "5a79227a-66cb-4f8d-9ffe-99ad9a717416" (UID: "5a79227a-66cb-4f8d-9ffe-99ad9a717416"). InnerVolumeSpecName "kube-api-access-ftbr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:00:09.220242 kubelet[2465]: I0213 20:00:09.219966 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a79227a-66cb-4f8d-9ffe-99ad9a717416-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5a79227a-66cb-4f8d-9ffe-99ad9a717416" (UID: "5a79227a-66cb-4f8d-9ffe-99ad9a717416"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:00:09.221143 systemd[1]: var-lib-kubelet-pods-5a79227a\x2d66cb\x2d4f8d\x2d9ffe\x2d99ad9a717416-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 13 20:00:09.313890 kubelet[2465]: I0213 20:00:09.313826 2465 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5a79227a-66cb-4f8d-9ffe-99ad9a717416-typha-certs\") on node \"localhost\" DevicePath \"\"" Feb 13 20:00:09.313890 kubelet[2465]: I0213 20:00:09.313871 2465 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ftbr6\" (UniqueName: \"kubernetes.io/projected/5a79227a-66cb-4f8d-9ffe-99ad9a717416-kube-api-access-ftbr6\") on node \"localhost\" DevicePath \"\"" Feb 13 20:00:09.313890 kubelet[2465]: I0213 20:00:09.313891 2465 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a79227a-66cb-4f8d-9ffe-99ad9a717416-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Feb 13 20:00:09.492803 systemd[1]: Removed slice kubepods-besteffort-pod5a79227a_66cb_4f8d_9ffe_99ad9a717416.slice - libcontainer container kubepods-besteffort-pod5a79227a_66cb_4f8d_9ffe_99ad9a717416.slice. Feb 13 20:00:10.427369 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:40912.service - OpenSSH per-connection server daemon (10.0.0.1:40912). Feb 13 20:00:10.469223 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 40912 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:10.470542 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:10.477544 systemd-logind[1419]: New session 21 of user core. Feb 13 20:00:10.487716 systemd[1]: Started session-21.scope - Session 21 of User core. 
Feb 13 20:00:10.614308 sshd[6006]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:10.617635 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:40912.service: Deactivated successfully. Feb 13 20:00:10.619529 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:00:10.620134 systemd-logind[1419]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:00:10.621102 systemd-logind[1419]: Removed session 21. Feb 13 20:00:10.898236 kubelet[2465]: I0213 20:00:10.898187 2465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a79227a-66cb-4f8d-9ffe-99ad9a717416" path="/var/lib/kubelet/pods/5a79227a-66cb-4f8d-9ffe-99ad9a717416/volumes" Feb 13 20:00:11.372252 kubelet[2465]: I0213 20:00:11.372214 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:00:12.896568 kubelet[2465]: E0213 20:00:12.896073 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:00:15.625759 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:56910.service - OpenSSH per-connection server daemon (10.0.0.1:56910). Feb 13 20:00:15.661478 sshd[6143]: Accepted publickey for core from 10.0.0.1 port 56910 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:15.662907 sshd[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:15.666701 systemd-logind[1419]: New session 22 of user core. Feb 13 20:00:15.675760 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:00:15.791195 sshd[6143]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:15.794412 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:56910.service: Deactivated successfully. Feb 13 20:00:15.796926 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:00:15.797911 systemd-logind[1419]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:00:15.798930 systemd-logind[1419]: Removed session 22.