Jan 30 12:55:31.993099 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 12:55:31.993137 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 12:55:31.993153 kernel: KASLR enabled
Jan 30 12:55:31.993159 kernel: efi: EFI v2.7 by EDK II
Jan 30 12:55:31.993166 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 30 12:55:31.993172 kernel: random: crng init done
Jan 30 12:55:31.993179 kernel: ACPI: Early table checksum verification disabled
Jan 30 12:55:31.993185 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 30 12:55:31.993192 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 12:55:31.993200 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:31.993221 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:31.993228 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:31.993234 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:31.993240 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:31.993248 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:31.993258 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:31.993265 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:31.993272 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:55:31.993294 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 30 12:55:31.993302 kernel: NUMA: Failed to initialise from firmware
Jan 30 12:55:31.993309 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 12:55:31.993316 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 30 12:55:31.993322 kernel: Zone ranges:
Jan 30 12:55:31.993329 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 12:55:31.993335 kernel: DMA32 empty
Jan 30 12:55:31.993345 kernel: Normal empty
Jan 30 12:55:31.993352 kernel: Movable zone start for each node
Jan 30 12:55:31.993372 kernel: Early memory node ranges
Jan 30 12:55:31.993379 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 30 12:55:31.993386 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 30 12:55:31.993393 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 30 12:55:31.993399 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 30 12:55:31.993424 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 30 12:55:31.993432 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 30 12:55:31.993439 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 30 12:55:31.993446 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 12:55:31.993452 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 30 12:55:31.993462 kernel: psci: probing for conduit method from ACPI.
Jan 30 12:55:31.993468 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 12:55:31.993475 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 12:55:31.993501 kernel: psci: Trusted OS migration not required
Jan 30 12:55:31.993509 kernel: psci: SMC Calling Convention v1.1
Jan 30 12:55:31.993516 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 12:55:31.993526 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 12:55:31.993533 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 12:55:31.993541 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 30 12:55:31.993548 kernel: Detected PIPT I-cache on CPU0
Jan 30 12:55:31.993555 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 12:55:31.993578 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 12:55:31.993586 kernel: CPU features: detected: Spectre-v4
Jan 30 12:55:31.993593 kernel: CPU features: detected: Spectre-BHB
Jan 30 12:55:31.993600 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 12:55:31.993607 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 12:55:31.993617 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 12:55:31.993624 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 12:55:31.993631 kernel: alternatives: applying boot alternatives
Jan 30 12:55:31.993639 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 12:55:31.993647 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 12:55:31.993654 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 12:55:31.993661 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 12:55:31.993668 kernel: Fallback order for Node 0: 0
Jan 30 12:55:31.993675 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 30 12:55:31.993682 kernel: Policy zone: DMA
Jan 30 12:55:31.993689 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 12:55:31.993697 kernel: software IO TLB: area num 4.
Jan 30 12:55:31.993704 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 30 12:55:31.993712 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 30 12:55:31.993719 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 12:55:31.993727 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 12:55:31.993734 kernel: rcu: RCU event tracing is enabled.
Jan 30 12:55:31.993742 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 12:55:31.993749 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 12:55:31.993756 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 12:55:31.993763 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 12:55:31.993770 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 12:55:31.993777 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 12:55:31.993786 kernel: GICv3: 256 SPIs implemented
Jan 30 12:55:31.993793 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 12:55:31.993801 kernel: Root IRQ handler: gic_handle_irq
Jan 30 12:55:31.993808 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 12:55:31.993815 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 12:55:31.993822 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 12:55:31.993829 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 12:55:31.993837 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 12:55:31.993844 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 30 12:55:31.993851 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 30 12:55:31.993858 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 12:55:31.993867 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:55:31.993874 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 12:55:31.993882 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 12:55:31.993889 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 12:55:31.993896 kernel: arm-pv: using stolen time PV
Jan 30 12:55:31.993904 kernel: Console: colour dummy device 80x25
Jan 30 12:55:31.993913 kernel: ACPI: Core revision 20230628
Jan 30 12:55:31.993921 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 12:55:31.993930 kernel: pid_max: default: 32768 minimum: 301
Jan 30 12:55:31.993938 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 12:55:31.993947 kernel: landlock: Up and running.
Jan 30 12:55:31.993954 kernel: SELinux: Initializing.
Jan 30 12:55:31.993961 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 12:55:31.993969 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 12:55:31.993976 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 12:55:31.993984 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 12:55:31.993991 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 12:55:31.993998 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 12:55:31.994006 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 12:55:31.994014 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 12:55:31.994022 kernel: Remapping and enabling EFI services.
Jan 30 12:55:31.994065 kernel: smp: Bringing up secondary CPUs ...
Jan 30 12:55:31.994072 kernel: Detected PIPT I-cache on CPU1
Jan 30 12:55:31.994080 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 12:55:31.994088 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 30 12:55:31.994095 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:55:31.994102 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 12:55:31.994110 kernel: Detected PIPT I-cache on CPU2
Jan 30 12:55:31.994117 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 30 12:55:31.994128 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 30 12:55:31.994135 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:55:31.994148 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 30 12:55:31.994157 kernel: Detected PIPT I-cache on CPU3
Jan 30 12:55:31.994165 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 30 12:55:31.994172 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 30 12:55:31.994179 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:55:31.994187 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 30 12:55:31.994194 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 12:55:31.994204 kernel: SMP: Total of 4 processors activated.
Jan 30 12:55:31.994211 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 12:55:31.994219 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 12:55:31.994226 kernel: CPU features: detected: Common not Private translations
Jan 30 12:55:31.994234 kernel: CPU features: detected: CRC32 instructions
Jan 30 12:55:31.994241 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 12:55:31.994249 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 12:55:31.994256 kernel: CPU features: detected: LSE atomic instructions
Jan 30 12:55:31.994265 kernel: CPU features: detected: Privileged Access Never
Jan 30 12:55:31.994273 kernel: CPU features: detected: RAS Extension Support
Jan 30 12:55:31.994281 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 12:55:31.994288 kernel: CPU: All CPU(s) started at EL1
Jan 30 12:55:31.994296 kernel: alternatives: applying system-wide alternatives
Jan 30 12:55:31.994304 kernel: devtmpfs: initialized
Jan 30 12:55:31.994312 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 12:55:31.994320 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 12:55:31.994327 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 12:55:31.994336 kernel: SMBIOS 3.0.0 present.
Jan 30 12:55:31.994344 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 30 12:55:31.994352 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 12:55:31.994359 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 12:55:31.994367 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 12:55:31.994374 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 12:55:31.994382 kernel: audit: initializing netlink subsys (disabled)
Jan 30 12:55:31.994389 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jan 30 12:55:31.994397 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 12:55:31.994423 kernel: cpuidle: using governor menu
Jan 30 12:55:31.994431 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 12:55:31.994439 kernel: ASID allocator initialised with 32768 entries
Jan 30 12:55:31.994447 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 12:55:31.994454 kernel: Serial: AMBA PL011 UART driver
Jan 30 12:55:31.994462 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 12:55:31.994469 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 12:55:31.994477 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 12:55:31.994485 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 12:55:31.994496 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 12:55:31.994504 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 12:55:31.994511 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 12:55:31.994519 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 12:55:31.994526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 12:55:31.994534 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 12:55:31.994541 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 12:55:31.994549 kernel: ACPI: Added _OSI(Module Device)
Jan 30 12:55:31.994556 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 12:55:31.994566 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 12:55:31.994573 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 12:55:31.994581 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 12:55:31.994589 kernel: ACPI: Interpreter enabled
Jan 30 12:55:31.994596 kernel: ACPI: Using GIC for interrupt routing
Jan 30 12:55:31.994604 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 12:55:31.994612 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 12:55:31.994619 kernel: printk: console [ttyAMA0] enabled
Jan 30 12:55:31.994627 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 12:55:31.994791 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 12:55:31.994868 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 12:55:31.994938 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 12:55:31.995003 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 12:55:31.995088 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 12:55:31.995099 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 12:55:31.995107 kernel: PCI host bridge to bus 0000:00
Jan 30 12:55:31.995187 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 12:55:31.995249 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 12:55:31.995311 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 12:55:31.995372 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 12:55:31.995470 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 12:55:31.995553 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 12:55:31.995631 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 12:55:31.995700 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 12:55:31.995768 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 12:55:31.995837 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 12:55:31.995906 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 12:55:31.995974 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 12:55:31.996055 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 12:55:31.996121 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 12:55:31.996184 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 12:55:31.996194 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 12:55:31.996202 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 12:55:31.996210 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 12:55:31.996217 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 12:55:31.996225 kernel: iommu: Default domain type: Translated
Jan 30 12:55:31.996233 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 12:55:31.996240 kernel: efivars: Registered efivars operations
Jan 30 12:55:31.996250 kernel: vgaarb: loaded
Jan 30 12:55:31.996258 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 12:55:31.996265 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 12:55:31.996273 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 12:55:31.996281 kernel: pnp: PnP ACPI init
Jan 30 12:55:31.996359 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 12:55:31.996370 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 12:55:31.996378 kernel: NET: Registered PF_INET protocol family
Jan 30 12:55:31.996388 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 12:55:31.996396 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 12:55:31.996409 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 12:55:31.996418 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 12:55:31.996426 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 12:55:31.996434 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 12:55:31.996442 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 12:55:31.996450 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 12:55:31.996457 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 12:55:31.996468 kernel: PCI: CLS 0 bytes, default 64
Jan 30 12:55:31.996476 kernel: kvm [1]: HYP mode not available
Jan 30 12:55:31.996483 kernel: Initialise system trusted keyrings
Jan 30 12:55:31.996490 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 12:55:31.996498 kernel: Key type asymmetric registered
Jan 30 12:55:31.996505 kernel: Asymmetric key parser 'x509' registered
Jan 30 12:55:31.996513 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 12:55:31.996521 kernel: io scheduler mq-deadline registered
Jan 30 12:55:31.996528 kernel: io scheduler kyber registered
Jan 30 12:55:31.996537 kernel: io scheduler bfq registered
Jan 30 12:55:31.996545 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 12:55:31.996553 kernel: ACPI: button: Power Button [PWRB]
Jan 30 12:55:31.996561 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 12:55:31.996644 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 12:55:31.996655 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 12:55:31.996663 kernel: thunder_xcv, ver 1.0
Jan 30 12:55:31.996670 kernel: thunder_bgx, ver 1.0
Jan 30 12:55:31.996677 kernel: nicpf, ver 1.0
Jan 30 12:55:31.996687 kernel: nicvf, ver 1.0
Jan 30 12:55:31.996766 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 12:55:31.996833 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T12:55:31 UTC (1738241731)
Jan 30 12:55:31.996843 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 12:55:31.996851 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 12:55:31.996860 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 12:55:31.996868 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 12:55:31.996875 kernel: NET: Registered PF_INET6 protocol family
Jan 30 12:55:31.996885 kernel: Segment Routing with IPv6
Jan 30 12:55:31.996893 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 12:55:31.996900 kernel: NET: Registered PF_PACKET protocol family
Jan 30 12:55:31.996908 kernel: Key type dns_resolver registered
Jan 30 12:55:31.996916 kernel: registered taskstats version 1
Jan 30 12:55:31.996924 kernel: Loading compiled-in X.509 certificates
Jan 30 12:55:31.996937 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 12:55:31.996945 kernel: Key type .fscrypt registered
Jan 30 12:55:31.996953 kernel: Key type fscrypt-provisioning registered
Jan 30 12:55:31.996962 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 12:55:31.996970 kernel: ima: Allocated hash algorithm: sha1
Jan 30 12:55:31.996978 kernel: ima: No architecture policies found
Jan 30 12:55:31.996986 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 12:55:31.996993 kernel: clk: Disabling unused clocks
Jan 30 12:55:31.997001 kernel: Freeing unused kernel memory: 39360K
Jan 30 12:55:31.997009 kernel: Run /init as init process
Jan 30 12:55:31.997016 kernel: with arguments:
Jan 30 12:55:31.997034 kernel: /init
Jan 30 12:55:31.997043 kernel: with environment:
Jan 30 12:55:31.997051 kernel: HOME=/
Jan 30 12:55:31.997058 kernel: TERM=linux
Jan 30 12:55:31.997066 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 12:55:31.997075 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 12:55:31.997086 systemd[1]: Detected virtualization kvm.
Jan 30 12:55:31.997094 systemd[1]: Detected architecture arm64.
Jan 30 12:55:31.997103 systemd[1]: Running in initrd.
Jan 30 12:55:31.997112 systemd[1]: No hostname configured, using default hostname.
Jan 30 12:55:31.997119 systemd[1]: Hostname set to <localhost>.
Jan 30 12:55:31.997128 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 12:55:31.997136 systemd[1]: Queued start job for default target initrd.target.
Jan 30 12:55:31.997144 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:55:31.997152 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:55:31.997161 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 12:55:31.997171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 12:55:31.997180 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 12:55:31.997188 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 12:55:31.997198 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 12:55:31.997206 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 12:55:31.997214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:55:31.997222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:55:31.997232 systemd[1]: Reached target paths.target - Path Units.
Jan 30 12:55:31.997240 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 12:55:31.997251 systemd[1]: Reached target swap.target - Swaps.
Jan 30 12:55:31.997259 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 12:55:31.997267 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 12:55:31.997276 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 12:55:31.997284 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 12:55:31.997292 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 12:55:31.997300 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:55:31.997310 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:55:31.997318 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:55:31.997327 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 12:55:31.997335 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 12:55:31.997343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 12:55:31.997351 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 12:55:31.997359 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 12:55:31.997369 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 12:55:31.997379 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 12:55:31.997387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:31.997395 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 12:55:31.997407 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:55:31.997418 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 12:55:31.997428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 12:55:31.997439 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 12:55:31.997447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 12:55:31.997456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:31.997490 systemd-journald[239]: Collecting audit messages is disabled.
Jan 30 12:55:31.997513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 12:55:31.997522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:55:31.997531 systemd-journald[239]: Journal started
Jan 30 12:55:31.997551 systemd-journald[239]: Runtime Journal (/run/log/journal/34eb3411c2514a8cb8faac526f00212b) is 5.9M, max 47.3M, 41.4M free.
Jan 30 12:55:31.969803 systemd-modules-load[240]: Inserted module 'overlay'
Jan 30 12:55:32.001174 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 12:55:32.003377 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 30 12:55:32.004116 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 12:55:32.004139 kernel: Bridge firewalling registered
Jan 30 12:55:32.011461 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:55:32.023224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 12:55:32.026892 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 12:55:32.029314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:55:32.033208 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 12:55:32.035535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:55:32.038004 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:55:32.041417 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 12:55:32.051735 dracut-cmdline[272]: dracut-dracut-053
Jan 30 12:55:32.055271 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 12:55:32.076742 systemd-resolved[276]: Positive Trust Anchors:
Jan 30 12:55:32.076761 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 12:55:32.076791 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 12:55:32.082198 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jan 30 12:55:32.087373 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 12:55:32.088321 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:55:32.133075 kernel: SCSI subsystem initialized
Jan 30 12:55:32.141166 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 12:55:32.151073 kernel: iscsi: registered transport (tcp)
Jan 30 12:55:32.165081 kernel: iscsi: registered transport (qla4xxx)
Jan 30 12:55:32.165138 kernel: QLogic iSCSI HBA Driver
Jan 30 12:55:32.219976 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 12:55:32.232243 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 12:55:32.261016 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 12:55:32.261106 kernel: device-mapper: uevent: version 1.0.3
Jan 30 12:55:32.262045 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 12:55:32.317060 kernel: raid6: neonx8 gen() 15606 MB/s
Jan 30 12:55:32.334050 kernel: raid6: neonx4 gen() 11906 MB/s
Jan 30 12:55:32.351056 kernel: raid6: neonx2 gen() 11214 MB/s
Jan 30 12:55:32.368043 kernel: raid6: neonx1 gen() 10423 MB/s
Jan 30 12:55:32.385045 kernel: raid6: int64x8 gen() 6895 MB/s
Jan 30 12:55:32.402047 kernel: raid6: int64x4 gen() 7306 MB/s
Jan 30 12:55:32.419038 kernel: raid6: int64x2 gen() 6124 MB/s
Jan 30 12:55:32.436074 kernel: raid6: int64x1 gen() 5020 MB/s
Jan 30 12:55:32.436118 kernel: raid6: using algorithm neonx8 gen() 15606 MB/s
Jan 30 12:55:32.454054 kernel: raid6: .... xor() 11830 MB/s, rmw enabled
Jan 30 12:55:32.454079 kernel: raid6: using neon recovery algorithm
Jan 30 12:55:32.459273 kernel: xor: measuring software checksum speed
Jan 30 12:55:32.459299 kernel: 8regs : 19726 MB/sec
Jan 30 12:55:32.460312 kernel: 32regs : 19191 MB/sec
Jan 30 12:55:32.460330 kernel: arm64_neon : 27070 MB/sec
Jan 30 12:55:32.460340 kernel: xor: using function: arm64_neon (27070 MB/sec)
Jan 30 12:55:32.511066 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 12:55:32.523307 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 12:55:32.534264 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:55:32.549100 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jan 30 12:55:32.552475 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:55:32.562237 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 12:55:32.574904 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Jan 30 12:55:32.604701 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 12:55:32.614246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 12:55:32.656121 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:55:32.663326 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 12:55:32.677808 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 12:55:32.679260 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 12:55:32.681290 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:55:32.683781 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 12:55:32.694048 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 12:55:32.700297 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 12:55:32.716704 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 12:55:32.716814 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 12:55:32.716825 kernel: GPT:9289727 != 19775487
Jan 30 12:55:32.716835 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 12:55:32.716852 kernel: GPT:9289727 != 19775487
Jan 30 12:55:32.716863 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 12:55:32.716872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 12:55:32.706817 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 12:55:32.710774 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 12:55:32.710888 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:55:32.712225 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 12:55:32.714904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:55:32.715067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:32.717066 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:32.729338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:32.739680 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (511)
Jan 30 12:55:32.741068 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514)
Jan 30 12:55:32.747060 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:32.754107 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 12:55:32.758827 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 12:55:32.763493 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 12:55:32.767294 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 12:55:32.768232 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 12:55:32.778214 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 12:55:32.780020 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 12:55:32.792936 disk-uuid[551]: Primary Header is updated.
Jan 30 12:55:32.792936 disk-uuid[551]: Secondary Entries is updated.
Jan 30 12:55:32.792936 disk-uuid[551]: Secondary Header is updated.
Jan 30 12:55:32.797843 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:55:32.801056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 12:55:33.815870 disk-uuid[558]: The operation has completed successfully.
Jan 30 12:55:33.817177 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 12:55:33.844569 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 12:55:33.844675 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 12:55:33.870263 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 12:55:33.873895 sh[575]: Success
Jan 30 12:55:33.888222 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 12:55:33.927064 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 12:55:33.946572 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 12:55:33.948106 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 12:55:33.959622 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 12:55:33.959673 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:55:33.959692 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 12:55:33.959703 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 12:55:33.960268 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 12:55:33.964633 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 12:55:33.965882 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 12:55:33.966755 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 12:55:33.969561 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 12:55:33.981682 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:33.981748 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:55:33.981760 kernel: BTRFS info (device vda6): using free space tree
Jan 30 12:55:33.986066 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 12:55:33.994982 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 12:55:33.997043 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:34.006796 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 12:55:34.015234 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 12:55:34.082832 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 12:55:34.096283 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 12:55:34.122590 systemd-networkd[764]: lo: Link UP
Jan 30 12:55:34.122601 systemd-networkd[764]: lo: Gained carrier
Jan 30 12:55:34.123309 systemd-networkd[764]: Enumeration completed
Jan 30 12:55:34.124934 ignition[672]: Ignition 2.19.0
Jan 30 12:55:34.123432 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 12:55:34.124940 ignition[672]: Stage: fetch-offline
Jan 30 12:55:34.123845 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:55:34.124979 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:34.123849 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 12:55:34.124987 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:34.124511 systemd[1]: Reached target network.target - Network.
Jan 30 12:55:34.125147 ignition[672]: parsed url from cmdline: ""
Jan 30 12:55:34.125511 systemd-networkd[764]: eth0: Link UP
Jan 30 12:55:34.125150 ignition[672]: no config URL provided
Jan 30 12:55:34.125515 systemd-networkd[764]: eth0: Gained carrier
Jan 30 12:55:34.125155 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 12:55:34.125522 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:55:34.125162 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Jan 30 12:55:34.125184 ignition[672]: op(1): [started] loading QEMU firmware config module
Jan 30 12:55:34.125188 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 12:55:34.133629 ignition[672]: op(1): [finished] loading QEMU firmware config module
Jan 30 12:55:34.157103 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 12:55:34.181114 ignition[672]: parsing config with SHA512: 2973bb3c6e7be42ae1b1217bc5026492122479a794eb43083164d69df8aa5a863c31ef065824b6eb936966ff6079d12da9e130977fc22422186a5280de4f3f7e
Jan 30 12:55:34.187668 unknown[672]: fetched base config from "system"
Jan 30 12:55:34.187677 unknown[672]: fetched user config from "qemu"
Jan 30 12:55:34.188173 ignition[672]: fetch-offline: fetch-offline passed
Jan 30 12:55:34.188235 ignition[672]: Ignition finished successfully
Jan 30 12:55:34.191743 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 12:55:34.193278 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 12:55:34.199197 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 12:55:34.210333 ignition[771]: Ignition 2.19.0
Jan 30 12:55:34.210343 ignition[771]: Stage: kargs
Jan 30 12:55:34.210544 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:34.210554 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:34.211621 ignition[771]: kargs: kargs passed
Jan 30 12:55:34.214455 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 12:55:34.211673 ignition[771]: Ignition finished successfully
Jan 30 12:55:34.235311 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 12:55:34.246668 ignition[779]: Ignition 2.19.0
Jan 30 12:55:34.246679 ignition[779]: Stage: disks
Jan 30 12:55:34.246884 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:34.246893 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:34.248114 ignition[779]: disks: disks passed
Jan 30 12:55:34.250457 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 12:55:34.248166 ignition[779]: Ignition finished successfully
Jan 30 12:55:34.251957 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 12:55:34.253818 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 12:55:34.256364 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 12:55:34.257384 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 12:55:34.259527 systemd[1]: Reached target basic.target - Basic System.
Jan 30 12:55:34.273250 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 12:55:34.284276 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 12:55:34.288270 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 12:55:34.294202 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 12:55:34.344056 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 12:55:34.344654 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 12:55:34.345818 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 12:55:34.365152 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 12:55:34.366973 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 12:55:34.367890 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 12:55:34.367940 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 12:55:34.367967 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 12:55:34.374798 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
Jan 30 12:55:34.374299 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 12:55:34.376404 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 12:55:34.380513 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:34.380551 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:55:34.380562 kernel: BTRFS info (device vda6): using free space tree
Jan 30 12:55:34.383090 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 12:55:34.384544 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 12:55:34.441011 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 12:55:34.445814 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jan 30 12:55:34.450802 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 12:55:34.455542 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 12:55:34.561307 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 12:55:34.574182 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 12:55:34.575679 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 12:55:34.582041 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:34.604504 ignition[912]: INFO : Ignition 2.19.0
Jan 30 12:55:34.604504 ignition[912]: INFO : Stage: mount
Jan 30 12:55:34.607185 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:34.607185 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:34.607185 ignition[912]: INFO : mount: mount passed
Jan 30 12:55:34.607185 ignition[912]: INFO : Ignition finished successfully
Jan 30 12:55:34.606918 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 12:55:34.619163 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 12:55:34.620130 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 12:55:34.958438 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 12:55:34.967265 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 12:55:34.974848 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Jan 30 12:55:34.974883 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 12:55:34.975703 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:55:34.975720 kernel: BTRFS info (device vda6): using free space tree
Jan 30 12:55:34.979056 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 12:55:34.979913 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 12:55:35.001525 ignition[943]: INFO : Ignition 2.19.0
Jan 30 12:55:35.001525 ignition[943]: INFO : Stage: files
Jan 30 12:55:35.002912 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:35.002912 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:35.002912 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 12:55:35.006769 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 12:55:35.006769 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 12:55:35.009732 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 12:55:35.009732 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 12:55:35.009732 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 12:55:35.009277 unknown[943]: wrote ssh authorized keys file for user: core
Jan 30 12:55:35.013908 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 12:55:35.013908 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 12:55:35.013908 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 12:55:35.013908 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 12:55:35.240503 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 12:55:35.406002 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 12:55:35.406002 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 12:55:35.409538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 12:55:35.726298 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 12:55:35.774062 systemd-networkd[764]: eth0: Gained IPv6LL
Jan 30 12:55:35.967070 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 12:55:35.967070 ignition[943]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jan 30 12:55:35.970188 ignition[943]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 12:55:36.007137 ignition[943]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 12:55:36.011473 ignition[943]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 12:55:36.013905 ignition[943]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 12:55:36.013905 ignition[943]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 12:55:36.013905 ignition[943]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 12:55:36.013905 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 12:55:36.013905 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 12:55:36.013905 ignition[943]: INFO : files: files passed
Jan 30 12:55:36.013905 ignition[943]: INFO : Ignition finished successfully
Jan 30 12:55:36.015683 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 12:55:36.029272 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 12:55:36.031943 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 12:55:36.033383 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 12:55:36.033484 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 12:55:36.040648 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 12:55:36.044942 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:55:36.044942 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:55:36.048900 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:55:36.050701 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 12:55:36.052407 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 12:55:36.062251 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 12:55:36.088514 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 12:55:36.089497 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 12:55:36.090882 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 12:55:36.092333 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 12:55:36.093927 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 12:55:36.094865 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 12:55:36.116012 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:55:36.128269 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 12:55:36.138758 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:55:36.140250 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:55:36.142398 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 12:55:36.144011 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 12:55:36.144250 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:55:36.146477 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 12:55:36.148383 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 12:55:36.150040 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 12:55:36.151803 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 12:55:36.153713 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 12:55:36.155614 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 12:55:36.157352 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 12:55:36.159253 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 12:55:36.161103 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 12:55:36.162815 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 12:55:36.164414 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 12:55:36.164559 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 12:55:36.166742 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:55:36.168770 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:55:36.170663 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 12:55:36.175155 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:55:36.177241 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 12:55:36.177397 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 12:55:36.180092 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 12:55:36.180229 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 12:55:36.181975 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 12:55:36.183427 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 12:55:36.188143 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:55:36.189279 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 12:55:36.191051 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 12:55:36.192405 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 12:55:36.192508 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 12:55:36.193758 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 12:55:36.193841 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 12:55:36.195102 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 12:55:36.195231 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 12:55:36.196826 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 12:55:36.196943 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 12:55:36.214680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 12:55:36.216258 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 12:55:36.217009 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 12:55:36.217147 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:55:36.218936 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 12:55:36.219054 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 12:55:36.234335 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 12:55:36.236957 ignition[998]: INFO : Ignition 2.19.0
Jan 30 12:55:36.236957 ignition[998]: INFO : Stage: umount
Jan 30 12:55:36.236957 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:36.236957 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:55:36.236957 ignition[998]: INFO : umount: umount passed
Jan 30 12:55:36.236957 ignition[998]: INFO : Ignition finished successfully
Jan 30 12:55:36.234874 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 12:55:36.236095 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 12:55:36.238518 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 12:55:36.238639 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 12:55:36.241360 systemd[1]: Stopped target network.target - Network.
Jan 30 12:55:36.243466 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 12:55:36.243565 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 12:55:36.244544 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 12:55:36.244591 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 12:55:36.246679 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 12:55:36.246736 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 12:55:36.249658 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 12:55:36.249715 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 12:55:36.251622 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 12:55:36.253504 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 12:55:36.260021 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 12:55:36.260156 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 12:55:36.262086 systemd-networkd[764]: eth0: DHCPv6 lease lost
Jan 30 12:55:36.263533 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 12:55:36.263931 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:55:36.266244 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 12:55:36.266397 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 12:55:36.268726 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 12:55:36.268793 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:55:36.278178 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 12:55:36.279112 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 12:55:36.279189 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 12:55:36.281517 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 12:55:36.281663 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:55:36.283362 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 12:55:36.283423 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:55:36.285765 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:55:36.296265 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 12:55:36.296422 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 12:55:36.301785 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 12:55:36.301958 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:55:36.304748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 12:55:36.304795 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:55:36.306419 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 12:55:36.306459 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:55:36.308985 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 12:55:36.309055 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 12:55:36.312634 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 12:55:36.312691 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 12:55:36.315729 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 12:55:36.315779 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:55:36.326241 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 12:55:36.327358 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 12:55:36.327435 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:55:36.330001 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 12:55:36.330063 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 12:55:36.332301 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 12:55:36.332354 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:55:36.334705 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:55:36.334757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:36.337358 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 12:55:36.337478 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 12:55:36.339483 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 12:55:36.339581 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 12:55:36.342233 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 12:55:36.343404 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 12:55:36.343470 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 12:55:36.346553 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 12:55:36.357770 systemd[1]: Switching root.
Jan 30 12:55:36.393880 systemd-journald[239]: Journal stopped
Jan 30 12:55:37.231979 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
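With the initrd torn down and the root switched, everything Ignition provisioned is now visible under / rather than /sysroot, and the marker written by op(15) persists as /etc/.ignition-result.json. A quick way to confirm provisioning on the booted host (jq is present in the image, as the jq[...] lines later in this log suggest) is something like:

    jq . /etc/.ignition-result.json

The file's exact schema is not shown in this log; it typically records at least whether a user config was provided and when provisioning ran.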
Jan 30 12:55:37.232062 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 12:55:37.232078 kernel: SELinux: policy capability open_perms=1
Jan 30 12:55:37.232088 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 12:55:37.232100 kernel: SELinux: policy capability always_check_network=0
Jan 30 12:55:37.232110 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 12:55:37.232121 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 12:55:37.232130 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 12:55:37.232139 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 12:55:37.232149 kernel: audit: type=1403 audit(1738241736.649:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 12:55:37.232159 systemd[1]: Successfully loaded SELinux policy in 39.910ms.
Jan 30 12:55:37.232179 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.450ms.
Jan 30 12:55:37.232192 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 12:55:37.232204 systemd[1]: Detected virtualization kvm.
Jan 30 12:55:37.232214 systemd[1]: Detected architecture arm64.
Jan 30 12:55:37.232224 systemd[1]: Detected first boot.
Jan 30 12:55:37.232235 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 12:55:37.232249 zram_generator::config[1065]: No configuration found.
Jan 30 12:55:37.232260 systemd[1]: Populated /etc with preset unit settings.
Jan 30 12:55:37.232343 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 12:55:37.232370 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 12:55:37.232390 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 12:55:37.232401 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 12:55:37.232414 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 12:55:37.232426 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 12:55:37.232436 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 12:55:37.232448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 12:55:37.232458 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 12:55:37.232468 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 12:55:37.232481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:55:37.232493 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:55:37.232504 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 12:55:37.232515 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 12:55:37.232526 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 12:55:37.232537 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 12:55:37.232548 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 12:55:37.232559 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:55:37.232570 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 12:55:37.232583 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:55:37.232594 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 12:55:37.232604 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 12:55:37.232615 systemd[1]: Reached target swap.target - Swaps.
Jan 30 12:55:37.232625 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 12:55:37.232636 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 12:55:37.232648 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 12:55:37.232659 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 12:55:37.232671 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:55:37.232682 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:55:37.232693 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:55:37.232703 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 12:55:37.232715 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 12:55:37.232726 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 12:55:37.232736 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 12:55:37.232747 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 12:55:37.232757 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 12:55:37.232774 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 12:55:37.232785 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 12:55:37.232796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:55:37.232807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 12:55:37.232817 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 12:55:37.232828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:55:37.232838 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 12:55:37.232848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:55:37.232859 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 12:55:37.232871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:55:37.232883 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 12:55:37.232894 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 30 12:55:37.232905 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
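The systemd-journald.service warning above is benign: the stock unit sandboxes journald with an IP filter, and since this build reports -BPF_FRAMEWORK in the feature string logged earlier, BPF/cgroup firewalling is unavailable and the directive is skipped rather than enforced. The hardening involved is a directive along these lines (a sketch, not the full unit):

    [Service]
    IPAddressDeny=any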
Jan 30 12:55:37.232915 kernel: fuse: init (API version 7.39)
Jan 30 12:55:37.232925 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 12:55:37.232935 kernel: ACPI: bus type drm_connector registered
Jan 30 12:55:37.232945 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 12:55:37.232955 kernel: loop: module loaded
Jan 30 12:55:37.232967 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 12:55:37.232978 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 12:55:37.233012 systemd-journald[1143]: Collecting audit messages is disabled.
Jan 30 12:55:37.233052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 12:55:37.233066 systemd-journald[1143]: Journal started
Jan 30 12:55:37.233088 systemd-journald[1143]: Runtime Journal (/run/log/journal/34eb3411c2514a8cb8faac526f00212b) is 5.9M, max 47.3M, 41.4M free.
Jan 30 12:55:37.239812 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 12:55:37.243153 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 12:55:37.244113 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 12:55:37.245147 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 12:55:37.245958 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 12:55:37.246934 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 12:55:37.247935 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 12:55:37.249128 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:55:37.250287 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 12:55:37.250470 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 12:55:37.251579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:55:37.251729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:55:37.252881 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 12:55:37.253073 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 12:55:37.254129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:55:37.254285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:55:37.255661 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 12:55:37.255823 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 12:55:37.257104 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 12:55:37.258194 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:55:37.258405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:55:37.259584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:55:37.261104 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 12:55:37.262278 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 12:55:37.273695 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 12:55:37.287209 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 12:55:37.289216 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 12:55:37.290058 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 12:55:37.293272 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 12:55:37.298251 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 12:55:37.299150 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:55:37.301133 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 12:55:37.302151 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:55:37.306270 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 12:55:37.306639 systemd-journald[1143]: Time spent on flushing to /var/log/journal/34eb3411c2514a8cb8faac526f00212b is 12.324ms for 844 entries.
Jan 30 12:55:37.306639 systemd-journald[1143]: System Journal (/var/log/journal/34eb3411c2514a8cb8faac526f00212b) is 8.0M, max 195.6M, 187.6M free.
Jan 30 12:55:37.325634 systemd-journald[1143]: Received client request to flush runtime journal.
Jan 30 12:55:37.309695 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 12:55:37.318681 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:55:37.320306 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 12:55:37.321826 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 12:55:37.323355 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 12:55:37.328231 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 12:55:37.337277 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 12:55:37.339138 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 12:55:37.347815 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 30 12:55:37.347827 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 30 12:55:37.353988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:55:37.355875 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 12:55:37.360506 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 12:55:37.364208 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 12:55:37.393173 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 12:55:37.401304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 12:55:37.413762 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jan 30 12:55:37.413783 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jan 30 12:55:37.418248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:55:37.827997 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
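The journal caps reported above (runtime journal limited to 47.3M, system journal to 195.6M) follow journald's default policy of reserving a fraction of the backing filesystem rather than any explicit setting. If fixed caps are preferred, they can be pinned in journald.conf; a sketch with illustrative values:

    # /etc/systemd/journald.conf (sketch; values are illustrative, not from this log)
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=200M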
Jan 30 12:55:37.836249 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:55:37.856634 systemd-udevd[1224]: Using default interface naming scheme 'v255'.
Jan 30 12:55:37.875846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:55:37.885295 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 12:55:37.906407 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 12:55:37.908721 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jan 30 12:55:37.918190 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1236)
Jan 30 12:55:37.948691 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 12:55:37.983240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 12:55:38.017321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:38.029637 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 12:55:38.046880 systemd-networkd[1231]: lo: Link UP
Jan 30 12:55:38.046893 systemd-networkd[1231]: lo: Gained carrier
Jan 30 12:55:38.047240 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 12:55:38.047638 systemd-networkd[1231]: Enumeration completed
Jan 30 12:55:38.048128 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:55:38.048131 systemd-networkd[1231]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 12:55:38.048816 systemd-networkd[1231]: eth0: Link UP
Jan 30 12:55:38.048819 systemd-networkd[1231]: eth0: Gained carrier
Jan 30 12:55:38.048831 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:55:38.049048 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 12:55:38.051968 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 12:55:38.056605 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 12:55:38.069235 systemd-networkd[1231]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 12:55:38.073509 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:38.091624 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 12:55:38.092938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:55:38.105342 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 12:55:38.109547 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 12:55:38.135725 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 12:55:38.136986 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 12:55:38.138035 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 12:55:38.138068 systemd[1]: Reached target local-fs.target - Local File Systems.
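The zz-default.network match above is Flatcar's catch-all fallback: the zz- prefix sorts it after any more specific .network files, so eth0 falls through to it and gets DHCP, which is where the 10.0.0.65/16 lease comes from. Its effective content is approximately the following (a sketch; the shipped file may differ in detail):

    # /usr/lib/systemd/network/zz-default.network (approximate sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes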
Jan 30 12:55:38.138852 systemd[1]: Reached target machines.target - Containers.
Jan 30 12:55:38.140808 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 12:55:38.156223 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 12:55:38.158485 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 12:55:38.159649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:55:38.160782 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 12:55:38.165309 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 12:55:38.168184 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 12:55:38.169874 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 12:55:38.188055 kernel: loop0: detected capacity change from 0 to 114328
Jan 30 12:55:38.189435 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 12:55:38.190408 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 12:55:38.191960 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 12:55:38.203051 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 12:55:38.241093 kernel: loop1: detected capacity change from 0 to 114432
Jan 30 12:55:38.284054 kernel: loop2: detected capacity change from 0 to 194096
Jan 30 12:55:38.336131 kernel: loop3: detected capacity change from 0 to 114328
Jan 30 12:55:38.347113 kernel: loop4: detected capacity change from 0 to 114432
Jan 30 12:55:38.355448 kernel: loop5: detected capacity change from 0 to 194096
Jan 30 12:55:38.370935 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 12:55:38.371424 (sd-merge)[1291]: Merged extensions into '/usr'.
Jan 30 12:55:38.380863 systemd[1]: Reloading requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 12:55:38.380880 systemd[1]: Reloading...
Jan 30 12:55:38.433306 zram_generator::config[1320]: No configuration found.
Jan 30 12:55:38.485224 ldconfig[1274]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 12:55:38.552151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:55:38.597803 systemd[1]: Reloading finished in 216 ms.
Jan 30 12:55:38.615590 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 12:55:38.616930 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 12:55:38.634258 systemd[1]: Starting ensure-sysext.service...
Jan 30 12:55:38.636343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 12:55:38.644466 systemd[1]: Reloading requested from client PID 1363 ('systemctl') (unit ensure-sysext.service)...
Jan 30 12:55:38.644484 systemd[1]: Reloading...
Jan 30 12:55:38.661163 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
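The loop0–loop5 capacity changes and the (sd-merge) lines above are systemd-sysext at work: each .raw extension image, including the kubernetes image that Ignition linked into /etc/extensions earlier, is attached as a loop device and overlaid onto /usr, after which systemd reloads to pick up the newly visible unit files. On a running host the merge can be inspected or redone with:

    systemd-sysext status    # list merged extensions and their hierarchies
    systemd-sysext refresh   # unmerge and re-merge after adding/removing images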
Jan 30 12:55:38.661471 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 12:55:38.662200 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 12:55:38.662464 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Jan 30 12:55:38.662511 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Jan 30 12:55:38.665946 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 12:55:38.665962 systemd-tmpfiles[1364]: Skipping /boot
Jan 30 12:55:38.673611 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 12:55:38.673627 systemd-tmpfiles[1364]: Skipping /boot
Jan 30 12:55:38.714065 zram_generator::config[1395]: No configuration found.
Jan 30 12:55:38.827841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:55:38.872097 systemd[1]: Reloading finished in 227 ms.
Jan 30 12:55:38.889097 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:55:38.906396 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 12:55:38.908831 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 12:55:38.911340 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 12:55:38.914227 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 12:55:38.917213 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 12:55:38.924937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:55:38.928552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:55:38.930897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:55:38.934297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:55:38.935281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:55:38.935955 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:55:38.936152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:55:38.947669 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 12:55:38.950934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:55:38.951532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:55:38.954298 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:55:38.955068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:55:38.959224 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 12:55:38.966590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:55:38.976421 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:55:38.981381 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
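The "Duplicate line for path ... ignoring" messages above are harmless: several tmpfiles.d fragments declare the same path, and systemd-tmpfiles honours the first declaration it reads, ignoring later ones. Each declaration follows the tmpfiles.d(5) line format; a hypothetical entry for illustration:

    # type  path              mode  user  group  age  argument
    d       /run/example-dir  0755  root  root   -    -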
Jan 30 12:55:38.985975 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:55:38.990981 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:55:38.992188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:55:38.996388 augenrules[1474]: No rules
Jan 30 12:55:39.011372 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 12:55:39.013704 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 12:55:39.015628 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 12:55:39.017572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:55:39.017750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:55:39.019598 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 12:55:39.019755 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 12:55:39.021404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:55:39.021562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:55:39.023322 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:55:39.023489 systemd-resolved[1439]: Positive Trust Anchors:
Jan 30 12:55:39.023507 systemd-resolved[1439]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 12:55:39.023542 systemd-resolved[1439]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 12:55:39.023604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:55:39.025595 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 12:55:39.032227 systemd[1]: Finished ensure-sysext.service.
Jan 30 12:55:39.035140 systemd-resolved[1439]: Defaulting to hostname 'linux'.
Jan 30 12:55:39.037979 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 12:55:39.039604 systemd[1]: Reached target network.target - Network.
Jan 30 12:55:39.040638 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:55:39.042248 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:55:39.042377 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:55:39.055260 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 12:55:39.056578 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 12:55:39.116300 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 12:55:39.117234 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 12:55:39.117289 systemd-timesyncd[1497]: Initial clock synchronization to Thu 2025-01-30 12:55:38.878381 UTC.
Jan 30 12:55:39.118088 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 12:55:39.119294 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 12:55:39.120654 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 12:55:39.121993 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 12:55:39.123363 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 12:55:39.123408 systemd[1]: Reached target paths.target - Path Units.
Jan 30 12:55:39.124362 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 12:55:39.125622 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 12:55:39.126918 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 12:55:39.128285 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 12:55:39.130153 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 12:55:39.133057 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 12:55:39.135694 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 12:55:39.142245 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 12:55:39.143181 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 12:55:39.143995 systemd[1]: Reached target basic.target - Basic System.
Jan 30 12:55:39.144975 systemd[1]: System is tainted: cgroupsv1
Jan 30 12:55:39.145051 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 12:55:39.145075 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 12:55:39.146407 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 12:55:39.148598 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 12:55:39.150585 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 12:55:39.155226 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 12:55:39.156311 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 12:55:39.157721 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 12:55:39.162251 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 12:55:39.170295 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 12:55:39.172711 jq[1503]: false
Jan 30 12:55:39.177526 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 12:55:39.188396 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 12:55:39.191647 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 12:55:39.195129 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 12:55:39.198974 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 12:55:39.199332 extend-filesystems[1504]: Found loop3
Jan 30 12:55:39.201082 extend-filesystems[1504]: Found loop4
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found loop5
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found vda
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found vda1
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found vda2
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found vda3
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found usr
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found vda4
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found vda6
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found vda7
Jan 30 12:55:39.205610 extend-filesystems[1504]: Found vda9
Jan 30 12:55:39.205610 extend-filesystems[1504]: Checking size of /dev/vda9
Jan 30 12:55:39.202458 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 12:55:39.214873 dbus-daemon[1502]: [system] SELinux support is enabled
Jan 30 12:55:39.202744 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 12:55:39.226923 jq[1520]: true
Jan 30 12:55:39.207528 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 12:55:39.207786 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 12:55:39.218597 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 12:55:39.222084 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 12:55:39.222433 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 12:55:39.236936 jq[1532]: true
Jan 30 12:55:39.251918 extend-filesystems[1504]: Resized partition /dev/vda9
Jan 30 12:55:39.255428 tar[1526]: linux-arm64/helm
Jan 30 12:55:39.262785 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1237)
Jan 30 12:55:39.259170 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 12:55:39.259203 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 12:55:39.259617 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 12:55:39.261099 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 12:55:39.261121 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 12:55:39.269127 extend-filesystems[1546]: resize2fs 1.47.1 (20-May-2024)
Jan 30 12:55:39.275702 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 30 12:55:39.317064 update_engine[1518]: I20250130 12:55:39.315232 1518 main.cc:92] Flatcar Update Engine starting
Jan 30 12:55:39.320413 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 12:55:39.322184 update_engine[1518]: I20250130 12:55:39.320458 1518 update_check_scheduler.cc:74] Next update check in 10m31s
Jan 30 12:55:39.323934 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
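For scale, the EXT4 resize reported above grows /dev/vda9 from 553472 to 1864699 blocks. Assuming the 4k block size that resize2fs reports when the resize completes, that works out to:

    553472 blocks  x 4096 B ≈ 2.27 GB (~2.1 GiB)
    1864699 blocks x 4096 B ≈ 7.64 GB (~7.1 GiB)

i.e. the root partition expanding on first boot to fill the virtual disk.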
Jan 30 12:55:39.330240 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 12:55:39.341100 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 12:55:39.348084 systemd-logind[1515]: New seat seat0. Jan 30 12:55:39.350481 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 12:55:39.372062 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 12:55:39.392708 extend-filesystems[1546]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 12:55:39.392708 extend-filesystems[1546]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 12:55:39.392708 extend-filesystems[1546]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 12:55:39.398261 extend-filesystems[1504]: Resized filesystem in /dev/vda9 Jan 30 12:55:39.397889 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 12:55:39.398393 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 12:55:39.402859 bash[1563]: Updated "/home/core/.ssh/authorized_keys" Jan 30 12:55:39.404245 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 12:55:39.407841 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 12:55:39.425218 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 12:55:39.515789 containerd[1534]: time="2025-01-30T12:55:39.515674080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 12:55:39.545252 containerd[1534]: time="2025-01-30T12:55:39.545191000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:39.547132 containerd[1534]: time="2025-01-30T12:55:39.547074760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:39.547132 containerd[1534]: time="2025-01-30T12:55:39.547119960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 12:55:39.547250 containerd[1534]: time="2025-01-30T12:55:39.547145200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 12:55:39.547330 containerd[1534]: time="2025-01-30T12:55:39.547309240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 12:55:39.547364 containerd[1534]: time="2025-01-30T12:55:39.547333000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:39.547419 containerd[1534]: time="2025-01-30T12:55:39.547402920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:39.547440 containerd[1534]: time="2025-01-30T12:55:39.547422920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:39.547677 containerd[1534]: time="2025-01-30T12:55:39.547638560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:39.547677 containerd[1534]: time="2025-01-30T12:55:39.547664760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:39.547729 containerd[1534]: time="2025-01-30T12:55:39.547680560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:39.547729 containerd[1534]: time="2025-01-30T12:55:39.547691200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:39.547791 containerd[1534]: time="2025-01-30T12:55:39.547775160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:39.548006 containerd[1534]: time="2025-01-30T12:55:39.547978680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:39.548354 containerd[1534]: time="2025-01-30T12:55:39.548320880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:39.548379 containerd[1534]: time="2025-01-30T12:55:39.548363520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 12:55:39.548491 containerd[1534]: time="2025-01-30T12:55:39.548474600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 12:55:39.548543 containerd[1534]: time="2025-01-30T12:55:39.548525760Z" level=info msg="metadata content store policy set" policy=shared Jan 30 12:55:39.554649 containerd[1534]: time="2025-01-30T12:55:39.554588600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 12:55:39.554792 containerd[1534]: time="2025-01-30T12:55:39.554668200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 12:55:39.554792 containerd[1534]: time="2025-01-30T12:55:39.554687240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 12:55:39.554792 containerd[1534]: time="2025-01-30T12:55:39.554704680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 12:55:39.554792 containerd[1534]: time="2025-01-30T12:55:39.554721640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 12:55:39.554946 containerd[1534]: time="2025-01-30T12:55:39.554917360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 12:55:39.556381 containerd[1534]: time="2025-01-30T12:55:39.556331360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 12:55:39.556616 containerd[1534]: time="2025-01-30T12:55:39.556596160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 12:55:39.556677 containerd[1534]: time="2025-01-30T12:55:39.556623240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 12:55:39.556702 containerd[1534]: time="2025-01-30T12:55:39.556639480Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 12:55:39.556722 containerd[1534]: time="2025-01-30T12:55:39.556708480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 12:55:39.556741 containerd[1534]: time="2025-01-30T12:55:39.556723800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 12:55:39.556760 containerd[1534]: time="2025-01-30T12:55:39.556738240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 12:55:39.556760 containerd[1534]: time="2025-01-30T12:55:39.556753560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 12:55:39.556799 containerd[1534]: time="2025-01-30T12:55:39.556769160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 12:55:39.556799 containerd[1534]: time="2025-01-30T12:55:39.556784920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 12:55:39.556843 containerd[1534]: time="2025-01-30T12:55:39.556797760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 12:55:39.556843 containerd[1534]: time="2025-01-30T12:55:39.556811840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 12:55:39.556843 containerd[1534]: time="2025-01-30T12:55:39.556838800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.556893 containerd[1534]: time="2025-01-30T12:55:39.556855040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.556893 containerd[1534]: time="2025-01-30T12:55:39.556869800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.556937 containerd[1534]: time="2025-01-30T12:55:39.556898520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.556937 containerd[1534]: time="2025-01-30T12:55:39.556913160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.556937 containerd[1534]: time="2025-01-30T12:55:39.556928280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.557005 containerd[1534]: time="2025-01-30T12:55:39.556941160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.557005 containerd[1534]: time="2025-01-30T12:55:39.556956200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.557005 containerd[1534]: time="2025-01-30T12:55:39.556970240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 12:55:39.557005 containerd[1534]: time="2025-01-30T12:55:39.556985720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.557005 containerd[1534]: time="2025-01-30T12:55:39.556999120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.557098 containerd[1534]: time="2025-01-30T12:55:39.557012640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.557098 containerd[1534]: time="2025-01-30T12:55:39.557044520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.557098 containerd[1534]: time="2025-01-30T12:55:39.557064400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 12:55:39.557098 containerd[1534]: time="2025-01-30T12:55:39.557087520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.557173 containerd[1534]: time="2025-01-30T12:55:39.557100720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.557173 containerd[1534]: time="2025-01-30T12:55:39.557114000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 12:55:39.557503 containerd[1534]: time="2025-01-30T12:55:39.557490320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 12:55:39.557545 containerd[1534]: time="2025-01-30T12:55:39.557512800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 12:55:39.557545 containerd[1534]: time="2025-01-30T12:55:39.557525360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 12:55:39.557545 containerd[1534]: time="2025-01-30T12:55:39.557540000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 12:55:39.558136 containerd[1534]: time="2025-01-30T12:55:39.557550520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 12:55:39.558136 containerd[1534]: time="2025-01-30T12:55:39.557575400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 12:55:39.558136 containerd[1534]: time="2025-01-30T12:55:39.557597560Z" level=info msg="NRI interface is disabled by configuration." Jan 30 12:55:39.558136 containerd[1534]: time="2025-01-30T12:55:39.557752200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 12:55:39.558765 containerd[1534]: time="2025-01-30T12:55:39.558677280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 12:55:39.558765 containerd[1534]: time="2025-01-30T12:55:39.558755360Z" level=info msg="Connect containerd service" Jan 30 12:55:39.558902 containerd[1534]: time="2025-01-30T12:55:39.558835120Z" level=info msg="using legacy CRI server" Jan 30 12:55:39.558902 containerd[1534]: time="2025-01-30T12:55:39.558847200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 12:55:39.558969 containerd[1534]: time="2025-01-30T12:55:39.558945600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 12:55:39.560502 containerd[1534]: time="2025-01-30T12:55:39.560460040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
12:55:39.561528 containerd[1534]: time="2025-01-30T12:55:39.560867840Z" level=info msg="Start subscribing containerd event" Jan 30 12:55:39.561528 containerd[1534]: time="2025-01-30T12:55:39.560931640Z" level=info msg="Start recovering state" Jan 30 12:55:39.561528 containerd[1534]: time="2025-01-30T12:55:39.561011400Z" level=info msg="Start event monitor" Jan 30 12:55:39.561528 containerd[1534]: time="2025-01-30T12:55:39.561041720Z" level=info msg="Start snapshots syncer" Jan 30 12:55:39.561528 containerd[1534]: time="2025-01-30T12:55:39.561054800Z" level=info msg="Start cni network conf syncer for default" Jan 30 12:55:39.561528 containerd[1534]: time="2025-01-30T12:55:39.561065040Z" level=info msg="Start streaming server" Jan 30 12:55:39.561697 containerd[1534]: time="2025-01-30T12:55:39.561660720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 12:55:39.561719 containerd[1534]: time="2025-01-30T12:55:39.561705680Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 12:55:39.561778 containerd[1534]: time="2025-01-30T12:55:39.561760720Z" level=info msg="containerd successfully booted in 0.047552s" Jan 30 12:55:39.561903 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 12:55:39.687034 tar[1526]: linux-arm64/LICENSE Jan 30 12:55:39.687131 tar[1526]: linux-arm64/README.md Jan 30 12:55:39.698477 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 12:55:39.805291 systemd-networkd[1231]: eth0: Gained IPv6LL Jan 30 12:55:39.808579 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 12:55:39.810137 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 12:55:39.816379 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 12:55:39.819186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:39.822416 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 12:55:39.847385 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 12:55:39.847664 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 12:55:39.850087 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 12:55:39.861570 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 12:55:39.969280 sshd_keygen[1527]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 12:55:39.991899 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 12:55:40.003357 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 12:55:40.011267 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 12:55:40.011663 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 12:55:40.015718 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 12:55:40.036318 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 12:55:40.039226 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 12:55:40.041390 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 12:55:40.042792 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 12:55:40.392550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:40.393939 systemd[1]: Reached target multi-user.target - Multi-User System. 
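[Annotation] The "failed to load cni during init" error above is expected on a first boot: containerd's CRI plugin looks for a network config under /etc/cni/net.d (the NetworkPluginConfDir shown in the CRI config dump earlier) and nothing exists there until a network add-on installs one. As a minimal sketch only, a conflist that would satisfy the loader looks like the following, saved as e.g. /etc/cni/net.d/10-mynet.conflist; the name, bridge, and subnet here are illustrative, not anything this host actually uses:

    {
      "cniVersion": "0.4.0",
      "name": "mynet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

Note also NetworkPluginMaxConfNum:1 in the dump: only the lexically first config file in the directory would be used.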
Jan 30 12:55:40.397518 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:55:40.398197 systemd[1]: Startup finished in 5.532s (kernel) + 3.795s (userspace) = 9.328s. Jan 30 12:55:40.954878 kubelet[1640]: E0130 12:55:40.954818 1640 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:55:40.957021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:55:40.957207 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:55:44.154663 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 12:55:44.171312 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:55860.service - OpenSSH per-connection server daemon (10.0.0.1:55860). Jan 30 12:55:44.220869 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 55860 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:44.223168 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:44.238108 systemd-logind[1515]: New session 1 of user core. Jan 30 12:55:44.238938 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 12:55:44.253310 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 12:55:44.265055 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 12:55:44.266952 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 12:55:44.273977 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 12:55:44.371701 systemd[1660]: Queued start job for default target default.target. Jan 30 12:55:44.372098 systemd[1660]: Created slice app.slice - User Application Slice. Jan 30 12:55:44.372122 systemd[1660]: Reached target paths.target - Paths. Jan 30 12:55:44.372133 systemd[1660]: Reached target timers.target - Timers. Jan 30 12:55:44.383171 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 12:55:44.390435 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 12:55:44.390661 systemd[1660]: Reached target sockets.target - Sockets. Jan 30 12:55:44.390680 systemd[1660]: Reached target basic.target - Basic System. Jan 30 12:55:44.390736 systemd[1660]: Reached target default.target - Main User Target. Jan 30 12:55:44.390814 systemd[1660]: Startup finished in 109ms. Jan 30 12:55:44.390952 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 12:55:44.392459 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 12:55:44.453306 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:55890.service - OpenSSH per-connection server daemon (10.0.0.1:55890). Jan 30 12:55:44.499921 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 55890 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:44.501687 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:44.505992 systemd-logind[1515]: New session 2 of user core. Jan 30 12:55:44.519364 systemd[1]: Started session-2.scope - Session 2 of User core. 
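[Annotation] The kubelet exit above (status=1) is the normal pre-bootstrap state: the unit is enabled before kubeadm has written /var/lib/kubelet/config.yaml, so the kubelet fails to load its config and systemd keeps retrying until `kubeadm init` or `kubeadm join` generates the file. The generated file is a KubeletConfiguration object; a minimal hand-written stand-in (header fields are the real schema, the one setting shown matches the "CgroupDriver":"cgroupfs" value the kubelet later reports) would be:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs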
Jan 30 12:55:44.574311 sshd[1672]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:44.581304 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:55892.service - OpenSSH per-connection server daemon (10.0.0.1:55892). Jan 30 12:55:44.581699 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:55890.service: Deactivated successfully. Jan 30 12:55:44.584142 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. Jan 30 12:55:44.584863 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 12:55:44.586554 systemd-logind[1515]: Removed session 2. Jan 30 12:55:44.619991 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 55892 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:44.621497 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:44.626267 systemd-logind[1515]: New session 3 of user core. Jan 30 12:55:44.633340 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 12:55:44.683934 sshd[1677]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:44.691281 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:55912.service - OpenSSH per-connection server daemon (10.0.0.1:55912). Jan 30 12:55:44.691671 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:55892.service: Deactivated successfully. Jan 30 12:55:44.693482 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. Jan 30 12:55:44.694044 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 12:55:44.695495 systemd-logind[1515]: Removed session 3. Jan 30 12:55:44.728868 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 55912 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:44.730464 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:44.736085 systemd-logind[1515]: New session 4 of user core. Jan 30 12:55:44.747335 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 12:55:44.800826 sshd[1685]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:44.808334 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:55922.service - OpenSSH per-connection server daemon (10.0.0.1:55922). Jan 30 12:55:44.808736 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:55912.service: Deactivated successfully. Jan 30 12:55:44.810885 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. Jan 30 12:55:44.811502 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 12:55:44.812875 systemd-logind[1515]: Removed session 4. Jan 30 12:55:44.845646 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 55922 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:44.846974 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:44.852508 systemd-logind[1515]: New session 5 of user core. Jan 30 12:55:44.869342 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 12:55:44.938378 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 12:55:44.941890 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:44.955986 sudo[1700]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:44.960809 sshd[1693]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:44.966319 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:55940.service - OpenSSH per-connection server daemon (10.0.0.1:55940). 
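[Annotation] Each login above spawns a transient sshd@<n>-10.0.0.65:22-10.0.0.1:<port>.service, which is systemd's naming for socket-activated, per-connection daemons. That pattern implies an Accept=yes socket unit roughly of the following shape (Flatcar ships its own unit; this sketch is only the general form, not a dump from this host):

    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

With Accept=yes, systemd accepts each TCP connection itself and instantiates one sshd@.service per client, which is why every session in the log gets its own numbered unit.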
Jan 30 12:55:44.966727 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:55922.service: Deactivated successfully. Jan 30 12:55:44.969070 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. Jan 30 12:55:44.969312 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 12:55:44.971319 systemd-logind[1515]: Removed session 5. Jan 30 12:55:45.005202 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 55940 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:45.006549 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:45.010872 systemd-logind[1515]: New session 6 of user core. Jan 30 12:55:45.020444 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 12:55:45.072946 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 12:55:45.073263 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:45.076698 sudo[1710]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:45.082940 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 12:55:45.084345 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:45.107296 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 12:55:45.108796 auditctl[1713]: No rules Jan 30 12:55:45.109704 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 12:55:45.109988 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 12:55:45.111851 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:55:45.141495 augenrules[1732]: No rules Jan 30 12:55:45.142850 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:55:45.143840 sudo[1709]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:45.145968 sshd[1702]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:45.160375 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:55950.service - OpenSSH per-connection server daemon (10.0.0.1:55950). Jan 30 12:55:45.160818 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:55940.service: Deactivated successfully. Jan 30 12:55:45.162453 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 12:55:45.163733 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. Jan 30 12:55:45.164747 systemd-logind[1515]: Removed session 6. Jan 30 12:55:45.196164 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 55950 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:55:45.197586 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:45.204182 systemd-logind[1515]: New session 7 of user core. Jan 30 12:55:45.216409 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 12:55:45.268512 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 12:55:45.268820 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:45.596318 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 30 12:55:45.596594 (dockerd)[1763]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 12:55:45.889113 dockerd[1763]: time="2025-01-30T12:55:45.888956421Z" level=info msg="Starting up" Jan 30 12:55:46.184571 dockerd[1763]: time="2025-01-30T12:55:46.184458211Z" level=info msg="Loading containers: start." Jan 30 12:55:46.291056 kernel: Initializing XFRM netlink socket Jan 30 12:55:46.369947 systemd-networkd[1231]: docker0: Link UP Jan 30 12:55:46.393557 dockerd[1763]: time="2025-01-30T12:55:46.393439313Z" level=info msg="Loading containers: done." Jan 30 12:55:46.413408 dockerd[1763]: time="2025-01-30T12:55:46.413334743Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 12:55:46.413587 dockerd[1763]: time="2025-01-30T12:55:46.413485378Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 12:55:46.413614 dockerd[1763]: time="2025-01-30T12:55:46.413604244Z" level=info msg="Daemon has completed initialization" Jan 30 12:55:46.455354 dockerd[1763]: time="2025-01-30T12:55:46.455101736Z" level=info msg="API listen on /run/docker.sock" Jan 30 12:55:46.455407 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 12:55:47.192915 containerd[1534]: time="2025-01-30T12:55:47.192866350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 12:55:48.044047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2639499557.mount: Deactivated successfully. 
Jan 30 12:55:49.249867 containerd[1534]: time="2025-01-30T12:55:49.249802232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:49.251313 containerd[1534]: time="2025-01-30T12:55:49.251274740Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 30 12:55:49.252479 containerd[1534]: time="2025-01-30T12:55:49.252443915Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:49.257086 containerd[1534]: time="2025-01-30T12:55:49.256842213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:49.257752 containerd[1534]: time="2025-01-30T12:55:49.257637968Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.064725223s" Jan 30 12:55:49.257752 containerd[1534]: time="2025-01-30T12:55:49.257677173Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 12:55:49.277512 containerd[1534]: time="2025-01-30T12:55:49.277465848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 12:55:50.549073 containerd[1534]: time="2025-01-30T12:55:50.548949485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.550300 containerd[1534]: time="2025-01-30T12:55:50.550264558Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 30 12:55:50.551640 containerd[1534]: time="2025-01-30T12:55:50.551594078Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.556748 containerd[1534]: time="2025-01-30T12:55:50.555063186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.556748 containerd[1534]: time="2025-01-30T12:55:50.556223960Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.278719532s" Jan 30 12:55:50.556748 containerd[1534]: time="2025-01-30T12:55:50.556257217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 12:55:50.578541 
containerd[1534]: time="2025-01-30T12:55:50.578501274Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 12:55:51.187051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 12:55:51.196950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:51.308168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:51.312412 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:55:51.355394 kubelet[2002]: E0130 12:55:51.355291 2002 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:55:51.358428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:55:51.358609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:55:51.647284 containerd[1534]: time="2025-01-30T12:55:51.646862678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:51.649511 containerd[1534]: time="2025-01-30T12:55:51.648507739Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 30 12:55:51.650869 containerd[1534]: time="2025-01-30T12:55:51.650820700Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:51.658062 containerd[1534]: time="2025-01-30T12:55:51.658010605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:51.659537 containerd[1534]: time="2025-01-30T12:55:51.659481868Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.080922032s" Jan 30 12:55:51.659584 containerd[1534]: time="2025-01-30T12:55:51.659539509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 12:55:51.680106 containerd[1534]: time="2025-01-30T12:55:51.680060560Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 12:55:52.656454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150830975.mount: Deactivated successfully. 
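[Annotation] The roughly ten-second cadence between kubelet failures (12:55:40 → 12:55:51 → 12:56:01) and the "Scheduled restart job" counter messages are consistent with the restart policy the standard kubeadm-style kubelet unit ships, approximately:

    [Service]
    Restart=always
    RestartSec=10

This is an inference from the timestamps, not a dump of the unit on this host. The crash loop is harmless and continues until /var/lib/kubelet/config.yaml appears.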
Jan 30 12:55:52.996577 containerd[1534]: time="2025-01-30T12:55:52.996504482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:52.997649 containerd[1534]: time="2025-01-30T12:55:52.997595245Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 30 12:55:52.998838 containerd[1534]: time="2025-01-30T12:55:52.998797177Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:53.001856 containerd[1534]: time="2025-01-30T12:55:53.001812414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:53.002938 containerd[1534]: time="2025-01-30T12:55:53.002887410Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.322781986s" Jan 30 12:55:53.002938 containerd[1534]: time="2025-01-30T12:55:53.002929508Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 12:55:53.021830 containerd[1534]: time="2025-01-30T12:55:53.021793207Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 12:55:53.540231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706148790.mount: Deactivated successfully. 
Jan 30 12:55:54.209797 containerd[1534]: time="2025-01-30T12:55:54.209731509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.210291 containerd[1534]: time="2025-01-30T12:55:54.210251943Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 30 12:55:54.211227 containerd[1534]: time="2025-01-30T12:55:54.211200239Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.214353 containerd[1534]: time="2025-01-30T12:55:54.214299947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.215537 containerd[1534]: time="2025-01-30T12:55:54.215494783Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.193658557s" Jan 30 12:55:54.215581 containerd[1534]: time="2025-01-30T12:55:54.215537267Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 12:55:54.235326 containerd[1534]: time="2025-01-30T12:55:54.235290220Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 12:55:54.659869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930402729.mount: Deactivated successfully. 
Jan 30 12:55:54.674055 containerd[1534]: time="2025-01-30T12:55:54.673972790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.676241 containerd[1534]: time="2025-01-30T12:55:54.676193562Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 30 12:55:54.677148 containerd[1534]: time="2025-01-30T12:55:54.677111757Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.679439 containerd[1534]: time="2025-01-30T12:55:54.679394881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.680397 containerd[1534]: time="2025-01-30T12:55:54.680356794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 444.877647ms" Jan 30 12:55:54.680437 containerd[1534]: time="2025-01-30T12:55:54.680393305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 12:55:54.700563 containerd[1534]: time="2025-01-30T12:55:54.700519174Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 12:55:55.274576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235254993.mount: Deactivated successfully. Jan 30 12:55:56.604723 containerd[1534]: time="2025-01-30T12:55:56.604661527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:56.606032 containerd[1534]: time="2025-01-30T12:55:56.605981260Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 30 12:55:56.607521 containerd[1534]: time="2025-01-30T12:55:56.607486139Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:56.610629 containerd[1534]: time="2025-01-30T12:55:56.610594149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:56.612051 containerd[1534]: time="2025-01-30T12:55:56.611973233Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.911406786s" Jan 30 12:55:56.612051 containerd[1534]: time="2025-01-30T12:55:56.612016639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 12:56:01.437077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
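[Annotation] Two pause images are now in play: the CRI config dump earlier pins SandboxImage to registry.k8s.io/pause:3.8 (and sandboxes are later created from 3.8), while the preflight pull above fetched pause:3.9. The mismatch is benign, but it means the 3.9 copy sits unused in the image store. If alignment were wanted, it is a one-line change in containerd's config.toml, sketched here against the section layout this containerd (1.7.x) uses, followed by a containerd restart:

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"

Whether to pin 3.8 or 3.9 is a cluster policy choice; the log itself only records the disagreement.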
Jan 30 12:56:01.446257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:56:01.651453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:01.653773 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:56:01.654824 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 12:56:01.655098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:01.669212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:56:01.681582 systemd[1]: Reloading requested from client PID 2238 ('systemctl') (unit session-7.scope)... Jan 30 12:56:01.681602 systemd[1]: Reloading... Jan 30 12:56:01.749266 zram_generator::config[2273]: No configuration found. Jan 30 12:56:01.938907 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:56:01.991674 systemd[1]: Reloading finished in 309 ms. Jan 30 12:56:02.038448 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 12:56:02.038512 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 12:56:02.038776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:02.041545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:56:02.139374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:02.144279 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:56:02.196094 kubelet[2335]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:56:02.196094 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 12:56:02.196094 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
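[Annotation] All three deprecation warnings above point the same way: flag values should migrate into the file passed via --config. Assuming the v1beta1 KubeletConfiguration schema (field names hedged accordingly; the values shown are the ones this log itself reports, i.e. the containerd socket from the CRI dump and the flexvolume path the kubelet recreates below), the config-file equivalents would be roughly:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file equivalent; as the second warning says, newer kubelets take the sandbox image from the CRI instead.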
Jan 30 12:56:02.196920 kubelet[2335]: I0130 12:56:02.196690 2335 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:56:03.306157 kubelet[2335]: I0130 12:56:03.306108 2335 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 12:56:03.306157 kubelet[2335]: I0130 12:56:03.306141 2335 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:56:03.306531 kubelet[2335]: I0130 12:56:03.306343 2335 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 12:56:03.347703 kubelet[2335]: E0130 12:56:03.346878 2335 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:03.347851 kubelet[2335]: I0130 12:56:03.347832 2335 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:56:03.360186 kubelet[2335]: I0130 12:56:03.360144 2335 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 12:56:03.360833 kubelet[2335]: I0130 12:56:03.360786 2335 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:56:03.361015 kubelet[2335]: I0130 12:56:03.360824 2335 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 12:56:03.361113 kubelet[2335]: I0130 12:56:03.361101 2335 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:56:03.361113 kubelet[2335]: I0130 12:56:03.361111 2335 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 12:56:03.361411 kubelet[2335]: I0130 12:56:03.361382 2335 state_mem.go:36] "Initialized new in-memory state store" Jan 30 
12:56:03.363059 kubelet[2335]: I0130 12:56:03.362964 2335 kubelet.go:400] "Attempting to sync node with API server" Jan 30 12:56:03.363059 kubelet[2335]: I0130 12:56:03.362993 2335 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:56:03.364064 kubelet[2335]: I0130 12:56:03.363320 2335 kubelet.go:312] "Adding apiserver pod source" Jan 30 12:56:03.364064 kubelet[2335]: I0130 12:56:03.363584 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:56:03.365210 kubelet[2335]: W0130 12:56:03.365095 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:03.365210 kubelet[2335]: E0130 12:56:03.365147 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:03.366500 kubelet[2335]: I0130 12:56:03.365449 2335 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 12:56:03.366500 kubelet[2335]: W0130 12:56:03.365778 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:03.366500 kubelet[2335]: I0130 12:56:03.365821 2335 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:56:03.366500 kubelet[2335]: E0130 12:56:03.365824 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:03.366500 kubelet[2335]: W0130 12:56:03.365926 2335 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 12:56:03.367081 kubelet[2335]: I0130 12:56:03.366854 2335 server.go:1264] "Started kubelet" Jan 30 12:56:03.368434 kubelet[2335]: I0130 12:56:03.368397 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:56:03.370830 kubelet[2335]: I0130 12:56:03.370773 2335 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:56:03.371672 kubelet[2335]: I0130 12:56:03.371647 2335 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 12:56:03.371756 kubelet[2335]: I0130 12:56:03.371734 2335 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:56:03.371817 kubelet[2335]: I0130 12:56:03.371801 2335 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:56:03.372354 kubelet[2335]: W0130 12:56:03.372303 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:03.372493 kubelet[2335]: E0130 12:56:03.372362 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:03.372799 kubelet[2335]: I0130 12:56:03.372536 2335 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:56:03.372854 kubelet[2335]: I0130 12:56:03.372822 2335 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:56:03.374129 kubelet[2335]: I0130 12:56:03.374104 2335 server.go:455] "Adding debug handlers to kubelet server" Jan 30 12:56:03.375641 kubelet[2335]: E0130 12:56:03.369903 2335 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f79ac973a65b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 12:56:03.366823352 +0000 UTC m=+1.219334155,LastTimestamp:2025-01-30 12:56:03.366823352 +0000 UTC m=+1.219334155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 12:56:03.377047 kubelet[2335]: E0130 12:56:03.376993 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms" Jan 30 12:56:03.381072 kubelet[2335]: I0130 12:56:03.377214 2335 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:56:03.381072 kubelet[2335]: I0130 12:56:03.377327 2335 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:56:03.382853 kubelet[2335]: I0130 12:56:03.381460 2335 factory.go:221] Registration of the containerd container 
factory successfully Jan 30 12:56:03.383492 kubelet[2335]: E0130 12:56:03.383467 2335 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:56:03.399667 kubelet[2335]: I0130 12:56:03.399524 2335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:56:03.400695 kubelet[2335]: I0130 12:56:03.400675 2335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 12:56:03.401116 kubelet[2335]: I0130 12:56:03.401105 2335 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 12:56:03.401672 kubelet[2335]: I0130 12:56:03.401314 2335 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 12:56:03.401672 kubelet[2335]: E0130 12:56:03.401359 2335 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:56:03.402338 kubelet[2335]: W0130 12:56:03.402251 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:03.402338 kubelet[2335]: E0130 12:56:03.402296 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:03.403967 kubelet[2335]: I0130 12:56:03.403938 2335 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 12:56:03.403967 kubelet[2335]: I0130 12:56:03.403960 2335 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 12:56:03.404110 kubelet[2335]: I0130 12:56:03.403981 2335 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:03.472305 kubelet[2335]: I0130 12:56:03.472261 2335 policy_none.go:49] "None policy: Start" Jan 30 12:56:03.473358 kubelet[2335]: I0130 12:56:03.473131 2335 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 12:56:03.473358 kubelet[2335]: I0130 12:56:03.473155 2335 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:56:03.473956 kubelet[2335]: I0130 12:56:03.473935 2335 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:56:03.474353 kubelet[2335]: E0130 12:56:03.474322 2335 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jan 30 12:56:03.477252 kubelet[2335]: I0130 12:56:03.477216 2335 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:56:03.477604 kubelet[2335]: I0130 12:56:03.477445 2335 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:56:03.477604 kubelet[2335]: I0130 12:56:03.477596 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:56:03.479981 kubelet[2335]: E0130 12:56:03.479655 2335 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 12:56:03.502798 kubelet[2335]: I0130 12:56:03.502735 2335 topology_manager.go:215] "Topology Admit Handler" 
podUID="2ac6ae3101da5dcf4e729526258a5b88" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 12:56:03.505997 kubelet[2335]: I0130 12:56:03.505398 2335 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 12:56:03.508343 kubelet[2335]: I0130 12:56:03.507866 2335 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 12:56:03.577838 kubelet[2335]: E0130 12:56:03.577703 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms" Jan 30 12:56:03.673275 kubelet[2335]: I0130 12:56:03.673151 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:03.673275 kubelet[2335]: I0130 12:56:03.673205 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ac6ae3101da5dcf4e729526258a5b88-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ac6ae3101da5dcf4e729526258a5b88\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:03.673275 kubelet[2335]: I0130 12:56:03.673228 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ac6ae3101da5dcf4e729526258a5b88-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ac6ae3101da5dcf4e729526258a5b88\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:03.673275 kubelet[2335]: I0130 12:56:03.673261 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ac6ae3101da5dcf4e729526258a5b88-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ac6ae3101da5dcf4e729526258a5b88\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:03.673663 kubelet[2335]: I0130 12:56:03.673307 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:03.673663 kubelet[2335]: I0130 12:56:03.673353 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:03.673663 kubelet[2335]: I0130 12:56:03.673388 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:03.673663 kubelet[2335]: I0130 12:56:03.673415 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:03.673663 kubelet[2335]: I0130 12:56:03.673432 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 12:56:03.676536 kubelet[2335]: I0130 12:56:03.676153 2335 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:56:03.676536 kubelet[2335]: E0130 12:56:03.676492 2335 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jan 30 12:56:03.809730 kubelet[2335]: E0130 12:56:03.809685 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:03.810352 containerd[1534]: time="2025-01-30T12:56:03.810309299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ac6ae3101da5dcf4e729526258a5b88,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:03.812557 kubelet[2335]: E0130 12:56:03.812531 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:03.812624 kubelet[2335]: E0130 12:56:03.812589 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:03.813011 containerd[1534]: time="2025-01-30T12:56:03.812968574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:03.813158 containerd[1534]: time="2025-01-30T12:56:03.812968894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:03.978691 kubelet[2335]: E0130 12:56:03.978638 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms" Jan 30 12:56:04.078114 kubelet[2335]: I0130 12:56:04.078087 2335 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:56:04.078436 kubelet[2335]: E0130 12:56:04.078408 2335 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jan 30 12:56:04.246474 kubelet[2335]: W0130 12:56:04.246318 2335 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:04.246474 kubelet[2335]: E0130 12:56:04.246372 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:04.339636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843215329.mount: Deactivated successfully. Jan 30 12:56:04.347544 containerd[1534]: time="2025-01-30T12:56:04.346970810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:04.350272 containerd[1534]: time="2025-01-30T12:56:04.350232616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 12:56:04.352537 containerd[1534]: time="2025-01-30T12:56:04.352498670Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:04.354433 containerd[1534]: time="2025-01-30T12:56:04.354127815Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:04.357817 containerd[1534]: time="2025-01-30T12:56:04.357776072Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:04.361293 containerd[1534]: time="2025-01-30T12:56:04.361206674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:56:04.362119 containerd[1534]: time="2025-01-30T12:56:04.362090922Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:56:04.364176 containerd[1534]: time="2025-01-30T12:56:04.364116626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:04.366063 containerd[1534]: time="2025-01-30T12:56:04.365790397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.683892ms" Jan 30 12:56:04.366262 containerd[1534]: time="2025-01-30T12:56:04.366231862Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.82997ms" Jan 30 12:56:04.368153 containerd[1534]: 
time="2025-01-30T12:56:04.368117137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.04854ms" Jan 30 12:56:04.475508 kubelet[2335]: W0130 12:56:04.471902 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:04.475508 kubelet[2335]: E0130 12:56:04.471974 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:04.549342 containerd[1534]: time="2025-01-30T12:56:04.549156164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:04.549342 containerd[1534]: time="2025-01-30T12:56:04.549206742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:04.549342 containerd[1534]: time="2025-01-30T12:56:04.549217609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.549558 containerd[1534]: time="2025-01-30T12:56:04.549303905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:04.549558 containerd[1534]: time="2025-01-30T12:56:04.549354044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:04.549558 containerd[1534]: time="2025-01-30T12:56:04.549380292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.549558 containerd[1534]: time="2025-01-30T12:56:04.549470543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.549558 containerd[1534]: time="2025-01-30T12:56:04.549298072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.553894 containerd[1534]: time="2025-01-30T12:56:04.553795380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:04.554064 containerd[1534]: time="2025-01-30T12:56:04.553865255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:04.554064 containerd[1534]: time="2025-01-30T12:56:04.553878319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.554176 containerd[1534]: time="2025-01-30T12:56:04.553987587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.601134 containerd[1534]: time="2025-01-30T12:56:04.599445004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c600a067b633dd46962224b857fd8d67baf763b7f8e5c64af3144c5a373fe1f\"" Jan 30 12:56:04.603959 kubelet[2335]: E0130 12:56:04.603274 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:04.606386 containerd[1534]: time="2025-01-30T12:56:04.606345200Z" level=info msg="CreateContainer within sandbox \"2c600a067b633dd46962224b857fd8d67baf763b7f8e5c64af3144c5a373fe1f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 12:56:04.608097 containerd[1534]: time="2025-01-30T12:56:04.608054088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"56aab3a36260cb3888d1ef3fe592f9bb131e061f372f24ee92f6b5517004cfa5\"" Jan 30 12:56:04.608837 containerd[1534]: time="2025-01-30T12:56:04.608811610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ac6ae3101da5dcf4e729526258a5b88,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f2aae92a712afbccb8744751f4142c833b3f787193994b37b8afbc95616e968\"" Jan 30 12:56:04.608928 kubelet[2335]: E0130 12:56:04.608828 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:04.609409 kubelet[2335]: E0130 12:56:04.609374 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:04.611685 containerd[1534]: time="2025-01-30T12:56:04.611646134Z" level=info msg="CreateContainer within sandbox \"56aab3a36260cb3888d1ef3fe592f9bb131e061f372f24ee92f6b5517004cfa5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 12:56:04.611811 containerd[1534]: time="2025-01-30T12:56:04.611783488Z" level=info msg="CreateContainer within sandbox \"8f2aae92a712afbccb8744751f4142c833b3f787193994b37b8afbc95616e968\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 12:56:04.636468 containerd[1534]: time="2025-01-30T12:56:04.636410954Z" level=info msg="CreateContainer within sandbox \"8f2aae92a712afbccb8744751f4142c833b3f787193994b37b8afbc95616e968\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bfabb4f84a6b13986a3789585d058ec8f2ca77d01bcc4ea0bd8f5ae3da99e106\"" Jan 30 12:56:04.637206 containerd[1534]: time="2025-01-30T12:56:04.637175747Z" level=info msg="StartContainer for \"bfabb4f84a6b13986a3789585d058ec8f2ca77d01bcc4ea0bd8f5ae3da99e106\"" Jan 30 12:56:04.640657 containerd[1534]: time="2025-01-30T12:56:04.640616217Z" level=info msg="CreateContainer within sandbox \"56aab3a36260cb3888d1ef3fe592f9bb131e061f372f24ee92f6b5517004cfa5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"28fd02475443e92282f1cc8f1d8eed39efadfd58b88739f2f5129539658599a5\"" Jan 30 12:56:04.640733 containerd[1534]: time="2025-01-30T12:56:04.640699316Z" level=info msg="CreateContainer within sandbox 
\"2c600a067b633dd46962224b857fd8d67baf763b7f8e5c64af3144c5a373fe1f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf733d7ab30095be014f5326f7263019c2cfd9c4c82e849ce857428986c9e0c6\"" Jan 30 12:56:04.641149 containerd[1534]: time="2025-01-30T12:56:04.641111456Z" level=info msg="StartContainer for \"28fd02475443e92282f1cc8f1d8eed39efadfd58b88739f2f5129539658599a5\"" Jan 30 12:56:04.641269 containerd[1534]: time="2025-01-30T12:56:04.641119087Z" level=info msg="StartContainer for \"cf733d7ab30095be014f5326f7263019c2cfd9c4c82e849ce857428986c9e0c6\"" Jan 30 12:56:04.715216 kubelet[2335]: W0130 12:56:04.715123 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:04.715216 kubelet[2335]: E0130 12:56:04.715190 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:04.738859 containerd[1534]: time="2025-01-30T12:56:04.738812025Z" level=info msg="StartContainer for \"cf733d7ab30095be014f5326f7263019c2cfd9c4c82e849ce857428986c9e0c6\" returns successfully" Jan 30 12:56:04.738973 containerd[1534]: time="2025-01-30T12:56:04.738950937Z" level=info msg="StartContainer for \"28fd02475443e92282f1cc8f1d8eed39efadfd58b88739f2f5129539658599a5\" returns successfully" Jan 30 12:56:04.739000 containerd[1534]: time="2025-01-30T12:56:04.738984536Z" level=info msg="StartContainer for \"bfabb4f84a6b13986a3789585d058ec8f2ca77d01bcc4ea0bd8f5ae3da99e106\" returns successfully" Jan 30 12:56:04.780067 kubelet[2335]: E0130 12:56:04.779641 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s" Jan 30 12:56:04.857900 kubelet[2335]: W0130 12:56:04.857702 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:04.857900 kubelet[2335]: E0130 12:56:04.857784 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:56:04.881627 kubelet[2335]: I0130 12:56:04.881590 2335 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:56:04.882114 kubelet[2335]: E0130 12:56:04.882005 2335 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jan 30 12:56:05.411531 kubelet[2335]: E0130 12:56:05.411341 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:05.414586 kubelet[2335]: E0130 12:56:05.414555 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:05.414878 kubelet[2335]: E0130 12:56:05.414802 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:06.384323 kubelet[2335]: E0130 12:56:06.384279 2335 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 12:56:06.416855 kubelet[2335]: E0130 12:56:06.416825 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:06.483715 kubelet[2335]: I0130 12:56:06.483666 2335 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:56:06.486156 kubelet[2335]: E0130 12:56:06.486127 2335 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 30 12:56:06.502088 kubelet[2335]: I0130 12:56:06.502043 2335 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 12:56:06.515423 kubelet[2335]: E0130 12:56:06.515366 2335 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 12:56:06.615956 kubelet[2335]: E0130 12:56:06.615922 2335 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 12:56:06.717060 kubelet[2335]: E0130 12:56:06.717005 2335 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 12:56:06.817977 kubelet[2335]: E0130 12:56:06.817925 2335 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 12:56:06.918805 kubelet[2335]: E0130 12:56:06.918756 2335 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 12:56:07.366636 kubelet[2335]: I0130 12:56:07.366598 2335 apiserver.go:52] "Watching apiserver" Jan 30 12:56:07.372260 kubelet[2335]: I0130 12:56:07.372201 2335 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:56:08.631426 systemd[1]: Reloading requested from client PID 2616 ('systemctl') (unit session-7.scope)... Jan 30 12:56:08.631441 systemd[1]: Reloading... Jan 30 12:56:08.696144 zram_generator::config[2655]: No configuration found. Jan 30 12:56:08.861449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:56:08.918446 systemd[1]: Reloading finished in 286 ms. Jan 30 12:56:08.950875 kubelet[2335]: I0130 12:56:08.950832 2335 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:56:08.951356 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:56:08.967598 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 12:56:08.968007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:08.978315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 12:56:09.075603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:09.082454 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:56:09.142354 kubelet[2707]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:56:09.142354 kubelet[2707]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 12:56:09.142354 kubelet[2707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:56:09.142714 kubelet[2707]: I0130 12:56:09.142381 2707 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:56:09.146753 kubelet[2707]: I0130 12:56:09.146551 2707 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 12:56:09.146753 kubelet[2707]: I0130 12:56:09.146580 2707 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:56:09.147005 kubelet[2707]: I0130 12:56:09.146992 2707 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 12:56:09.148482 kubelet[2707]: I0130 12:56:09.148454 2707 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 12:56:09.150008 kubelet[2707]: I0130 12:56:09.149985 2707 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:56:09.155737 kubelet[2707]: I0130 12:56:09.155675 2707 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 12:56:09.156361 kubelet[2707]: I0130 12:56:09.156323 2707 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:56:09.156748 kubelet[2707]: I0130 12:56:09.156425 2707 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 12:56:09.156748 kubelet[2707]: I0130 12:56:09.156604 2707 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:56:09.156748 kubelet[2707]: I0130 12:56:09.156614 2707 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 12:56:09.156748 kubelet[2707]: I0130 12:56:09.156648 2707 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:09.157480 kubelet[2707]: I0130 12:56:09.157454 2707 kubelet.go:400] "Attempting to sync node with API server" Jan 30 12:56:09.157593 kubelet[2707]: I0130 12:56:09.157571 2707 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:56:09.157676 kubelet[2707]: I0130 12:56:09.157668 2707 kubelet.go:312] "Adding apiserver pod source" Jan 30 12:56:09.157934 kubelet[2707]: I0130 12:56:09.157917 2707 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:56:09.159336 kubelet[2707]: I0130 12:56:09.159280 2707 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 12:56:09.159517 kubelet[2707]: I0130 12:56:09.159499 2707 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:56:09.160497 kubelet[2707]: I0130 12:56:09.160461 2707 server.go:1264] "Started kubelet" Jan 30 12:56:09.167366 kubelet[2707]: I0130 12:56:09.167328 2707 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:56:09.169392 kubelet[2707]: E0130 12:56:09.168285 2707 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:56:09.170188 kubelet[2707]: I0130 12:56:09.169540 2707 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:56:09.171006 kubelet[2707]: I0130 12:56:09.170836 2707 server.go:455] "Adding debug handlers to kubelet server" Jan 30 12:56:09.172140 kubelet[2707]: I0130 12:56:09.172107 2707 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 12:56:09.172302 kubelet[2707]: I0130 12:56:09.172288 2707 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:56:09.172856 kubelet[2707]: I0130 12:56:09.172448 2707 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:56:09.172908 kubelet[2707]: I0130 12:56:09.172820 2707 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:56:09.174065 kubelet[2707]: I0130 12:56:09.173215 2707 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:56:09.176987 kubelet[2707]: I0130 12:56:09.176953 2707 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:56:09.179078 kubelet[2707]: I0130 12:56:09.178692 2707 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:56:09.181377 kubelet[2707]: I0130 12:56:09.180508 2707 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:56:09.190108 kubelet[2707]: I0130 12:56:09.189454 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:56:09.190879 kubelet[2707]: I0130 12:56:09.190849 2707 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 12:56:09.190941 kubelet[2707]: I0130 12:56:09.190895 2707 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 12:56:09.190941 kubelet[2707]: I0130 12:56:09.190915 2707 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 12:56:09.190985 kubelet[2707]: E0130 12:56:09.190965 2707 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:56:09.225900 kubelet[2707]: I0130 12:56:09.225845 2707 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 12:56:09.225900 kubelet[2707]: I0130 12:56:09.225869 2707 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 12:56:09.225900 kubelet[2707]: I0130 12:56:09.225890 2707 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:09.226112 kubelet[2707]: I0130 12:56:09.226076 2707 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 12:56:09.226112 kubelet[2707]: I0130 12:56:09.226089 2707 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 12:56:09.226112 kubelet[2707]: I0130 12:56:09.226107 2707 policy_none.go:49] "None policy: Start" Jan 30 12:56:09.226977 kubelet[2707]: I0130 12:56:09.226705 2707 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 12:56:09.226977 kubelet[2707]: I0130 12:56:09.226741 2707 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:56:09.226977 kubelet[2707]: I0130 12:56:09.226907 2707 state_mem.go:75] "Updated machine memory state" Jan 30 12:56:09.228070 kubelet[2707]: I0130 12:56:09.228043 2707 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:56:09.229444 kubelet[2707]: I0130 12:56:09.228218 2707 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:56:09.229444 kubelet[2707]: I0130 12:56:09.228319 2707 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:56:09.276775 kubelet[2707]: I0130 12:56:09.276740 2707 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:56:09.283951 kubelet[2707]: I0130 12:56:09.283908 2707 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 12:56:09.284106 kubelet[2707]: I0130 12:56:09.284043 2707 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 12:56:09.291881 kubelet[2707]: I0130 12:56:09.291842 2707 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 12:56:09.292337 kubelet[2707]: I0130 12:56:09.292317 2707 topology_manager.go:215] "Topology Admit Handler" podUID="2ac6ae3101da5dcf4e729526258a5b88" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 12:56:09.292764 kubelet[2707]: I0130 12:56:09.292689 2707 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 12:56:09.373514 kubelet[2707]: I0130 12:56:09.373462 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ac6ae3101da5dcf4e729526258a5b88-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ac6ae3101da5dcf4e729526258a5b88\") " pod="kube-system/kube-apiserver-localhost" 
Jan 30 12:56:09.373514 kubelet[2707]: I0130 12:56:09.373507 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:09.373660 kubelet[2707]: I0130 12:56:09.373533 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:09.373660 kubelet[2707]: I0130 12:56:09.373549 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:09.373660 kubelet[2707]: I0130 12:56:09.373566 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 12:56:09.373660 kubelet[2707]: I0130 12:56:09.373580 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ac6ae3101da5dcf4e729526258a5b88-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ac6ae3101da5dcf4e729526258a5b88\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:09.373660 kubelet[2707]: I0130 12:56:09.373593 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ac6ae3101da5dcf4e729526258a5b88-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ac6ae3101da5dcf4e729526258a5b88\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:09.373789 kubelet[2707]: I0130 12:56:09.373608 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:09.373789 kubelet[2707]: I0130 12:56:09.373649 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:56:09.631451 kubelet[2707]: E0130 12:56:09.630081 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:09.631451 kubelet[2707]: E0130 12:56:09.630518 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:09.631451 kubelet[2707]: E0130 12:56:09.630780 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:10.159475 kubelet[2707]: I0130 12:56:10.158644 2707 apiserver.go:52] "Watching apiserver" Jan 30 12:56:10.173064 kubelet[2707]: I0130 12:56:10.172633 2707 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:56:10.205713 kubelet[2707]: E0130 12:56:10.205466 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:10.220970 kubelet[2707]: E0130 12:56:10.220906 2707 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 12:56:10.221906 kubelet[2707]: E0130 12:56:10.221152 2707 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 12:56:10.221906 kubelet[2707]: E0130 12:56:10.221317 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:10.221906 kubelet[2707]: E0130 12:56:10.221740 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:10.253862 kubelet[2707]: I0130 12:56:10.253570 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.253513334 podStartE2EDuration="1.253513334s" podCreationTimestamp="2025-01-30 12:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:10.253320399 +0000 UTC m=+1.167230560" watchObservedRunningTime="2025-01-30 12:56:10.253513334 +0000 UTC m=+1.167423495" Jan 30 12:56:10.264185 kubelet[2707]: I0130 12:56:10.263772 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2637563250000001 podStartE2EDuration="1.263756325s" podCreationTimestamp="2025-01-30 12:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:10.263313686 +0000 UTC m=+1.177223847" watchObservedRunningTime="2025-01-30 12:56:10.263756325 +0000 UTC m=+1.177666446" Jan 30 12:56:10.277071 kubelet[2707]: I0130 12:56:10.276400 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.27638466 podStartE2EDuration="1.27638466s" podCreationTimestamp="2025-01-30 12:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:10.276090739 +0000 UTC m=+1.190000900" watchObservedRunningTime="2025-01-30 12:56:10.27638466 +0000 UTC m=+1.190294820" Jan 30 12:56:11.216569 kubelet[2707]: E0130 12:56:11.215206 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:11.219188 kubelet[2707]: E0130 12:56:11.219136 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:12.210927 kubelet[2707]: E0130 12:56:12.210714 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:13.899170 sudo[1745]: pam_unix(sudo:session): session closed for user root Jan 30 12:56:13.901505 sshd[1738]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:13.905656 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:55950.service: Deactivated successfully. Jan 30 12:56:13.907820 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 12:56:13.910008 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Jan 30 12:56:13.911124 systemd-logind[1515]: Removed session 7. Jan 30 12:56:18.095569 kubelet[2707]: E0130 12:56:18.095154 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:18.231901 kubelet[2707]: E0130 12:56:18.231848 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:19.318951 kubelet[2707]: E0130 12:56:19.318521 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:20.235256 kubelet[2707]: E0130 12:56:20.234872 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:20.809875 kubelet[2707]: E0130 12:56:20.809416 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:23.270989 kubelet[2707]: I0130 12:56:23.270934 2707 topology_manager.go:215] "Topology Admit Handler" podUID="21baeb0a-d4b1-4659-9c74-0b9e1b866e1b" podNamespace="kube-system" podName="kube-proxy-49kv7" Jan 30 12:56:23.297501 kubelet[2707]: I0130 12:56:23.297147 2707 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 12:56:23.338357 containerd[1534]: time="2025-01-30T12:56:23.338292807Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
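[Editor's note, not part of the log] "Updating runtime config through cri with podcidr" above is the kubelet handing the node's pod CIDR (192.168.0.0/24) to the container runtime over CRI; containerd then waits for a CNI config that allocates pod IPs from that range, which is why it logs that no CNI config template is present yet. A small sketch of what that range provides, standard library only; the enumeration is illustrative, not how any component actually assigns IPs:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.0.0/24") // podCIDR from the log

	fmt.Println("network:", prefix.Masked().Addr(), "prefix bits:", prefix.Bits())

	// First few candidate pod addresses, skipping the network address itself.
	addr := prefix.Addr().Next()
	for i := 0; i < 3; i++ {
		fmt.Println("pod IP candidate:", addr)
		addr = addr.Next()
	}
	fmt.Println("contains 192.168.0.200:", prefix.Contains(netip.MustParseAddr("192.168.0.200")))
}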
Jan 30 12:56:23.340504 kubelet[2707]: I0130 12:56:23.338979 2707 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 12:56:23.366525 kubelet[2707]: I0130 12:56:23.366479 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21baeb0a-d4b1-4659-9c74-0b9e1b866e1b-xtables-lock\") pod \"kube-proxy-49kv7\" (UID: \"21baeb0a-d4b1-4659-9c74-0b9e1b866e1b\") " pod="kube-system/kube-proxy-49kv7" Jan 30 12:56:23.366754 kubelet[2707]: I0130 12:56:23.366739 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/21baeb0a-d4b1-4659-9c74-0b9e1b866e1b-kube-proxy\") pod \"kube-proxy-49kv7\" (UID: \"21baeb0a-d4b1-4659-9c74-0b9e1b866e1b\") " pod="kube-system/kube-proxy-49kv7" Jan 30 12:56:23.366836 kubelet[2707]: I0130 12:56:23.366824 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21baeb0a-d4b1-4659-9c74-0b9e1b866e1b-lib-modules\") pod \"kube-proxy-49kv7\" (UID: \"21baeb0a-d4b1-4659-9c74-0b9e1b866e1b\") " pod="kube-system/kube-proxy-49kv7" Jan 30 12:56:23.366901 kubelet[2707]: I0130 12:56:23.366889 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwj25\" (UniqueName: \"kubernetes.io/projected/21baeb0a-d4b1-4659-9c74-0b9e1b866e1b-kube-api-access-hwj25\") pod \"kube-proxy-49kv7\" (UID: \"21baeb0a-d4b1-4659-9c74-0b9e1b866e1b\") " pod="kube-system/kube-proxy-49kv7" Jan 30 12:56:23.523298 kubelet[2707]: I0130 12:56:23.523175 2707 topology_manager.go:215] "Topology Admit Handler" podUID="b8d8a51a-5107-4f3d-8e53-708da093be49" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-m99s4" Jan 30 12:56:23.570123 kubelet[2707]: I0130 12:56:23.570073 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxk76\" (UniqueName: \"kubernetes.io/projected/b8d8a51a-5107-4f3d-8e53-708da093be49-kube-api-access-zxk76\") pod \"tigera-operator-7bc55997bb-m99s4\" (UID: \"b8d8a51a-5107-4f3d-8e53-708da093be49\") " pod="tigera-operator/tigera-operator-7bc55997bb-m99s4" Jan 30 12:56:23.570123 kubelet[2707]: I0130 12:56:23.570121 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b8d8a51a-5107-4f3d-8e53-708da093be49-var-lib-calico\") pod \"tigera-operator-7bc55997bb-m99s4\" (UID: \"b8d8a51a-5107-4f3d-8e53-708da093be49\") " pod="tigera-operator/tigera-operator-7bc55997bb-m99s4" Jan 30 12:56:23.578320 kubelet[2707]: E0130 12:56:23.578282 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:23.579110 containerd[1534]: time="2025-01-30T12:56:23.579073140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49kv7,Uid:21baeb0a-d4b1-4659-9c74-0b9e1b866e1b,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:23.601554 containerd[1534]: time="2025-01-30T12:56:23.600750920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:23.601554 containerd[1534]: time="2025-01-30T12:56:23.601374701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:23.601554 containerd[1534]: time="2025-01-30T12:56:23.601415342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:23.601829 containerd[1534]: time="2025-01-30T12:56:23.601535106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:23.643191 containerd[1534]: time="2025-01-30T12:56:23.643134285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49kv7,Uid:21baeb0a-d4b1-4659-9c74-0b9e1b866e1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc36b323e016aabfc1fc8b40926aa2aa618373a9a4f51445f27f060001d4e1fb\"" Jan 30 12:56:23.646309 kubelet[2707]: E0130 12:56:23.646277 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:23.651981 containerd[1534]: time="2025-01-30T12:56:23.651937346Z" level=info msg="CreateContainer within sandbox \"cc36b323e016aabfc1fc8b40926aa2aa618373a9a4f51445f27f060001d4e1fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 12:56:23.761436 containerd[1534]: time="2025-01-30T12:56:23.761380039Z" level=info msg="CreateContainer within sandbox \"cc36b323e016aabfc1fc8b40926aa2aa618373a9a4f51445f27f060001d4e1fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f6879c11e306cfb6c25b7e5aac130748af66cef3e9ab37c7a028e854a7359a49\"" Jan 30 12:56:23.763637 containerd[1534]: time="2025-01-30T12:56:23.761991659Z" level=info msg="StartContainer for \"f6879c11e306cfb6c25b7e5aac130748af66cef3e9ab37c7a028e854a7359a49\"" Jan 30 12:56:23.823068 containerd[1534]: time="2025-01-30T12:56:23.821707576Z" level=info msg="StartContainer for \"f6879c11e306cfb6c25b7e5aac130748af66cef3e9ab37c7a028e854a7359a49\" returns successfully" Jan 30 12:56:23.835507 containerd[1534]: time="2025-01-30T12:56:23.830596520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-m99s4,Uid:b8d8a51a-5107-4f3d-8e53-708da093be49,Namespace:tigera-operator,Attempt:0,}" Jan 30 12:56:23.862433 containerd[1534]: time="2025-01-30T12:56:23.862255119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:23.863505 containerd[1534]: time="2025-01-30T12:56:23.863292115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:23.863505 containerd[1534]: time="2025-01-30T12:56:23.863318796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:23.863505 containerd[1534]: time="2025-01-30T12:56:23.863430519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:23.923343 containerd[1534]: time="2025-01-30T12:56:23.923277601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-m99s4,Uid:b8d8a51a-5107-4f3d-8e53-708da093be49,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"59172592e187fc6d6006f79444d2f8b54b9da17d6c440d7bba7e7a83a2d6a48c\"" Jan 30 12:56:23.934185 containerd[1534]: time="2025-01-30T12:56:23.934052048Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 12:56:24.257269 kubelet[2707]: E0130 12:56:24.257222 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:24.287412 kubelet[2707]: I0130 12:56:24.287160 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-49kv7" podStartSLOduration=1.287139686 podStartE2EDuration="1.287139686s" podCreationTimestamp="2025-01-30 12:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:24.286789714 +0000 UTC m=+15.200699875" watchObservedRunningTime="2025-01-30 12:56:24.287139686 +0000 UTC m=+15.201049847" Jan 30 12:56:25.075653 update_engine[1518]: I20250130 12:56:25.075438 1518 update_attempter.cc:509] Updating boot flags... Jan 30 12:56:25.115076 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2989) Jan 30 12:56:27.594250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406443589.mount: Deactivated successfully. Jan 30 12:56:27.910615 containerd[1534]: time="2025-01-30T12:56:27.909574690Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:27.915491 containerd[1534]: time="2025-01-30T12:56:27.915433334Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 30 12:56:27.930670 containerd[1534]: time="2025-01-30T12:56:27.930129824Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.996037655s" Jan 30 12:56:27.930670 containerd[1534]: time="2025-01-30T12:56:27.930181466Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 30 12:56:27.931957 containerd[1534]: time="2025-01-30T12:56:27.931910034Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:27.933318 containerd[1534]: time="2025-01-30T12:56:27.933268272Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:27.937457 containerd[1534]: time="2025-01-30T12:56:27.937394747Z" level=info msg="CreateContainer within sandbox \"59172592e187fc6d6006f79444d2f8b54b9da17d6c440d7bba7e7a83a2d6a48c\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 12:56:27.949059 containerd[1534]: time="2025-01-30T12:56:27.948924949Z" level=info msg="CreateContainer within sandbox \"59172592e187fc6d6006f79444d2f8b54b9da17d6c440d7bba7e7a83a2d6a48c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b96ff2c24ef75e9a50b45875df4226a09f4076149d3055e008d499c7f2db6c3e\"" Jan 30 12:56:27.949551 containerd[1534]: time="2025-01-30T12:56:27.949530926Z" level=info msg="StartContainer for \"b96ff2c24ef75e9a50b45875df4226a09f4076149d3055e008d499c7f2db6c3e\"" Jan 30 12:56:28.024610 containerd[1534]: time="2025-01-30T12:56:28.024550830Z" level=info msg="StartContainer for \"b96ff2c24ef75e9a50b45875df4226a09f4076149d3055e008d499c7f2db6c3e\" returns successfully" Jan 30 12:56:28.282444 kubelet[2707]: I0130 12:56:28.282375 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-m99s4" podStartSLOduration=1.27967584 podStartE2EDuration="5.28235645s" podCreationTimestamp="2025-01-30 12:56:23 +0000 UTC" firstStartedPulling="2025-01-30 12:56:23.932389392 +0000 UTC m=+14.846299553" lastFinishedPulling="2025-01-30 12:56:27.935069922 +0000 UTC m=+18.848980163" observedRunningTime="2025-01-30 12:56:28.282118563 +0000 UTC m=+19.196028724" watchObservedRunningTime="2025-01-30 12:56:28.28235645 +0000 UTC m=+19.196266611" Jan 30 12:56:32.044068 kubelet[2707]: I0130 12:56:32.031249 2707 topology_manager.go:215] "Topology Admit Handler" podUID="7fbaedd5-97f4-4401-a593-703c932af286" podNamespace="calico-system" podName="calico-typha-5fffdd9b97-bhdgk" Jan 30 12:56:32.086774 kubelet[2707]: I0130 12:56:32.086724 2707 topology_manager.go:215] "Topology Admit Handler" podUID="0f9b1e42-b8df-4280-a89e-77312805efe6" podNamespace="calico-system" podName="calico-node-9mfhh" Jan 30 12:56:32.124161 kubelet[2707]: I0130 12:56:32.123933 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0f9b1e42-b8df-4280-a89e-77312805efe6-var-run-calico\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124161 kubelet[2707]: I0130 12:56:32.123982 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0f9b1e42-b8df-4280-a89e-77312805efe6-cni-net-dir\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124161 kubelet[2707]: I0130 12:56:32.124006 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7fbaedd5-97f4-4401-a593-703c932af286-typha-certs\") pod \"calico-typha-5fffdd9b97-bhdgk\" (UID: \"7fbaedd5-97f4-4401-a593-703c932af286\") " pod="calico-system/calico-typha-5fffdd9b97-bhdgk" Jan 30 12:56:32.124161 kubelet[2707]: I0130 12:56:32.124039 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0f9b1e42-b8df-4280-a89e-77312805efe6-policysync\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124161 kubelet[2707]: I0130 12:56:32.124059 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f9b1e42-b8df-4280-a89e-77312805efe6-xtables-lock\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124394 kubelet[2707]: I0130 12:56:32.124075 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0f9b1e42-b8df-4280-a89e-77312805efe6-node-certs\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124394 kubelet[2707]: I0130 12:56:32.124091 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0f9b1e42-b8df-4280-a89e-77312805efe6-var-lib-calico\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124394 kubelet[2707]: I0130 12:56:32.124109 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmf6s\" (UniqueName: \"kubernetes.io/projected/7fbaedd5-97f4-4401-a593-703c932af286-kube-api-access-fmf6s\") pod \"calico-typha-5fffdd9b97-bhdgk\" (UID: \"7fbaedd5-97f4-4401-a593-703c932af286\") " pod="calico-system/calico-typha-5fffdd9b97-bhdgk" Jan 30 12:56:32.124394 kubelet[2707]: I0130 12:56:32.124127 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0f9b1e42-b8df-4280-a89e-77312805efe6-cni-log-dir\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124394 kubelet[2707]: I0130 12:56:32.124161 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0f9b1e42-b8df-4280-a89e-77312805efe6-flexvol-driver-host\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124528 kubelet[2707]: I0130 12:56:32.124208 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5nh2\" (UniqueName: \"kubernetes.io/projected/0f9b1e42-b8df-4280-a89e-77312805efe6-kube-api-access-g5nh2\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124528 kubelet[2707]: I0130 12:56:32.124232 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f9b1e42-b8df-4280-a89e-77312805efe6-lib-modules\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124528 kubelet[2707]: I0130 12:56:32.124259 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9b1e42-b8df-4280-a89e-77312805efe6-tigera-ca-bundle\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124528 kubelet[2707]: I0130 12:56:32.124278 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/0f9b1e42-b8df-4280-a89e-77312805efe6-cni-bin-dir\") pod \"calico-node-9mfhh\" (UID: \"0f9b1e42-b8df-4280-a89e-77312805efe6\") " pod="calico-system/calico-node-9mfhh" Jan 30 12:56:32.124528 kubelet[2707]: I0130 12:56:32.124295 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fbaedd5-97f4-4401-a593-703c932af286-tigera-ca-bundle\") pod \"calico-typha-5fffdd9b97-bhdgk\" (UID: \"7fbaedd5-97f4-4401-a593-703c932af286\") " pod="calico-system/calico-typha-5fffdd9b97-bhdgk" Jan 30 12:56:32.223809 kubelet[2707]: I0130 12:56:32.223266 2707 topology_manager.go:215] "Topology Admit Handler" podUID="41953e49-b598-4079-bbdf-ef9c599ebe81" podNamespace="calico-system" podName="csi-node-driver-xwjj9" Jan 30 12:56:32.223809 kubelet[2707]: E0130 12:56:32.223575 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwjj9" podUID="41953e49-b598-4079-bbdf-ef9c599ebe81" Jan 30 12:56:32.238331 kubelet[2707]: E0130 12:56:32.238167 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.238331 kubelet[2707]: W0130 12:56:32.238193 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.238331 kubelet[2707]: E0130 12:56:32.238216 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.241635 kubelet[2707]: E0130 12:56:32.240297 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.241635 kubelet[2707]: W0130 12:56:32.240313 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.241635 kubelet[2707]: E0130 12:56:32.240326 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.249148 kubelet[2707]: E0130 12:56:32.249121 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.250360 kubelet[2707]: W0130 12:56:32.250328 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.250470 kubelet[2707]: E0130 12:56:32.250457 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.262480 kubelet[2707]: E0130 12:56:32.262450 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.262680 kubelet[2707]: W0130 12:56:32.262619 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.262680 kubelet[2707]: E0130 12:56:32.262646 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.322319 kubelet[2707]: E0130 12:56:32.322215 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.322319 kubelet[2707]: W0130 12:56:32.322241 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.322319 kubelet[2707]: E0130 12:56:32.322262 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.322505 kubelet[2707]: E0130 12:56:32.322468 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.322505 kubelet[2707]: W0130 12:56:32.322476 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.322505 kubelet[2707]: E0130 12:56:32.322484 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.322661 kubelet[2707]: E0130 12:56:32.322651 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.322661 kubelet[2707]: W0130 12:56:32.322661 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.322977 kubelet[2707]: E0130 12:56:32.322670 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.322977 kubelet[2707]: E0130 12:56:32.322844 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.322977 kubelet[2707]: W0130 12:56:32.322853 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.322977 kubelet[2707]: E0130 12:56:32.322862 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.323553 kubelet[2707]: E0130 12:56:32.323038 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.323553 kubelet[2707]: W0130 12:56:32.323048 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.323553 kubelet[2707]: E0130 12:56:32.323056 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.323553 kubelet[2707]: E0130 12:56:32.323231 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.323553 kubelet[2707]: W0130 12:56:32.323241 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.323553 kubelet[2707]: E0130 12:56:32.323250 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.323553 kubelet[2707]: E0130 12:56:32.323428 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.323553 kubelet[2707]: W0130 12:56:32.323436 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.323553 kubelet[2707]: E0130 12:56:32.323443 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.323871 kubelet[2707]: E0130 12:56:32.323608 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.323871 kubelet[2707]: W0130 12:56:32.323615 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.323871 kubelet[2707]: E0130 12:56:32.323622 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.323871 kubelet[2707]: E0130 12:56:32.323770 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.323871 kubelet[2707]: W0130 12:56:32.323783 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.323871 kubelet[2707]: E0130 12:56:32.323791 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.324053 kubelet[2707]: E0130 12:56:32.323921 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.324053 kubelet[2707]: W0130 12:56:32.323928 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.324053 kubelet[2707]: E0130 12:56:32.323935 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.324157 kubelet[2707]: E0130 12:56:32.324080 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.324157 kubelet[2707]: W0130 12:56:32.324087 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.324157 kubelet[2707]: E0130 12:56:32.324094 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.324229 kubelet[2707]: E0130 12:56:32.324216 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.324229 kubelet[2707]: W0130 12:56:32.324226 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.324294 kubelet[2707]: E0130 12:56:32.324234 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.324367 kubelet[2707]: E0130 12:56:32.324358 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.324367 kubelet[2707]: W0130 12:56:32.324367 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.324421 kubelet[2707]: E0130 12:56:32.324374 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.324502 kubelet[2707]: E0130 12:56:32.324493 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.324536 kubelet[2707]: W0130 12:56:32.324501 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.324536 kubelet[2707]: E0130 12:56:32.324511 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.324638 kubelet[2707]: E0130 12:56:32.324628 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.324638 kubelet[2707]: W0130 12:56:32.324638 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.324696 kubelet[2707]: E0130 12:56:32.324645 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.324774 kubelet[2707]: E0130 12:56:32.324764 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.324774 kubelet[2707]: W0130 12:56:32.324774 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.324829 kubelet[2707]: E0130 12:56:32.324781 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.324926 kubelet[2707]: E0130 12:56:32.324916 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.324926 kubelet[2707]: W0130 12:56:32.324926 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.324987 kubelet[2707]: E0130 12:56:32.324934 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.325078 kubelet[2707]: E0130 12:56:32.325068 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.325078 kubelet[2707]: W0130 12:56:32.325077 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.325153 kubelet[2707]: E0130 12:56:32.325085 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.325217 kubelet[2707]: E0130 12:56:32.325208 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.325217 kubelet[2707]: W0130 12:56:32.325217 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.325271 kubelet[2707]: E0130 12:56:32.325224 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.325349 kubelet[2707]: E0130 12:56:32.325340 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.325349 kubelet[2707]: W0130 12:56:32.325349 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.325400 kubelet[2707]: E0130 12:56:32.325356 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.326713 kubelet[2707]: E0130 12:56:32.326663 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.326713 kubelet[2707]: W0130 12:56:32.326680 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.326713 kubelet[2707]: E0130 12:56:32.326692 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.326969 kubelet[2707]: I0130 12:56:32.326860 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hstn\" (UniqueName: \"kubernetes.io/projected/41953e49-b598-4079-bbdf-ef9c599ebe81-kube-api-access-6hstn\") pod \"csi-node-driver-xwjj9\" (UID: \"41953e49-b598-4079-bbdf-ef9c599ebe81\") " pod="calico-system/csi-node-driver-xwjj9" Jan 30 12:56:32.327148 kubelet[2707]: E0130 12:56:32.327132 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.327214 kubelet[2707]: W0130 12:56:32.327198 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.327321 kubelet[2707]: E0130 12:56:32.327264 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.327321 kubelet[2707]: I0130 12:56:32.327284 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41953e49-b598-4079-bbdf-ef9c599ebe81-kubelet-dir\") pod \"csi-node-driver-xwjj9\" (UID: \"41953e49-b598-4079-bbdf-ef9c599ebe81\") " pod="calico-system/csi-node-driver-xwjj9" Jan 30 12:56:32.327602 kubelet[2707]: E0130 12:56:32.327584 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.327758 kubelet[2707]: W0130 12:56:32.327655 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.327758 kubelet[2707]: E0130 12:56:32.327679 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.327758 kubelet[2707]: I0130 12:56:32.327698 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/41953e49-b598-4079-bbdf-ef9c599ebe81-varrun\") pod \"csi-node-driver-xwjj9\" (UID: \"41953e49-b598-4079-bbdf-ef9c599ebe81\") " pod="calico-system/csi-node-driver-xwjj9" Jan 30 12:56:32.328020 kubelet[2707]: E0130 12:56:32.328006 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.328020 kubelet[2707]: W0130 12:56:32.328048 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.328229 kubelet[2707]: E0130 12:56:32.328154 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.328229 kubelet[2707]: I0130 12:56:32.328178 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41953e49-b598-4079-bbdf-ef9c599ebe81-registration-dir\") pod \"csi-node-driver-xwjj9\" (UID: \"41953e49-b598-4079-bbdf-ef9c599ebe81\") " pod="calico-system/csi-node-driver-xwjj9" Jan 30 12:56:32.328549 kubelet[2707]: E0130 12:56:32.328450 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.328549 kubelet[2707]: W0130 12:56:32.328464 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.328549 kubelet[2707]: E0130 12:56:32.328529 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.328644 kubelet[2707]: I0130 12:56:32.328562 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41953e49-b598-4079-bbdf-ef9c599ebe81-socket-dir\") pod \"csi-node-driver-xwjj9\" (UID: \"41953e49-b598-4079-bbdf-ef9c599ebe81\") " pod="calico-system/csi-node-driver-xwjj9" Jan 30 12:56:32.328802 kubelet[2707]: E0130 12:56:32.328714 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.328802 kubelet[2707]: W0130 12:56:32.328725 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.328802 kubelet[2707]: E0130 12:56:32.328780 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.329125 kubelet[2707]: E0130 12:56:32.329015 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.329125 kubelet[2707]: W0130 12:56:32.329048 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.329125 kubelet[2707]: E0130 12:56:32.329108 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.329435 kubelet[2707]: E0130 12:56:32.329348 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.329435 kubelet[2707]: W0130 12:56:32.329360 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.329435 kubelet[2707]: E0130 12:56:32.329424 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.329745 kubelet[2707]: E0130 12:56:32.329657 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.329745 kubelet[2707]: W0130 12:56:32.329669 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.329745 kubelet[2707]: E0130 12:56:32.329726 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.330050 kubelet[2707]: E0130 12:56:32.329944 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.330050 kubelet[2707]: W0130 12:56:32.329956 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.330050 kubelet[2707]: E0130 12:56:32.330006 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.330286 kubelet[2707]: E0130 12:56:32.330188 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.330286 kubelet[2707]: W0130 12:56:32.330199 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.330286 kubelet[2707]: E0130 12:56:32.330209 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.330554 kubelet[2707]: E0130 12:56:32.330515 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.330554 kubelet[2707]: W0130 12:56:32.330527 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.330554 kubelet[2707]: E0130 12:56:32.330536 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.330831 kubelet[2707]: E0130 12:56:32.330795 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.330831 kubelet[2707]: W0130 12:56:32.330809 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.330831 kubelet[2707]: E0130 12:56:32.330819 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.331223 kubelet[2707]: E0130 12:56:32.331125 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.331223 kubelet[2707]: W0130 12:56:32.331139 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.331223 kubelet[2707]: E0130 12:56:32.331148 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.331489 kubelet[2707]: E0130 12:56:32.331438 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.331489 kubelet[2707]: W0130 12:56:32.331451 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.331489 kubelet[2707]: E0130 12:56:32.331461 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.335503 kubelet[2707]: E0130 12:56:32.335479 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:32.336122 containerd[1534]: time="2025-01-30T12:56:32.336005370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fffdd9b97-bhdgk,Uid:7fbaedd5-97f4-4401-a593-703c932af286,Namespace:calico-system,Attempt:0,}" Jan 30 12:56:32.365656 containerd[1534]: time="2025-01-30T12:56:32.365298058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:32.365656 containerd[1534]: time="2025-01-30T12:56:32.365394180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:32.365656 containerd[1534]: time="2025-01-30T12:56:32.365418541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:32.365656 containerd[1534]: time="2025-01-30T12:56:32.365513703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:32.391307 kubelet[2707]: E0130 12:56:32.391272 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:32.392316 containerd[1534]: time="2025-01-30T12:56:32.391941368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mfhh,Uid:0f9b1e42-b8df-4280-a89e-77312805efe6,Namespace:calico-system,Attempt:0,}" Jan 30 12:56:32.406393 containerd[1534]: time="2025-01-30T12:56:32.406352527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fffdd9b97-bhdgk,Uid:7fbaedd5-97f4-4401-a593-703c932af286,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2eee0c2d83ef025b083a3216d741a93b6e3cb786357cd3ac7a46a9ec1087de3\"" Jan 30 12:56:32.407719 kubelet[2707]: E0130 12:56:32.407693 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:32.409929 containerd[1534]: time="2025-01-30T12:56:32.409891085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 12:56:32.435022 kubelet[2707]: E0130 12:56:32.433574 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.435022 kubelet[2707]: W0130 12:56:32.433601 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.435022 kubelet[2707]: E0130 12:56:32.433620 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.435022 kubelet[2707]: E0130 12:56:32.433982 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.435022 kubelet[2707]: W0130 12:56:32.433996 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.435022 kubelet[2707]: E0130 12:56:32.434008 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.435022 kubelet[2707]: E0130 12:56:32.434259 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.435022 kubelet[2707]: W0130 12:56:32.434269 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.435022 kubelet[2707]: E0130 12:56:32.434279 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.436887 kubelet[2707]: E0130 12:56:32.435975 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.436887 kubelet[2707]: W0130 12:56:32.435997 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.436887 kubelet[2707]: E0130 12:56:32.436011 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.437150 kubelet[2707]: E0130 12:56:32.437037 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.437150 kubelet[2707]: W0130 12:56:32.437053 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.437150 kubelet[2707]: E0130 12:56:32.437076 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.437232 kubelet[2707]: E0130 12:56:32.437224 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.437232 kubelet[2707]: W0130 12:56:32.437231 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.437274 kubelet[2707]: E0130 12:56:32.437240 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.438785 kubelet[2707]: E0130 12:56:32.437421 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.438785 kubelet[2707]: W0130 12:56:32.437434 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.438785 kubelet[2707]: E0130 12:56:32.437461 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.439303 kubelet[2707]: E0130 12:56:32.438801 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.439303 kubelet[2707]: W0130 12:56:32.438816 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.439303 kubelet[2707]: E0130 12:56:32.439136 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.439303 kubelet[2707]: W0130 12:56:32.439149 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.439303 kubelet[2707]: E0130 12:56:32.439202 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.439303 kubelet[2707]: E0130 12:56:32.439233 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.439679 kubelet[2707]: E0130 12:56:32.439337 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.439679 kubelet[2707]: W0130 12:56:32.439346 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.439679 kubelet[2707]: E0130 12:56:32.439502 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.440132 kubelet[2707]: E0130 12:56:32.440113 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.440132 kubelet[2707]: W0130 12:56:32.440128 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.440434 kubelet[2707]: E0130 12:56:32.440218 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.444863 kubelet[2707]: E0130 12:56:32.444818 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.444863 kubelet[2707]: W0130 12:56:32.444846 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.445149 kubelet[2707]: E0130 12:56:32.444980 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.446238 kubelet[2707]: E0130 12:56:32.446133 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.446238 kubelet[2707]: W0130 12:56:32.446153 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.446238 kubelet[2707]: E0130 12:56:32.446201 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.446980 kubelet[2707]: E0130 12:56:32.446834 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.446980 kubelet[2707]: W0130 12:56:32.446977 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.447113 kubelet[2707]: E0130 12:56:32.447012 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.448178 kubelet[2707]: E0130 12:56:32.447214 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.448178 kubelet[2707]: W0130 12:56:32.447226 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.448178 kubelet[2707]: E0130 12:56:32.447255 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.448178 kubelet[2707]: E0130 12:56:32.447398 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.448178 kubelet[2707]: W0130 12:56:32.447406 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.448178 kubelet[2707]: E0130 12:56:32.447426 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.448178 kubelet[2707]: E0130 12:56:32.447545 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.448178 kubelet[2707]: W0130 12:56:32.447552 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.448178 kubelet[2707]: E0130 12:56:32.447578 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.448178 kubelet[2707]: E0130 12:56:32.448048 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.448430 kubelet[2707]: W0130 12:56:32.448065 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.448430 kubelet[2707]: E0130 12:56:32.448089 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.448599 kubelet[2707]: E0130 12:56:32.448581 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.448599 kubelet[2707]: W0130 12:56:32.448596 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.448670 kubelet[2707]: E0130 12:56:32.448648 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.448806 kubelet[2707]: E0130 12:56:32.448779 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.448806 kubelet[2707]: W0130 12:56:32.448792 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.448911 kubelet[2707]: E0130 12:56:32.448814 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.448978 kubelet[2707]: E0130 12:56:32.448955 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.448978 kubelet[2707]: W0130 12:56:32.448966 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.449131 kubelet[2707]: E0130 12:56:32.449111 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.449881 kubelet[2707]: E0130 12:56:32.449472 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.450232 kubelet[2707]: W0130 12:56:32.450210 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.451408 kubelet[2707]: E0130 12:56:32.451299 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:32.451971 kubelet[2707]: E0130 12:56:32.451952 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.452541 kubelet[2707]: W0130 12:56:32.452425 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.452541 kubelet[2707]: E0130 12:56:32.452467 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.455796 kubelet[2707]: E0130 12:56:32.455354 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.455796 kubelet[2707]: W0130 12:56:32.455419 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.455796 kubelet[2707]: E0130 12:56:32.455438 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.457622 kubelet[2707]: E0130 12:56:32.457446 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.457622 kubelet[2707]: W0130 12:56:32.457467 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.457622 kubelet[2707]: E0130 12:56:32.457487 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.476621 kubelet[2707]: E0130 12:56:32.476593 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:32.479944 kubelet[2707]: W0130 12:56:32.477138 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:32.479944 kubelet[2707]: E0130 12:56:32.477177 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:32.483228 containerd[1534]: time="2025-01-30T12:56:32.483124306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:32.483228 containerd[1534]: time="2025-01-30T12:56:32.483194867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:32.483382 containerd[1534]: time="2025-01-30T12:56:32.483209787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:32.483382 containerd[1534]: time="2025-01-30T12:56:32.483305830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:32.529401 containerd[1534]: time="2025-01-30T12:56:32.529362569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mfhh,Uid:0f9b1e42-b8df-4280-a89e-77312805efe6,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b0433c358f504d03ac5aefb86cf19cc544eedca683010f7201405560af22236\"" Jan 30 12:56:32.530245 kubelet[2707]: E0130 12:56:32.530221 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:33.371942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628901232.mount: Deactivated successfully. Jan 30 12:56:33.662703 containerd[1534]: time="2025-01-30T12:56:33.661860840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:33.662703 containerd[1534]: time="2025-01-30T12:56:33.662590216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 30 12:56:33.663232 containerd[1534]: time="2025-01-30T12:56:33.663193548Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:33.677479 containerd[1534]: time="2025-01-30T12:56:33.677431090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:33.678081 containerd[1534]: time="2025-01-30T12:56:33.678045423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.268087696s" Jan 30 12:56:33.678203 containerd[1534]: time="2025-01-30T12:56:33.678145345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 30 12:56:33.679484 containerd[1534]: time="2025-01-30T12:56:33.679335970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 12:56:33.702065 containerd[1534]: time="2025-01-30T12:56:33.701910128Z" level=info msg="CreateContainer within sandbox \"f2eee0c2d83ef025b083a3216d741a93b6e3cb786357cd3ac7a46a9ec1087de3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 12:56:33.711344 containerd[1534]: time="2025-01-30T12:56:33.711206125Z" level=info msg="CreateContainer within sandbox \"f2eee0c2d83ef025b083a3216d741a93b6e3cb786357cd3ac7a46a9ec1087de3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ce4c88e2f15e61e1d632be79d71af6c89734fb46438207f87b4054d2bee8c6e7\"" Jan 30 12:56:33.712952 containerd[1534]: time="2025-01-30T12:56:33.711902860Z" level=info msg="StartContainer for \"ce4c88e2f15e61e1d632be79d71af6c89734fb46438207f87b4054d2bee8c6e7\"" Jan 30 12:56:33.774992 containerd[1534]: time="2025-01-30T12:56:33.774949875Z" level=info msg="StartContainer for \"ce4c88e2f15e61e1d632be79d71af6c89734fb46438207f87b4054d2bee8c6e7\" returns successfully" Jan 30 12:56:34.192323 
kubelet[2707]: E0130 12:56:34.192095 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwjj9" podUID="41953e49-b598-4079-bbdf-ef9c599ebe81" Jan 30 12:56:34.291176 kubelet[2707]: E0130 12:56:34.291133 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:34.301499 kubelet[2707]: I0130 12:56:34.301446 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fffdd9b97-bhdgk" podStartSLOduration=1.031840748 podStartE2EDuration="2.301430757s" podCreationTimestamp="2025-01-30 12:56:32 +0000 UTC" firstStartedPulling="2025-01-30 12:56:32.409583318 +0000 UTC m=+23.323493479" lastFinishedPulling="2025-01-30 12:56:33.679173327 +0000 UTC m=+24.593083488" observedRunningTime="2025-01-30 12:56:34.300748903 +0000 UTC m=+25.214659064" watchObservedRunningTime="2025-01-30 12:56:34.301430757 +0000 UTC m=+25.215340918" Jan 30 12:56:34.339437 kubelet[2707]: E0130 12:56:34.339396 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.339437 kubelet[2707]: W0130 12:56:34.339420 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.339595 kubelet[2707]: E0130 12:56:34.339467 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.340018 kubelet[2707]: E0130 12:56:34.339731 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.340018 kubelet[2707]: W0130 12:56:34.339746 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.340018 kubelet[2707]: E0130 12:56:34.339775 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.340018 kubelet[2707]: E0130 12:56:34.339997 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.340018 kubelet[2707]: W0130 12:56:34.340007 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.340018 kubelet[2707]: E0130 12:56:34.340017 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:34.340660 kubelet[2707]: E0130 12:56:34.340250 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.340660 kubelet[2707]: W0130 12:56:34.340264 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.340660 kubelet[2707]: E0130 12:56:34.340273 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.340660 kubelet[2707]: E0130 12:56:34.340442 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.340660 kubelet[2707]: W0130 12:56:34.340451 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.340660 kubelet[2707]: E0130 12:56:34.340464 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.340660 kubelet[2707]: E0130 12:56:34.340666 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.340959 kubelet[2707]: W0130 12:56:34.340675 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.340959 kubelet[2707]: E0130 12:56:34.340684 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.340959 kubelet[2707]: E0130 12:56:34.340873 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.340959 kubelet[2707]: W0130 12:56:34.340882 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.340959 kubelet[2707]: E0130 12:56:34.340890 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.341411 kubelet[2707]: E0130 12:56:34.341070 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.341411 kubelet[2707]: W0130 12:56:34.341078 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.341411 kubelet[2707]: E0130 12:56:34.341086 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:34.341411 kubelet[2707]: E0130 12:56:34.341249 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.341411 kubelet[2707]: W0130 12:56:34.341260 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.341411 kubelet[2707]: E0130 12:56:34.341275 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.341411 kubelet[2707]: E0130 12:56:34.341402 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.341654 kubelet[2707]: W0130 12:56:34.341421 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.341654 kubelet[2707]: E0130 12:56:34.341432 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.341654 kubelet[2707]: E0130 12:56:34.341605 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.341654 kubelet[2707]: W0130 12:56:34.341613 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.341654 kubelet[2707]: E0130 12:56:34.341620 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.341850 kubelet[2707]: E0130 12:56:34.341837 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.341850 kubelet[2707]: W0130 12:56:34.341849 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.341906 kubelet[2707]: E0130 12:56:34.341857 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.342060 kubelet[2707]: E0130 12:56:34.342046 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.342095 kubelet[2707]: W0130 12:56:34.342059 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.342095 kubelet[2707]: E0130 12:56:34.342070 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:34.342244 kubelet[2707]: E0130 12:56:34.342231 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.342244 kubelet[2707]: W0130 12:56:34.342242 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.342302 kubelet[2707]: E0130 12:56:34.342250 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.342412 kubelet[2707]: E0130 12:56:34.342389 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.342412 kubelet[2707]: W0130 12:56:34.342411 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.342486 kubelet[2707]: E0130 12:56:34.342421 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.356670 kubelet[2707]: E0130 12:56:34.356635 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.356670 kubelet[2707]: W0130 12:56:34.356659 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.356812 kubelet[2707]: E0130 12:56:34.356678 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.356890 kubelet[2707]: E0130 12:56:34.356877 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.356928 kubelet[2707]: W0130 12:56:34.356891 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.356928 kubelet[2707]: E0130 12:56:34.356907 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.357135 kubelet[2707]: E0130 12:56:34.357123 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.357169 kubelet[2707]: W0130 12:56:34.357134 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.357169 kubelet[2707]: E0130 12:56:34.357152 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:34.361007 kubelet[2707]: E0130 12:56:34.360979 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.361007 kubelet[2707]: W0130 12:56:34.361000 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.361146 kubelet[2707]: E0130 12:56:34.361035 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.361280 kubelet[2707]: E0130 12:56:34.361257 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.361280 kubelet[2707]: W0130 12:56:34.361273 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.361353 kubelet[2707]: E0130 12:56:34.361323 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.361455 kubelet[2707]: E0130 12:56:34.361443 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.361499 kubelet[2707]: W0130 12:56:34.361455 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.361499 kubelet[2707]: E0130 12:56:34.361488 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.361616 kubelet[2707]: E0130 12:56:34.361605 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.361646 kubelet[2707]: W0130 12:56:34.361616 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.361646 kubelet[2707]: E0130 12:56:34.361635 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.361789 kubelet[2707]: E0130 12:56:34.361758 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.361789 kubelet[2707]: W0130 12:56:34.361771 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.361789 kubelet[2707]: E0130 12:56:34.361786 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:34.361990 kubelet[2707]: E0130 12:56:34.361978 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.362021 kubelet[2707]: W0130 12:56:34.361990 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.362021 kubelet[2707]: E0130 12:56:34.362012 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.362269 kubelet[2707]: E0130 12:56:34.362248 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.362315 kubelet[2707]: W0130 12:56:34.362269 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.362315 kubelet[2707]: E0130 12:56:34.362289 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.362498 kubelet[2707]: E0130 12:56:34.362487 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.362498 kubelet[2707]: W0130 12:56:34.362498 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.362556 kubelet[2707]: E0130 12:56:34.362512 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.362699 kubelet[2707]: E0130 12:56:34.362687 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.362699 kubelet[2707]: W0130 12:56:34.362698 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.362769 kubelet[2707]: E0130 12:56:34.362712 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.363204 kubelet[2707]: E0130 12:56:34.363103 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.363204 kubelet[2707]: W0130 12:56:34.363121 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.363204 kubelet[2707]: E0130 12:56:34.363141 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:34.363364 kubelet[2707]: E0130 12:56:34.363352 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.363426 kubelet[2707]: W0130 12:56:34.363414 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.363545 kubelet[2707]: E0130 12:56:34.363511 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.363736 kubelet[2707]: E0130 12:56:34.363655 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.363736 kubelet[2707]: W0130 12:56:34.363667 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.363736 kubelet[2707]: E0130 12:56:34.363689 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.363886 kubelet[2707]: E0130 12:56:34.363874 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.363969 kubelet[2707]: W0130 12:56:34.363956 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.364054 kubelet[2707]: E0130 12:56:34.364042 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.364273 kubelet[2707]: E0130 12:56:34.364257 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.364273 kubelet[2707]: W0130 12:56:34.364271 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.364332 kubelet[2707]: E0130 12:56:34.364282 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:56:34.364617 kubelet[2707]: E0130 12:56:34.364592 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:56:34.364617 kubelet[2707]: W0130 12:56:34.364605 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:56:34.364617 kubelet[2707]: E0130 12:56:34.364614 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:56:34.567922 containerd[1534]: time="2025-01-30T12:56:34.567767280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:34.569602 containerd[1534]: time="2025-01-30T12:56:34.568540895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 30 12:56:34.571500 containerd[1534]: time="2025-01-30T12:56:34.571469915Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:34.581490 containerd[1534]: time="2025-01-30T12:56:34.581458157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:34.582070 containerd[1534]: time="2025-01-30T12:56:34.582040129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 902.654158ms" Jan 30 12:56:34.582178 containerd[1534]: time="2025-01-30T12:56:34.582072650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 12:56:34.584766 containerd[1534]: time="2025-01-30T12:56:34.584677943Z" level=info msg="CreateContainer within sandbox \"5b0433c358f504d03ac5aefb86cf19cc544eedca683010f7201405560af22236\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 12:56:34.600083 containerd[1534]: time="2025-01-30T12:56:34.600022014Z" level=info msg="CreateContainer within sandbox \"5b0433c358f504d03ac5aefb86cf19cc544eedca683010f7201405560af22236\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fe7b2ef33c482c1b88c56670f4891e8849fe9b3ab0ce1ae9e8eded6928211f4d\"" Jan 30 12:56:34.601371 containerd[1534]: time="2025-01-30T12:56:34.600524064Z" level=info msg="StartContainer for \"fe7b2ef33c482c1b88c56670f4891e8849fe9b3ab0ce1ae9e8eded6928211f4d\"" Jan 30 12:56:34.717964 containerd[1534]: time="2025-01-30T12:56:34.717823604Z" level=info msg="StartContainer for \"fe7b2ef33c482c1b88c56670f4891e8849fe9b3ab0ce1ae9e8eded6928211f4d\" returns successfully" Jan 30 12:56:34.748003 containerd[1534]: time="2025-01-30T12:56:34.747932574Z" level=info msg="shim disconnected" id=fe7b2ef33c482c1b88c56670f4891e8849fe9b3ab0ce1ae9e8eded6928211f4d namespace=k8s.io Jan 30 12:56:34.748523 containerd[1534]: time="2025-01-30T12:56:34.748321222Z" level=warning msg="cleaning up after shim disconnected" id=fe7b2ef33c482c1b88c56670f4891e8849fe9b3ab0ce1ae9e8eded6928211f4d namespace=k8s.io Jan 30 12:56:34.748523 containerd[1534]: time="2025-01-30T12:56:34.748352303Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:35.237117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe7b2ef33c482c1b88c56670f4891e8849fe9b3ab0ce1ae9e8eded6928211f4d-rootfs.mount: Deactivated successfully. 
Jan 30 12:56:35.301612 kubelet[2707]: I0130 12:56:35.300053 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:56:35.301612 kubelet[2707]: E0130 12:56:35.300667 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:35.301612 kubelet[2707]: E0130 12:56:35.301308 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:35.303193 containerd[1534]: time="2025-01-30T12:56:35.303161265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 12:56:36.191854 kubelet[2707]: E0130 12:56:36.191792 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwjj9" podUID="41953e49-b598-4079-bbdf-ef9c599ebe81" Jan 30 12:56:37.388821 containerd[1534]: time="2025-01-30T12:56:37.388619334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:37.394035 containerd[1534]: time="2025-01-30T12:56:37.393207696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 12:56:37.395762 containerd[1534]: time="2025-01-30T12:56:37.395718061Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:37.398352 containerd[1534]: time="2025-01-30T12:56:37.398305468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:37.400219 containerd[1534]: time="2025-01-30T12:56:37.400168301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.096967475s" Jan 30 12:56:37.400219 containerd[1534]: time="2025-01-30T12:56:37.400212222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 12:56:37.406170 containerd[1534]: time="2025-01-30T12:56:37.405002428Z" level=info msg="CreateContainer within sandbox \"5b0433c358f504d03ac5aefb86cf19cc544eedca683010f7201405560af22236\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 12:56:37.418785 containerd[1534]: time="2025-01-30T12:56:37.418718674Z" level=info msg="CreateContainer within sandbox \"5b0433c358f504d03ac5aefb86cf19cc544eedca683010f7201405560af22236\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"097f7c989398b83b5435f3e0df500773f5ce672caf99fee9d884b852db4359a0\"" Jan 30 12:56:37.419333 containerd[1534]: time="2025-01-30T12:56:37.419302924Z" level=info msg="StartContainer for \"097f7c989398b83b5435f3e0df500773f5ce672caf99fee9d884b852db4359a0\""
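
The dns.go "Nameserver limits exceeded" entries above come from the kubelet's resolv.conf handling: like the glibc resolver, it uses at most three nameserver entries, so anything beyond the applied line 1.1.1.1 1.0.0.1 8.8.8.8 is dropped. A small diagnostic sketch of that check (hypothetical helper, not kubelet code; assumes the node's /etc/resolv.conf):

    #!/usr/bin/env python3
    # Hypothetical diagnostic mirroring the kubelet's resolv.conf check: both the
    # kubelet and the glibc resolver use at most three nameserver entries, so any
    # extras are reported here as the ones the kubelet omits.
    MAX_NAMESERVERS = 3

    def nameservers(path="/etc/resolv.conf"):
        found = []
        with open(path) as fh:
            for line in fh:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    found.append(parts[1])
        return found

    if __name__ == "__main__":
        ns = nameservers()
        print("applied nameserver line:", " ".join(ns[:MAX_NAMESERVERS]))
        if len(ns) > MAX_NAMESERVERS:
            print("omitted (over the limit):", " ".join(ns[MAX_NAMESERVERS:]))
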
Jan 30 12:56:37.479876 containerd[1534]: time="2025-01-30T12:56:37.479826210Z" level=info msg="StartContainer for \"097f7c989398b83b5435f3e0df500773f5ce672caf99fee9d884b852db4359a0\" returns successfully" Jan 30 12:56:38.183390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-097f7c989398b83b5435f3e0df500773f5ce672caf99fee9d884b852db4359a0-rootfs.mount: Deactivated successfully. Jan 30 12:56:38.191911 kubelet[2707]: E0130 12:56:38.191855 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwjj9" podUID="41953e49-b598-4079-bbdf-ef9c599ebe81" Jan 30 12:56:38.200451 containerd[1534]: time="2025-01-30T12:56:38.200378513Z" level=info msg="shim disconnected" id=097f7c989398b83b5435f3e0df500773f5ce672caf99fee9d884b852db4359a0 namespace=k8s.io Jan 30 12:56:38.200451 containerd[1534]: time="2025-01-30T12:56:38.200440794Z" level=warning msg="cleaning up after shim disconnected" id=097f7c989398b83b5435f3e0df500773f5ce672caf99fee9d884b852db4359a0 namespace=k8s.io Jan 30 12:56:38.200451 containerd[1534]: time="2025-01-30T12:56:38.200450634Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:38.217537 kubelet[2707]: I0130 12:56:38.217508 2707 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 12:56:38.249674 kubelet[2707]: I0130 12:56:38.249620 2707 topology_manager.go:215] "Topology Admit Handler" podUID="215e9ede-a56b-419e-b3ac-485389adba02" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mlbrv" Jan 30 12:56:38.250773 kubelet[2707]: I0130 12:56:38.250725 2707 topology_manager.go:215] "Topology Admit Handler" podUID="c56171da-3422-46f6-bd03-661a74a240ac" podNamespace="calico-apiserver" podName="calico-apiserver-5b884f9b9b-jnhh7" Jan 30 12:56:38.256901 kubelet[2707]: I0130 12:56:38.255984 2707 topology_manager.go:215] "Topology Admit Handler" podUID="c8a23043-6bab-4618-a927-3b2c52ff66a4" podNamespace="calico-apiserver" podName="calico-apiserver-5b884f9b9b-2wgjx" Jan 30 12:56:38.258923 kubelet[2707]: I0130 12:56:38.258880 2707 topology_manager.go:215] "Topology Admit Handler" podUID="f5238e11-c884-4750-8685-6bc2db2bcd69" podNamespace="calico-system" podName="calico-kube-controllers-69d5ff6878-vn947" Jan 30 12:56:38.259768 kubelet[2707]: I0130 12:56:38.259538 2707 topology_manager.go:215] "Topology Admit Handler" podUID="cf7d4427-4304-4d39-a328-49dbc3c64e9c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jqtfs" Jan 30 12:56:38.311389 kubelet[2707]: E0130 12:56:38.311206 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:38.312640 containerd[1534]: time="2025-01-30T12:56:38.312579008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 12:56:38.388668 kubelet[2707]: I0130 12:56:38.388599 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh4ks\" (UniqueName: \"kubernetes.io/projected/f5238e11-c884-4750-8685-6bc2db2bcd69-kube-api-access-bh4ks\") pod \"calico-kube-controllers-69d5ff6878-vn947\" (UID: \"f5238e11-c884-4750-8685-6bc2db2bcd69\") " pod="calico-system/calico-kube-controllers-69d5ff6878-vn947" Jan 30 12:56:38.388668 kubelet[2707]: I0130 12:56:38.388652 2707 reconciler_common.go:247]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsjb5\" (UniqueName: \"kubernetes.io/projected/cf7d4427-4304-4d39-a328-49dbc3c64e9c-kube-api-access-nsjb5\") pod \"coredns-7db6d8ff4d-jqtfs\" (UID: \"cf7d4427-4304-4d39-a328-49dbc3c64e9c\") " pod="kube-system/coredns-7db6d8ff4d-jqtfs" Jan 30 12:56:38.388668 kubelet[2707]: I0130 12:56:38.388676 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptjv6\" (UniqueName: \"kubernetes.io/projected/c56171da-3422-46f6-bd03-661a74a240ac-kube-api-access-ptjv6\") pod \"calico-apiserver-5b884f9b9b-jnhh7\" (UID: \"c56171da-3422-46f6-bd03-661a74a240ac\") " pod="calico-apiserver/calico-apiserver-5b884f9b9b-jnhh7" Jan 30 12:56:38.395584 kubelet[2707]: I0130 12:56:38.388693 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf7d4427-4304-4d39-a328-49dbc3c64e9c-config-volume\") pod \"coredns-7db6d8ff4d-jqtfs\" (UID: \"cf7d4427-4304-4d39-a328-49dbc3c64e9c\") " pod="kube-system/coredns-7db6d8ff4d-jqtfs" Jan 30 12:56:38.395720 kubelet[2707]: I0130 12:56:38.395609 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c56171da-3422-46f6-bd03-661a74a240ac-calico-apiserver-certs\") pod \"calico-apiserver-5b884f9b9b-jnhh7\" (UID: \"c56171da-3422-46f6-bd03-661a74a240ac\") " pod="calico-apiserver/calico-apiserver-5b884f9b9b-jnhh7" Jan 30 12:56:38.395720 kubelet[2707]: I0130 12:56:38.395634 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c8a23043-6bab-4618-a927-3b2c52ff66a4-calico-apiserver-certs\") pod \"calico-apiserver-5b884f9b9b-2wgjx\" (UID: \"c8a23043-6bab-4618-a927-3b2c52ff66a4\") " pod="calico-apiserver/calico-apiserver-5b884f9b9b-2wgjx" Jan 30 12:56:38.395720 kubelet[2707]: I0130 12:56:38.395703 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/215e9ede-a56b-419e-b3ac-485389adba02-config-volume\") pod \"coredns-7db6d8ff4d-mlbrv\" (UID: \"215e9ede-a56b-419e-b3ac-485389adba02\") " pod="kube-system/coredns-7db6d8ff4d-mlbrv" Jan 30 12:56:38.395979 kubelet[2707]: I0130 12:56:38.395723 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5238e11-c884-4750-8685-6bc2db2bcd69-tigera-ca-bundle\") pod \"calico-kube-controllers-69d5ff6878-vn947\" (UID: \"f5238e11-c884-4750-8685-6bc2db2bcd69\") " pod="calico-system/calico-kube-controllers-69d5ff6878-vn947" Jan 30 12:56:38.395979 kubelet[2707]: I0130 12:56:38.395742 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq74t\" (UniqueName: \"kubernetes.io/projected/215e9ede-a56b-419e-b3ac-485389adba02-kube-api-access-kq74t\") pod \"coredns-7db6d8ff4d-mlbrv\" (UID: \"215e9ede-a56b-419e-b3ac-485389adba02\") " pod="kube-system/coredns-7db6d8ff4d-mlbrv" Jan 30 12:56:38.395979 kubelet[2707]: I0130 12:56:38.395769 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nvb\" (UniqueName: 
\"kubernetes.io/projected/c8a23043-6bab-4618-a927-3b2c52ff66a4-kube-api-access-p2nvb\") pod \"calico-apiserver-5b884f9b9b-2wgjx\" (UID: \"c8a23043-6bab-4618-a927-3b2c52ff66a4\") " pod="calico-apiserver/calico-apiserver-5b884f9b9b-2wgjx" Jan 30 12:56:38.556847 containerd[1534]: time="2025-01-30T12:56:38.556795099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b884f9b9b-jnhh7,Uid:c56171da-3422-46f6-bd03-661a74a240ac,Namespace:calico-apiserver,Attempt:0,}" Jan 30 12:56:38.558235 kubelet[2707]: E0130 12:56:38.558196 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:38.558783 containerd[1534]: time="2025-01-30T12:56:38.558743893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlbrv,Uid:215e9ede-a56b-419e-b3ac-485389adba02,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:38.566850 kubelet[2707]: E0130 12:56:38.566821 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:38.567612 containerd[1534]: time="2025-01-30T12:56:38.567196238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b884f9b9b-2wgjx,Uid:c8a23043-6bab-4618-a927-3b2c52ff66a4,Namespace:calico-apiserver,Attempt:0,}" Jan 30 12:56:38.567612 containerd[1534]: time="2025-01-30T12:56:38.567199838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69d5ff6878-vn947,Uid:f5238e11-c884-4750-8685-6bc2db2bcd69,Namespace:calico-system,Attempt:0,}" Jan 30 12:56:38.567871 containerd[1534]: time="2025-01-30T12:56:38.567843810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jqtfs,Uid:cf7d4427-4304-4d39-a328-49dbc3c64e9c,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:39.161312 containerd[1534]: time="2025-01-30T12:56:39.161188137Z" level=error msg="Failed to destroy network for sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.166210 containerd[1534]: time="2025-01-30T12:56:39.166059538Z" level=error msg="Failed to destroy network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.166410 containerd[1534]: time="2025-01-30T12:56:39.166366703Z" level=error msg="encountered an error cleaning up failed sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.166464 containerd[1534]: time="2025-01-30T12:56:39.166438185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b884f9b9b-jnhh7,Uid:c56171da-3422-46f6-bd03-661a74a240ac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.167094 containerd[1534]: time="2025-01-30T12:56:39.166939153Z" level=error msg="encountered an error cleaning up failed sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.167094 containerd[1534]: time="2025-01-30T12:56:39.166994594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69d5ff6878-vn947,Uid:f5238e11-c884-4750-8685-6bc2db2bcd69,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.168809 kubelet[2707]: E0130 12:56:39.168629 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.168809 kubelet[2707]: E0130 12:56:39.168726 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69d5ff6878-vn947" Jan 30 12:56:39.168809 kubelet[2707]: E0130 12:56:39.168754 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69d5ff6878-vn947" Jan 30 12:56:39.169267 kubelet[2707]: E0130 12:56:39.168818 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.169267 kubelet[2707]: E0130 12:56:39.168935 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b884f9b9b-jnhh7" Jan 30 12:56:39.169267 kubelet[2707]: E0130 12:56:39.168960 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b884f9b9b-jnhh7" Jan 30 12:56:39.169365 kubelet[2707]: E0130 12:56:39.169151 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69d5ff6878-vn947_calico-system(f5238e11-c884-4750-8685-6bc2db2bcd69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69d5ff6878-vn947_calico-system(f5238e11-c884-4750-8685-6bc2db2bcd69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69d5ff6878-vn947" podUID="f5238e11-c884-4750-8685-6bc2db2bcd69" Jan 30 12:56:39.169365 kubelet[2707]: E0130 12:56:39.169207 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b884f9b9b-jnhh7_calico-apiserver(c56171da-3422-46f6-bd03-661a74a240ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b884f9b9b-jnhh7_calico-apiserver(c56171da-3422-46f6-bd03-661a74a240ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b884f9b9b-jnhh7" podUID="c56171da-3422-46f6-bd03-661a74a240ac" Jan 30 12:56:39.170480 containerd[1534]: time="2025-01-30T12:56:39.170212367Z" level=error msg="Failed to destroy network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.170667 containerd[1534]: time="2025-01-30T12:56:39.170630214Z" level=error msg="encountered an error cleaning up failed sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.170716 containerd[1534]: time="2025-01-30T12:56:39.170691215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b884f9b9b-2wgjx,Uid:c8a23043-6bab-4618-a927-3b2c52ff66a4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.171305 kubelet[2707]: E0130 12:56:39.171253 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.171383 kubelet[2707]: E0130 12:56:39.171313 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b884f9b9b-2wgjx" Jan 30 12:56:39.171383 kubelet[2707]: E0130 12:56:39.171335 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b884f9b9b-2wgjx" Jan 30 12:56:39.171434 kubelet[2707]: E0130 12:56:39.171373 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b884f9b9b-2wgjx_calico-apiserver(c8a23043-6bab-4618-a927-3b2c52ff66a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b884f9b9b-2wgjx_calico-apiserver(c8a23043-6bab-4618-a927-3b2c52ff66a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b884f9b9b-2wgjx" podUID="c8a23043-6bab-4618-a927-3b2c52ff66a4" Jan 30 12:56:39.173419 containerd[1534]: time="2025-01-30T12:56:39.172887332Z" level=error msg="Failed to destroy network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.173419 containerd[1534]: time="2025-01-30T12:56:39.173255098Z" level=error msg="encountered an error cleaning up failed sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.173419 containerd[1534]: time="2025-01-30T12:56:39.173297978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jqtfs,Uid:cf7d4427-4304-4d39-a328-49dbc3c64e9c,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.173828 containerd[1534]: time="2025-01-30T12:56:39.173770546Z" level=error msg="Failed to destroy network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.173874 kubelet[2707]: E0130 12:56:39.173483 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.173874 kubelet[2707]: E0130 12:56:39.173539 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jqtfs" Jan 30 12:56:39.173874 kubelet[2707]: E0130 12:56:39.173557 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jqtfs" Jan 30 12:56:39.173976 kubelet[2707]: E0130 12:56:39.173600 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jqtfs_kube-system(cf7d4427-4304-4d39-a328-49dbc3c64e9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jqtfs_kube-system(cf7d4427-4304-4d39-a328-49dbc3c64e9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jqtfs" podUID="cf7d4427-4304-4d39-a328-49dbc3c64e9c" Jan 30 12:56:39.174776 kubelet[2707]: E0130 12:56:39.174358 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.174776 kubelet[2707]: E0130 12:56:39.174397 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mlbrv" Jan 30 12:56:39.174776 kubelet[2707]: E0130 12:56:39.174413 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mlbrv" Jan 30 12:56:39.174908 containerd[1534]: time="2025-01-30T12:56:39.174116392Z" level=error msg="encountered an error cleaning up failed sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.174908 containerd[1534]: time="2025-01-30T12:56:39.174200593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlbrv,Uid:215e9ede-a56b-419e-b3ac-485389adba02,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.175128 kubelet[2707]: E0130 12:56:39.174449 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-mlbrv_kube-system(215e9ede-a56b-419e-b3ac-485389adba02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-mlbrv_kube-system(215e9ede-a56b-419e-b3ac-485389adba02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mlbrv" podUID="215e9ede-a56b-419e-b3ac-485389adba02" Jan 30 12:56:39.323142 kubelet[2707]: I0130 12:56:39.323105 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Jan 30 12:56:39.325254 containerd[1534]: time="2025-01-30T12:56:39.324819093Z" level=info msg="StopPodSandbox for \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\"" Jan 30 12:56:39.325254 containerd[1534]: time="2025-01-30T12:56:39.325000816Z" level=info msg="Ensure that sandbox 02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35 in task-service has been cleanup successfully" Jan 30 12:56:39.342225 kubelet[2707]: I0130 12:56:39.341586 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Jan 30 12:56:39.342373 containerd[1534]: time="2025-01-30T12:56:39.342340904Z" level=info msg="StopPodSandbox for 
\"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\"" Jan 30 12:56:39.343685 containerd[1534]: time="2025-01-30T12:56:39.342503987Z" level=info msg="Ensure that sandbox 1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d in task-service has been cleanup successfully" Jan 30 12:56:39.346112 kubelet[2707]: I0130 12:56:39.346087 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:56:39.347904 kubelet[2707]: I0130 12:56:39.347560 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:56:39.348168 containerd[1534]: time="2025-01-30T12:56:39.348106160Z" level=info msg="StopPodSandbox for \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\"" Jan 30 12:56:39.348715 containerd[1534]: time="2025-01-30T12:56:39.348373604Z" level=info msg="StopPodSandbox for \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\"" Jan 30 12:56:39.349114 containerd[1534]: time="2025-01-30T12:56:39.349002495Z" level=info msg="Ensure that sandbox ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3 in task-service has been cleanup successfully" Jan 30 12:56:39.350132 kubelet[2707]: I0130 12:56:39.350105 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:56:39.351236 containerd[1534]: time="2025-01-30T12:56:39.351005488Z" level=info msg="Ensure that sandbox 3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552 in task-service has been cleanup successfully" Jan 30 12:56:39.351236 containerd[1534]: time="2025-01-30T12:56:39.351096649Z" level=info msg="StopPodSandbox for \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\"" Jan 30 12:56:39.351879 containerd[1534]: time="2025-01-30T12:56:39.351299253Z" level=info msg="Ensure that sandbox abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4 in task-service has been cleanup successfully" Jan 30 12:56:39.392203 containerd[1534]: time="2025-01-30T12:56:39.392003768Z" level=error msg="StopPodSandbox for \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\" failed" error="failed to destroy network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.393257 kubelet[2707]: E0130 12:56:39.392309 2707 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Jan 30 12:56:39.393257 kubelet[2707]: E0130 12:56:39.392366 2707 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d"} Jan 30 12:56:39.393257 kubelet[2707]: E0130 12:56:39.392424 2707 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"cf7d4427-4304-4d39-a328-49dbc3c64e9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 12:56:39.393257 kubelet[2707]: E0130 12:56:39.392445 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf7d4427-4304-4d39-a328-49dbc3c64e9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jqtfs" podUID="cf7d4427-4304-4d39-a328-49dbc3c64e9c" Jan 30 12:56:39.408540 containerd[1534]: time="2025-01-30T12:56:39.408381120Z" level=error msg="StopPodSandbox for \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\" failed" error="failed to destroy network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.408733 kubelet[2707]: E0130 12:56:39.408627 2707 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Jan 30 12:56:39.408733 kubelet[2707]: E0130 12:56:39.408699 2707 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35"} Jan 30 12:56:39.408941 kubelet[2707]: E0130 12:56:39.408737 2707 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5238e11-c884-4750-8685-6bc2db2bcd69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 12:56:39.408941 kubelet[2707]: E0130 12:56:39.408768 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5238e11-c884-4750-8685-6bc2db2bcd69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69d5ff6878-vn947" 
podUID="f5238e11-c884-4750-8685-6bc2db2bcd69" Jan 30 12:56:39.413945 containerd[1534]: time="2025-01-30T12:56:39.413647168Z" level=error msg="StopPodSandbox for \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\" failed" error="failed to destroy network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.414357 kubelet[2707]: E0130 12:56:39.414138 2707 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:56:39.414357 kubelet[2707]: E0130 12:56:39.414325 2707 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3"} Jan 30 12:56:39.414357 kubelet[2707]: E0130 12:56:39.414366 2707 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"215e9ede-a56b-419e-b3ac-485389adba02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 12:56:39.414580 kubelet[2707]: E0130 12:56:39.414393 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"215e9ede-a56b-419e-b3ac-485389adba02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mlbrv" podUID="215e9ede-a56b-419e-b3ac-485389adba02" Jan 30 12:56:39.417159 containerd[1534]: time="2025-01-30T12:56:39.417059504Z" level=error msg="StopPodSandbox for \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\" failed" error="failed to destroy network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.417320 kubelet[2707]: E0130 12:56:39.417272 2707 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:56:39.417392 
kubelet[2707]: E0130 12:56:39.417332 2707 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552"} Jan 30 12:56:39.417392 kubelet[2707]: E0130 12:56:39.417366 2707 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8a23043-6bab-4618-a927-3b2c52ff66a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 12:56:39.417488 kubelet[2707]: E0130 12:56:39.417387 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8a23043-6bab-4618-a927-3b2c52ff66a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b884f9b9b-2wgjx" podUID="c8a23043-6bab-4618-a927-3b2c52ff66a4" Jan 30 12:56:39.422265 containerd[1534]: time="2025-01-30T12:56:39.422216270Z" level=error msg="StopPodSandbox for \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\" failed" error="failed to destroy network for sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:39.422460 kubelet[2707]: E0130 12:56:39.422429 2707 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:56:39.422541 kubelet[2707]: E0130 12:56:39.422477 2707 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4"} Jan 30 12:56:39.422541 kubelet[2707]: E0130 12:56:39.422521 2707 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c56171da-3422-46f6-bd03-661a74a240ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 12:56:39.422637 kubelet[2707]: E0130 12:56:39.422541 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c56171da-3422-46f6-bd03-661a74a240ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b884f9b9b-jnhh7" podUID="c56171da-3422-46f6-bd03-661a74a240ac" Jan 30 12:56:39.504017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4-shm.mount: Deactivated successfully. Jan 30 12:56:40.193901 containerd[1534]: time="2025-01-30T12:56:40.193861921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwjj9,Uid:41953e49-b598-4079-bbdf-ef9c599ebe81,Namespace:calico-system,Attempt:0,}" Jan 30 12:56:40.309445 containerd[1534]: time="2025-01-30T12:56:40.308319351Z" level=error msg="Failed to destroy network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:40.309445 containerd[1534]: time="2025-01-30T12:56:40.308657237Z" level=error msg="encountered an error cleaning up failed sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:40.309445 containerd[1534]: time="2025-01-30T12:56:40.308706597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwjj9,Uid:41953e49-b598-4079-bbdf-ef9c599ebe81,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:40.309724 kubelet[2707]: E0130 12:56:40.308908 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:40.309724 kubelet[2707]: E0130 12:56:40.308982 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwjj9" Jan 30 12:56:40.309724 kubelet[2707]: E0130 12:56:40.309001 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-xwjj9" Jan 30 12:56:40.309885 kubelet[2707]: E0130 12:56:40.309063 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwjj9_calico-system(41953e49-b598-4079-bbdf-ef9c599ebe81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwjj9_calico-system(41953e49-b598-4079-bbdf-ef9c599ebe81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwjj9" podUID="41953e49-b598-4079-bbdf-ef9c599ebe81" Jan 30 12:56:40.310643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b-shm.mount: Deactivated successfully. Jan 30 12:56:40.353294 kubelet[2707]: I0130 12:56:40.353162 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Jan 30 12:56:40.354124 containerd[1534]: time="2025-01-30T12:56:40.353934841Z" level=info msg="StopPodSandbox for \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\"" Jan 30 12:56:40.354285 containerd[1534]: time="2025-01-30T12:56:40.354153724Z" level=info msg="Ensure that sandbox 4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b in task-service has been cleanup successfully" Jan 30 12:56:40.386059 containerd[1534]: time="2025-01-30T12:56:40.385885792Z" level=error msg="StopPodSandbox for \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\" failed" error="failed to destroy network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:56:40.386286 kubelet[2707]: E0130 12:56:40.386227 2707 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Jan 30 12:56:40.386355 kubelet[2707]: E0130 12:56:40.386296 2707 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b"} Jan 30 12:56:40.386355 kubelet[2707]: E0130 12:56:40.386330 2707 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41953e49-b598-4079-bbdf-ef9c599ebe81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 12:56:40.386439 kubelet[2707]: E0130 12:56:40.386362 2707 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41953e49-b598-4079-bbdf-ef9c599ebe81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwjj9" podUID="41953e49-b598-4079-bbdf-ef9c599ebe81" Jan 30 12:56:41.474273 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:57406.service - OpenSSH per-connection server daemon (10.0.0.1:57406). Jan 30 12:56:41.534213 sshd[3852]: Accepted publickey for core from 10.0.0.1 port 57406 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:41.536271 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:41.544305 systemd-logind[1515]: New session 8 of user core. Jan 30 12:56:41.550440 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 12:56:41.710602 sshd[3852]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:41.715361 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:57406.service: Deactivated successfully. Jan 30 12:56:41.717958 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Jan 30 12:56:41.718053 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 12:56:41.720421 systemd-logind[1515]: Removed session 8. Jan 30 12:56:41.932340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3208960200.mount: Deactivated successfully. Jan 30 12:56:42.121629 containerd[1534]: time="2025-01-30T12:56:42.121576330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:42.122422 containerd[1534]: time="2025-01-30T12:56:42.122242820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 12:56:42.129684 containerd[1534]: time="2025-01-30T12:56:42.129635690Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:42.130423 containerd[1534]: time="2025-01-30T12:56:42.130389701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.817763132s" Jan 30 12:56:42.130463 containerd[1534]: time="2025-01-30T12:56:42.130423581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 12:56:42.130912 containerd[1534]: time="2025-01-30T12:56:42.130883108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:42.138546 containerd[1534]: time="2025-01-30T12:56:42.138432581Z" level=info msg="CreateContainer within sandbox \"5b0433c358f504d03ac5aefb86cf19cc544eedca683010f7201405560af22236\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 12:56:42.152364 containerd[1534]: time="2025-01-30T12:56:42.152317947Z" level=info msg="CreateContainer within sandbox \"5b0433c358f504d03ac5aefb86cf19cc544eedca683010f7201405560af22236\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"25378b93a70eefe649a6f44a19b0f611a25a3328cc735f5712b9afd8169f7714\"" Jan 30 12:56:42.153523 containerd[1534]: time="2025-01-30T12:56:42.153334523Z" level=info msg="StartContainer for \"25378b93a70eefe649a6f44a19b0f611a25a3328cc735f5712b9afd8169f7714\"" Jan 30 12:56:42.244172 containerd[1534]: time="2025-01-30T12:56:42.243802070Z" level=info msg="StartContainer for \"25378b93a70eefe649a6f44a19b0f611a25a3328cc735f5712b9afd8169f7714\" returns successfully" Jan 30 12:56:42.359909 kubelet[2707]: E0130 12:56:42.359821 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:42.469489 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 12:56:42.469627 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 12:56:43.362529 kubelet[2707]: E0130 12:56:43.362485 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:46.724310 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:34510.service - OpenSSH per-connection server daemon (10.0.0.1:34510). Jan 30 12:56:46.770512 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 34510 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:46.771628 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:46.778128 systemd-logind[1515]: New session 9 of user core. Jan 30 12:56:46.786357 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 12:56:46.956072 sshd[4126]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:46.962749 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:34510.service: Deactivated successfully. Jan 30 12:56:46.965022 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 12:56:46.965650 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. Jan 30 12:56:46.966656 systemd-logind[1515]: Removed session 9. 
Jan 30 12:56:49.621163 kubelet[2707]: I0130 12:56:49.621104 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:56:49.621910 kubelet[2707]: E0130 12:56:49.621734 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:49.648966 kubelet[2707]: I0130 12:56:49.648903 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9mfhh" podStartSLOduration=8.04799707 podStartE2EDuration="17.648884429s" podCreationTimestamp="2025-01-30 12:56:32 +0000 UTC" firstStartedPulling="2025-01-30 12:56:32.530816321 +0000 UTC m=+23.444726482" lastFinishedPulling="2025-01-30 12:56:42.13170368 +0000 UTC m=+33.045613841" observedRunningTime="2025-01-30 12:56:42.39218324 +0000 UTC m=+33.306093361" watchObservedRunningTime="2025-01-30 12:56:49.648884429 +0000 UTC m=+40.562794590" Jan 30 12:56:50.238078 kernel: bpftool[4236]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 12:56:50.378100 kubelet[2707]: E0130 12:56:50.378020 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:50.456165 systemd-networkd[1231]: vxlan.calico: Link UP Jan 30 12:56:50.456171 systemd-networkd[1231]: vxlan.calico: Gained carrier Jan 30 12:56:51.869280 systemd-networkd[1231]: vxlan.calico: Gained IPv6LL Jan 30 12:56:51.966328 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:34524.service - OpenSSH per-connection server daemon (10.0.0.1:34524). Jan 30 12:56:52.008366 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 34524 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:52.010001 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:52.014113 systemd-logind[1515]: New session 10 of user core. Jan 30 12:56:52.021343 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 12:56:52.185827 sshd[4356]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:52.193426 containerd[1534]: time="2025-01-30T12:56:52.193385201Z" level=info msg="StopPodSandbox for \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\"" Jan 30 12:56:52.196405 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:34534.service - OpenSSH per-connection server daemon (10.0.0.1:34534). Jan 30 12:56:52.196868 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:34524.service: Deactivated successfully. Jan 30 12:56:52.201956 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 12:56:52.211265 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Jan 30 12:56:52.212627 systemd-logind[1515]: Removed session 10. Jan 30 12:56:52.244644 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 34534 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:52.245840 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:52.259474 systemd-logind[1515]: New session 11 of user core. Jan 30 12:56:52.274532 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.306 [INFO][4387] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.307 [INFO][4387] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" iface="eth0" netns="/var/run/netns/cni-b2768b73-365a-22ce-27a1-900a447d5a91" Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.311 [INFO][4387] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" iface="eth0" netns="/var/run/netns/cni-b2768b73-365a-22ce-27a1-900a447d5a91" Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.312 [INFO][4387] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" iface="eth0" netns="/var/run/netns/cni-b2768b73-365a-22ce-27a1-900a447d5a91" Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.312 [INFO][4387] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.313 [INFO][4387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.488 [INFO][4398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" HandleID="k8s-pod-network.1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.488 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.488 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.503 [WARNING][4398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" HandleID="k8s-pod-network.1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.503 [INFO][4398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" HandleID="k8s-pod-network.1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.505 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:56:52.509022 containerd[1534]: 2025-01-30 12:56:52.507 [INFO][4387] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Jan 30 12:56:52.509851 containerd[1534]: time="2025-01-30T12:56:52.509702391Z" level=info msg="TearDown network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\" successfully" Jan 30 12:56:52.509851 containerd[1534]: time="2025-01-30T12:56:52.509734672Z" level=info msg="StopPodSandbox for \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\" returns successfully" Jan 30 12:56:52.510640 kubelet[2707]: E0130 12:56:52.510615 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:52.512333 containerd[1534]: time="2025-01-30T12:56:52.512080938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jqtfs,Uid:cf7d4427-4304-4d39-a328-49dbc3c64e9c,Namespace:kube-system,Attempt:1,}" Jan 30 12:56:52.513491 systemd[1]: run-netns-cni\x2db2768b73\x2d365a\x2d22ce\x2d27a1\x2d900a447d5a91.mount: Deactivated successfully. Jan 30 12:56:52.604801 sshd[4369]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:52.613477 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:36566.service - OpenSSH per-connection server daemon (10.0.0.1:36566). Jan 30 12:56:52.614133 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:34534.service: Deactivated successfully. Jan 30 12:56:52.619501 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Jan 30 12:56:52.624941 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 12:56:52.629611 systemd-logind[1515]: Removed session 11. Jan 30 12:56:52.680219 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 36566 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:52.682303 sshd[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:52.688280 systemd-logind[1515]: New session 12 of user core. Jan 30 12:56:52.700503 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 30 12:56:52.706471 systemd-networkd[1231]: cali00547228d38: Link UP Jan 30 12:56:52.707076 systemd-networkd[1231]: cali00547228d38: Gained carrier Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.587 [INFO][4412] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0 coredns-7db6d8ff4d- kube-system cf7d4427-4304-4d39-a328-49dbc3c64e9c 849 0 2025-01-30 12:56:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-jqtfs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali00547228d38 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jqtfs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jqtfs-" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.587 [INFO][4412] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jqtfs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.649 [INFO][4427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" HandleID="k8s-pod-network.5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.664 [INFO][4427] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" HandleID="k8s-pod-network.5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005281e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-jqtfs", "timestamp":"2025-01-30 12:56:52.649371902 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.665 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.665 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.665 [INFO][4427] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.666 [INFO][4427] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" host="localhost" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.677 [INFO][4427] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.683 [INFO][4427] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.686 [INFO][4427] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.688 [INFO][4427] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.688 [INFO][4427] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" host="localhost" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.690 [INFO][4427] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249 Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.694 [INFO][4427] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" host="localhost" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.700 [INFO][4427] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" host="localhost" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.700 [INFO][4427] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" host="localhost" Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.700 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 12:56:52.731914 containerd[1534]: 2025-01-30 12:56:52.700 [INFO][4427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" HandleID="k8s-pod-network.5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.732670 containerd[1534]: 2025-01-30 12:56:52.703 [INFO][4412] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jqtfs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf7d4427-4304-4d39-a328-49dbc3c64e9c", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-jqtfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00547228d38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:52.732670 containerd[1534]: 2025-01-30 12:56:52.703 [INFO][4412] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jqtfs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.732670 containerd[1534]: 2025-01-30 12:56:52.703 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00547228d38 ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jqtfs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.732670 containerd[1534]: 2025-01-30 12:56:52.708 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jqtfs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.732670 containerd[1534]: 2025-01-30 12:56:52.709 
[INFO][4412] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jqtfs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf7d4427-4304-4d39-a328-49dbc3c64e9c", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249", Pod:"coredns-7db6d8ff4d-jqtfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00547228d38", MAC:"fe:5c:a4:b6:93:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:52.732670 containerd[1534]: 2025-01-30 12:56:52.729 [INFO][4412] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jqtfs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0" Jan 30 12:56:52.760977 containerd[1534]: time="2025-01-30T12:56:52.760795498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:52.760977 containerd[1534]: time="2025-01-30T12:56:52.760869219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:52.760977 containerd[1534]: time="2025-01-30T12:56:52.760907660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:52.762084 containerd[1534]: time="2025-01-30T12:56:52.761980111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:52.784338 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:56:52.808584 containerd[1534]: time="2025-01-30T12:56:52.808543708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jqtfs,Uid:cf7d4427-4304-4d39-a328-49dbc3c64e9c,Namespace:kube-system,Attempt:1,} returns sandbox id \"5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249\"" Jan 30 12:56:52.810266 kubelet[2707]: E0130 12:56:52.810197 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:52.813806 containerd[1534]: time="2025-01-30T12:56:52.813324561Z" level=info msg="CreateContainer within sandbox \"5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:56:52.923012 containerd[1534]: time="2025-01-30T12:56:52.922966538Z" level=info msg="CreateContainer within sandbox \"5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9779c2ee47e11602b4201afe92861d32c99f3db9ec44a6dea5b0724d0d45c40d\"" Jan 30 12:56:52.925182 containerd[1534]: time="2025-01-30T12:56:52.925100962Z" level=info msg="StartContainer for \"9779c2ee47e11602b4201afe92861d32c99f3db9ec44a6dea5b0724d0d45c40d\"" Jan 30 12:56:52.938292 sshd[4432]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:52.942748 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:36566.service: Deactivated successfully. Jan 30 12:56:52.945968 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Jan 30 12:56:52.946314 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 12:56:52.948934 systemd-logind[1515]: Removed session 12. Jan 30 12:56:53.004004 containerd[1534]: time="2025-01-30T12:56:53.003951156Z" level=info msg="StartContainer for \"9779c2ee47e11602b4201afe92861d32c99f3db9ec44a6dea5b0724d0d45c40d\" returns successfully" Jan 30 12:56:53.192599 containerd[1534]: time="2025-01-30T12:56:53.192459039Z" level=info msg="StopPodSandbox for \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\"" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.269 [INFO][4560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.269 [INFO][4560] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" iface="eth0" netns="/var/run/netns/cni-4fa59e77-0aa2-fd60-7fe6-c0d6f32899b9" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.270 [INFO][4560] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" iface="eth0" netns="/var/run/netns/cni-4fa59e77-0aa2-fd60-7fe6-c0d6f32899b9" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.270 [INFO][4560] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" iface="eth0" netns="/var/run/netns/cni-4fa59e77-0aa2-fd60-7fe6-c0d6f32899b9" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.270 [INFO][4560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.270 [INFO][4560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.296 [INFO][4568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" HandleID="k8s-pod-network.abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.297 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.297 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.336 [WARNING][4568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" HandleID="k8s-pod-network.abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.336 [INFO][4568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" HandleID="k8s-pod-network.abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.338 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:56:53.343140 containerd[1534]: 2025-01-30 12:56:53.339 [INFO][4560] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:56:53.343830 containerd[1534]: time="2025-01-30T12:56:53.343283194Z" level=info msg="TearDown network for sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\" successfully" Jan 30 12:56:53.343830 containerd[1534]: time="2025-01-30T12:56:53.343310594Z" level=info msg="StopPodSandbox for \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\" returns successfully" Jan 30 12:56:53.344820 containerd[1534]: time="2025-01-30T12:56:53.344741490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b884f9b9b-jnhh7,Uid:c56171da-3422-46f6-bd03-661a74a240ac,Namespace:calico-apiserver,Attempt:1,}" Jan 30 12:56:53.383971 kubelet[2707]: E0130 12:56:53.383938 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:53.417239 kubelet[2707]: I0130 12:56:53.416883 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jqtfs" podStartSLOduration=30.416862951 podStartE2EDuration="30.416862951s" podCreationTimestamp="2025-01-30 12:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:53.400784017 +0000 UTC m=+44.314694178" watchObservedRunningTime="2025-01-30 12:56:53.416862951 +0000 UTC m=+44.330773112" Jan 30 12:56:53.514552 systemd[1]: run-netns-cni\x2d4fa59e77\x2d0aa2\x2dfd60\x2d7fe6\x2dc0d6f32899b9.mount: Deactivated successfully. Jan 30 12:56:53.521477 systemd-networkd[1231]: cali857f509b6e3: Link UP Jan 30 12:56:53.521812 systemd-networkd[1231]: cali857f509b6e3: Gained carrier Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.398 [INFO][4577] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0 calico-apiserver-5b884f9b9b- calico-apiserver c56171da-3422-46f6-bd03-661a74a240ac 877 0 2025-01-30 12:56:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b884f9b9b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b884f9b9b-jnhh7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali857f509b6e3 [] []}} ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-jnhh7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.398 [INFO][4577] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-jnhh7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.461 [INFO][4592] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" HandleID="k8s-pod-network.08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" 
Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.475 [INFO][4592] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" HandleID="k8s-pod-network.08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b884f9b9b-jnhh7", "timestamp":"2025-01-30 12:56:53.461907839 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.475 [INFO][4592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.475 [INFO][4592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.475 [INFO][4592] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.477 [INFO][4592] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" host="localhost" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.483 [INFO][4592] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.488 [INFO][4592] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.490 [INFO][4592] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.494 [INFO][4592] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.494 [INFO][4592] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" host="localhost" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.496 [INFO][4592] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.503 [INFO][4592] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" host="localhost" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.517 [INFO][4592] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" host="localhost" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.517 [INFO][4592] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" host="localhost" Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.517 
[INFO][4592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:56:53.539415 containerd[1534]: 2025-01-30 12:56:53.517 [INFO][4592] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" HandleID="k8s-pod-network.08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.540273 containerd[1534]: 2025-01-30 12:56:53.519 [INFO][4577] cni-plugin/k8s.go 386: Populated endpoint ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-jnhh7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0", GenerateName:"calico-apiserver-5b884f9b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c56171da-3422-46f6-bd03-661a74a240ac", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b884f9b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b884f9b9b-jnhh7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali857f509b6e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:53.540273 containerd[1534]: 2025-01-30 12:56:53.520 [INFO][4577] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-jnhh7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.540273 containerd[1534]: 2025-01-30 12:56:53.520 [INFO][4577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali857f509b6e3 ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-jnhh7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.540273 containerd[1534]: 2025-01-30 12:56:53.521 [INFO][4577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-jnhh7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.540273 containerd[1534]: 2025-01-30 12:56:53.522 [INFO][4577] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-jnhh7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0", GenerateName:"calico-apiserver-5b884f9b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c56171da-3422-46f6-bd03-661a74a240ac", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b884f9b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb", Pod:"calico-apiserver-5b884f9b9b-jnhh7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali857f509b6e3", MAC:"c6:6c:06:a8:70:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:53.540273 containerd[1534]: 2025-01-30 12:56:53.536 [INFO][4577] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-jnhh7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:56:53.564586 containerd[1534]: time="2025-01-30T12:56:53.564468391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:53.564586 containerd[1534]: time="2025-01-30T12:56:53.564543352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:53.564586 containerd[1534]: time="2025-01-30T12:56:53.564554912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:53.564850 containerd[1534]: time="2025-01-30T12:56:53.564663633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:53.587496 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:56:53.611985 containerd[1534]: time="2025-01-30T12:56:53.611932425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b884f9b9b-jnhh7,Uid:c56171da-3422-46f6-bd03-661a74a240ac,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb\"" Jan 30 12:56:53.615350 containerd[1534]: time="2025-01-30T12:56:53.615284421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 12:56:53.917181 systemd-networkd[1231]: cali00547228d38: Gained IPv6LL Jan 30 12:56:54.192561 containerd[1534]: time="2025-01-30T12:56:54.192493990Z" level=info msg="StopPodSandbox for \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\"" Jan 30 12:56:54.192703 containerd[1534]: time="2025-01-30T12:56:54.192637431Z" level=info msg="StopPodSandbox for \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\"" Jan 30 12:56:54.192837 containerd[1534]: time="2025-01-30T12:56:54.192746393Z" level=info msg="StopPodSandbox for \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\"" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.263 [INFO][4709] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.263 [INFO][4709] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" iface="eth0" netns="/var/run/netns/cni-c41b03f3-3fc2-a660-6a09-739f39b2a434" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.264 [INFO][4709] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" iface="eth0" netns="/var/run/netns/cni-c41b03f3-3fc2-a660-6a09-739f39b2a434" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.267 [INFO][4709] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" iface="eth0" netns="/var/run/netns/cni-c41b03f3-3fc2-a660-6a09-739f39b2a434" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.267 [INFO][4709] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.267 [INFO][4709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.293 [INFO][4736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" HandleID="k8s-pod-network.3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.293 [INFO][4736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.293 [INFO][4736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.317 [WARNING][4736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" HandleID="k8s-pod-network.3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.318 [INFO][4736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" HandleID="k8s-pod-network.3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.321 [INFO][4736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:56:54.331124 containerd[1534]: 2025-01-30 12:56:54.324 [INFO][4709] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:56:54.333431 systemd[1]: run-netns-cni\x2dc41b03f3\x2d3fc2\x2da660\x2d6a09\x2d739f39b2a434.mount: Deactivated successfully. Jan 30 12:56:54.343053 containerd[1534]: time="2025-01-30T12:56:54.338726739Z" level=info msg="TearDown network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\" successfully" Jan 30 12:56:54.343053 containerd[1534]: time="2025-01-30T12:56:54.338767419Z" level=info msg="StopPodSandbox for \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\" returns successfully" Jan 30 12:56:54.343053 containerd[1534]: time="2025-01-30T12:56:54.339458267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b884f9b9b-2wgjx,Uid:c8a23043-6bab-4618-a927-3b2c52ff66a4,Namespace:calico-apiserver,Attempt:1,}" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.255 [INFO][4704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.255 [INFO][4704] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" iface="eth0" netns="/var/run/netns/cni-8d8ab686-9f66-b908-3005-2ff7c726224e" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.255 [INFO][4704] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" iface="eth0" netns="/var/run/netns/cni-8d8ab686-9f66-b908-3005-2ff7c726224e" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.255 [INFO][4704] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" iface="eth0" netns="/var/run/netns/cni-8d8ab686-9f66-b908-3005-2ff7c726224e" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.255 [INFO][4704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.255 [INFO][4704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.306 [INFO][4731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" HandleID="k8s-pod-network.02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.307 [INFO][4731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.321 [INFO][4731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.347 [WARNING][4731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" HandleID="k8s-pod-network.02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.347 [INFO][4731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" HandleID="k8s-pod-network.02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.350 [INFO][4731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:56:54.359923 containerd[1534]: 2025-01-30 12:56:54.355 [INFO][4704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Jan 30 12:56:54.361192 containerd[1534]: time="2025-01-30T12:56:54.361152136Z" level=info msg="TearDown network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\" successfully" Jan 30 12:56:54.361192 containerd[1534]: time="2025-01-30T12:56:54.361189457Z" level=info msg="StopPodSandbox for \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\" returns successfully" Jan 30 12:56:54.365002 systemd[1]: run-netns-cni\x2d8d8ab686\x2d9f66\x2db908\x2d3005\x2d2ff7c726224e.mount: Deactivated successfully. Jan 30 12:56:54.366752 containerd[1534]: time="2025-01-30T12:56:54.366697835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69d5ff6878-vn947,Uid:f5238e11-c884-4750-8685-6bc2db2bcd69,Namespace:calico-system,Attempt:1,}" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.284 [INFO][4702] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.284 [INFO][4702] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" iface="eth0" netns="/var/run/netns/cni-a53a5098-513a-d702-d3b5-1d5364052f3d" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.284 [INFO][4702] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" iface="eth0" netns="/var/run/netns/cni-a53a5098-513a-d702-d3b5-1d5364052f3d" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.285 [INFO][4702] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" iface="eth0" netns="/var/run/netns/cni-a53a5098-513a-d702-d3b5-1d5364052f3d" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.285 [INFO][4702] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.285 [INFO][4702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.320 [INFO][4743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" HandleID="k8s-pod-network.ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.320 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.350 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.366 [WARNING][4743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" HandleID="k8s-pod-network.ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.366 [INFO][4743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" HandleID="k8s-pod-network.ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.369 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:56:54.374374 containerd[1534]: 2025-01-30 12:56:54.371 [INFO][4702] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:56:54.374763 containerd[1534]: time="2025-01-30T12:56:54.374486318Z" level=info msg="TearDown network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\" successfully" Jan 30 12:56:54.374763 containerd[1534]: time="2025-01-30T12:56:54.374511998Z" level=info msg="StopPodSandbox for \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\" returns successfully" Jan 30 12:56:54.374881 kubelet[2707]: E0130 12:56:54.374855 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:54.375587 containerd[1534]: time="2025-01-30T12:56:54.375386087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlbrv,Uid:215e9ede-a56b-419e-b3ac-485389adba02,Namespace:kube-system,Attempt:1,}" Jan 30 12:56:54.389560 kubelet[2707]: E0130 12:56:54.389524 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:54.516917 systemd[1]: run-netns-cni\x2da53a5098\x2d513a\x2dd702\x2dd3b5\x2d1d5364052f3d.mount: Deactivated successfully. Jan 30 12:56:54.600562 systemd-networkd[1231]: cali0e89bb5c62d: Link UP Jan 30 12:56:54.600704 systemd-networkd[1231]: cali0e89bb5c62d: Gained carrier Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.466 [INFO][4766] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0 coredns-7db6d8ff4d- kube-system 215e9ede-a56b-419e-b3ac-485389adba02 902 0 2025-01-30 12:56:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-mlbrv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e89bb5c62d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlbrv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mlbrv-" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.466 [INFO][4766] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlbrv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.511 [INFO][4803] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" HandleID="k8s-pod-network.184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.533 [INFO][4803] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" HandleID="k8s-pod-network.184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003616d0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-mlbrv", "timestamp":"2025-01-30 12:56:54.511764812 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.533 [INFO][4803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.533 [INFO][4803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.533 [INFO][4803] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.537 [INFO][4803] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" host="localhost" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.543 [INFO][4803] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.552 [INFO][4803] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.554 [INFO][4803] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.558 [INFO][4803] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.558 [INFO][4803] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" host="localhost" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.560 [INFO][4803] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61 Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.566 [INFO][4803] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" host="localhost" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.576 [INFO][4803] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" host="localhost" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.576 [INFO][4803] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" host="localhost" Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.576 [INFO][4803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 12:56:54.620482 containerd[1534]: 2025-01-30 12:56:54.576 [INFO][4803] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" HandleID="k8s-pod-network.184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.621011 containerd[1534]: 2025-01-30 12:56:54.589 [INFO][4766] cni-plugin/k8s.go 386: Populated endpoint ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlbrv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"215e9ede-a56b-419e-b3ac-485389adba02", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-mlbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e89bb5c62d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:54.621011 containerd[1534]: 2025-01-30 12:56:54.589 [INFO][4766] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlbrv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.621011 containerd[1534]: 2025-01-30 12:56:54.589 [INFO][4766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e89bb5c62d ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlbrv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.621011 containerd[1534]: 2025-01-30 12:56:54.597 [INFO][4766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlbrv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.621011 containerd[1534]: 2025-01-30 12:56:54.599 
[INFO][4766] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlbrv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"215e9ede-a56b-419e-b3ac-485389adba02", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61", Pod:"coredns-7db6d8ff4d-mlbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e89bb5c62d", MAC:"b2:93:19:d1:9f:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:54.621011 containerd[1534]: 2025-01-30 12:56:54.615 [INFO][4766] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlbrv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:56:54.656777 systemd-networkd[1231]: cali891f8a57a80: Link UP Jan 30 12:56:54.657911 systemd-networkd[1231]: cali891f8a57a80: Gained carrier Jan 30 12:56:54.674050 containerd[1534]: time="2025-01-30T12:56:54.672918759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:54.674050 containerd[1534]: time="2025-01-30T12:56:54.673385444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:54.674050 containerd[1534]: time="2025-01-30T12:56:54.673410364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:54.674050 containerd[1534]: time="2025-01-30T12:56:54.673525045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.466 [INFO][4755] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0 calico-apiserver-5b884f9b9b- calico-apiserver c8a23043-6bab-4618-a927-3b2c52ff66a4 901 0 2025-01-30 12:56:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b884f9b9b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b884f9b9b-2wgjx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali891f8a57a80 [] []}} ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-2wgjx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.467 [INFO][4755] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-2wgjx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.525 [INFO][4797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" HandleID="k8s-pod-network.38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.542 [INFO][4797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" HandleID="k8s-pod-network.38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000292090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b884f9b9b-2wgjx", "timestamp":"2025-01-30 12:56:54.525270355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.542 [INFO][4797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.576 [INFO][4797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.576 [INFO][4797] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.579 [INFO][4797] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" host="localhost" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.586 [INFO][4797] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.593 [INFO][4797] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.597 [INFO][4797] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.605 [INFO][4797] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.605 [INFO][4797] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" host="localhost" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.611 [INFO][4797] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.621 [INFO][4797] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" host="localhost" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.648 [INFO][4797] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" host="localhost" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.649 [INFO][4797] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" host="localhost" Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.649 [INFO][4797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
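All of these endpoints draw from the same affine block, 192.168.88.128/26, i.e. 192.168.88.128 through 192.168.88.191 (2^(32-26) = 64 addresses), and the log shows them claimed sequentially: .130 and .132 for the two apiserver pods, .131 for coredns, with .133 and .134 following below. The block bounds are easy to double-check with the standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - p.Bits()) // 64 addresses in a /26
	last := p.Addr()
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	fmt.Printf("block %s: %d addresses, %s through %s\n", p, size, p.Addr(), last)
	// block 192.168.88.128/26: 64 addresses, 192.168.88.128 through 192.168.88.191
}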
Jan 30 12:56:54.684498 containerd[1534]: 2025-01-30 12:56:54.649 [INFO][4797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" HandleID="k8s-pod-network.38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.685161 containerd[1534]: 2025-01-30 12:56:54.653 [INFO][4755] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-2wgjx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0", GenerateName:"calico-apiserver-5b884f9b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8a23043-6bab-4618-a927-3b2c52ff66a4", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b884f9b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b884f9b9b-2wgjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali891f8a57a80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:54.685161 containerd[1534]: 2025-01-30 12:56:54.653 [INFO][4755] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-2wgjx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.685161 containerd[1534]: 2025-01-30 12:56:54.653 [INFO][4755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali891f8a57a80 ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-2wgjx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.685161 containerd[1534]: 2025-01-30 12:56:54.659 [INFO][4755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-2wgjx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.685161 containerd[1534]: 2025-01-30 12:56:54.661 [INFO][4755] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-2wgjx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0", GenerateName:"calico-apiserver-5b884f9b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8a23043-6bab-4618-a927-3b2c52ff66a4", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b884f9b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe", Pod:"calico-apiserver-5b884f9b9b-2wgjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali891f8a57a80", MAC:"b2:98:21:80:aa:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:54.685161 containerd[1534]: 2025-01-30 12:56:54.676 [INFO][4755] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe" Namespace="calico-apiserver" Pod="calico-apiserver-5b884f9b9b-2wgjx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:56:54.718747 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:56:54.740789 systemd-networkd[1231]: cali69ddb6244a6: Link UP Jan 30 12:56:54.741970 systemd-networkd[1231]: cali69ddb6244a6: Gained carrier Jan 30 12:56:54.752053 containerd[1534]: time="2025-01-30T12:56:54.751964836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlbrv,Uid:215e9ede-a56b-419e-b3ac-485389adba02,Namespace:kube-system,Attempt:1,} returns sandbox id \"184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61\"" Jan 30 12:56:54.758304 kubelet[2707]: E0130 12:56:54.755151 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:54.759554 containerd[1534]: time="2025-01-30T12:56:54.759345914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:54.759554 containerd[1534]: time="2025-01-30T12:56:54.759460675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:54.759554 containerd[1534]: time="2025-01-30T12:56:54.759474395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:54.760562 containerd[1534]: time="2025-01-30T12:56:54.760472526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:54.762817 containerd[1534]: time="2025-01-30T12:56:54.762774590Z" level=info msg="CreateContainer within sandbox \"184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.489 [INFO][4779] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0 calico-kube-controllers-69d5ff6878- calico-system f5238e11-c884-4750-8685-6bc2db2bcd69 900 0 2025-01-30 12:56:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69d5ff6878 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69d5ff6878-vn947 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali69ddb6244a6 [] []}} ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Namespace="calico-system" Pod="calico-kube-controllers-69d5ff6878-vn947" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.490 [INFO][4779] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Namespace="calico-system" Pod="calico-kube-controllers-69d5ff6878-vn947" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.547 [INFO][4811] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" HandleID="k8s-pod-network.69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.559 [INFO][4811] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" HandleID="k8s-pod-network.69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69d5ff6878-vn947", "timestamp":"2025-01-30 12:56:54.547275188 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.559 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.649 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
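The kubelet dns.go:153 events recurring through this stretch are a warning, not a failure: kubelet caps the pod resolver configuration at three nameservers, mirroring the classic glibc MAXNS limit, so when the node's resolv.conf yields more it drops the extras and logs the line it applied (here 1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of that truncation; the fourth server in the example is hypothetical, since the log does not show which entry was dropped:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // matches the classic resolver limit (MAXNS)

// clampNameservers keeps the first maxNameservers entries, the behaviour
// the kubelet warning describes ("some nameservers have been omitted").
func clampNameservers(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// 8.8.4.4 is a hypothetical fourth entry for illustration only.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	kept, truncated := clampNameservers(servers)
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}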
Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.649 [INFO][4811] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.653 [INFO][4811] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" host="localhost" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.662 [INFO][4811] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.690 [INFO][4811] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.697 [INFO][4811] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.700 [INFO][4811] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.701 [INFO][4811] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" host="localhost" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.706 [INFO][4811] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92 Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.718 [INFO][4811] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" host="localhost" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.730 [INFO][4811] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" host="localhost" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.730 [INFO][4811] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" host="localhost" Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.730 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
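The interleaved timestamps make the host-wide IPAM lock's serialisation visible: [4803] releases at 12:56:54.576 and [4797] acquires in the same instant, while [4811], which asked at .559, only gets the lock at .649 once [4797] is done. Cross-process exclusion like this is commonly built on flock(2) over a well-known file; a minimal sketch under that assumption (the lock path is illustrative, not Calico's actual file):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostWideLock runs fn while holding an exclusive flock on lockPath,
// blocking until any other process releases it, as the log's
// "About to acquire" / "Acquired" / "Released" triplets suggest.
func withHostWideLock(lockPath string, fn func() error) error {
	f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn()
}

func main() {
	// /tmp/ipam.lock is a stand-in path for this sketch.
	err := withHostWideLock("/tmp/ipam.lock", func() error {
		fmt.Println("assigning addresses under the lock")
		return nil
	})
	if err != nil {
		fmt.Println("lock error:", err)
	}
}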
Jan 30 12:56:54.768344 containerd[1534]: 2025-01-30 12:56:54.730 [INFO][4811] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" HandleID="k8s-pod-network.69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.768894 containerd[1534]: 2025-01-30 12:56:54.736 [INFO][4779] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Namespace="calico-system" Pod="calico-kube-controllers-69d5ff6878-vn947" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0", GenerateName:"calico-kube-controllers-69d5ff6878-", Namespace:"calico-system", SelfLink:"", UID:"f5238e11-c884-4750-8685-6bc2db2bcd69", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69d5ff6878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69d5ff6878-vn947", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69ddb6244a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:54.768894 containerd[1534]: 2025-01-30 12:56:54.737 [INFO][4779] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Namespace="calico-system" Pod="calico-kube-controllers-69d5ff6878-vn947" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.768894 containerd[1534]: 2025-01-30 12:56:54.737 [INFO][4779] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69ddb6244a6 ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Namespace="calico-system" Pod="calico-kube-controllers-69d5ff6878-vn947" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.768894 containerd[1534]: 2025-01-30 12:56:54.740 [INFO][4779] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Namespace="calico-system" Pod="calico-kube-controllers-69d5ff6878-vn947" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.768894 containerd[1534]: 2025-01-30 12:56:54.743 [INFO][4779] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Namespace="calico-system" Pod="calico-kube-controllers-69d5ff6878-vn947" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0", GenerateName:"calico-kube-controllers-69d5ff6878-", Namespace:"calico-system", SelfLink:"", UID:"f5238e11-c884-4750-8685-6bc2db2bcd69", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69d5ff6878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92", Pod:"calico-kube-controllers-69d5ff6878-vn947", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69ddb6244a6", MAC:"36:b0:dc:7c:af:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:54.768894 containerd[1534]: 2025-01-30 12:56:54.754 [INFO][4779] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92" Namespace="calico-system" Pod="calico-kube-controllers-69d5ff6878-vn947" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0" Jan 30 12:56:54.798942 containerd[1534]: time="2025-01-30T12:56:54.798445488Z" level=info msg="CreateContainer within sandbox \"184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e34e5e1ed29c985f1a8671994321e0157d2dd3279b74b88329dc6833a2e39f67\"" Jan 30 12:56:54.800304 containerd[1534]: time="2025-01-30T12:56:54.800265108Z" level=info msg="StartContainer for \"e34e5e1ed29c985f1a8671994321e0157d2dd3279b74b88329dc6833a2e39f67\"" Jan 30 12:56:54.803211 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:56:54.807938 containerd[1534]: time="2025-01-30T12:56:54.806488973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:54.807938 containerd[1534]: time="2025-01-30T12:56:54.806583574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:54.807938 containerd[1534]: time="2025-01-30T12:56:54.806598975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:54.807938 containerd[1534]: time="2025-01-30T12:56:54.806766536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:54.845214 containerd[1534]: time="2025-01-30T12:56:54.845012781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b884f9b9b-2wgjx,Uid:c8a23043-6bab-4618-a927-3b2c52ff66a4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe\"" Jan 30 12:56:54.850771 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:56:54.871572 containerd[1534]: time="2025-01-30T12:56:54.871517822Z" level=info msg="StartContainer for \"e34e5e1ed29c985f1a8671994321e0157d2dd3279b74b88329dc6833a2e39f67\" returns successfully" Jan 30 12:56:54.884162 containerd[1534]: time="2025-01-30T12:56:54.883903873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69d5ff6878-vn947,Uid:f5238e11-c884-4750-8685-6bc2db2bcd69,Namespace:calico-system,Attempt:1,} returns sandbox id \"69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92\"" Jan 30 12:56:55.193457 containerd[1534]: time="2025-01-30T12:56:55.193368667Z" level=info msg="StopPodSandbox for \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\"" Jan 30 12:56:55.244239 containerd[1534]: time="2025-01-30T12:56:55.244191714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:55.246074 containerd[1534]: time="2025-01-30T12:56:55.244911081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 12:56:55.246074 containerd[1534]: time="2025-01-30T12:56:55.245616328Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:55.248330 containerd[1534]: time="2025-01-30T12:56:55.248291476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.632942254s" Jan 30 12:56:55.248330 containerd[1534]: time="2025-01-30T12:56:55.248329437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 12:56:55.248884 containerd[1534]: time="2025-01-30T12:56:55.248850322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:55.250917 containerd[1534]: time="2025-01-30T12:56:55.250888263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 12:56:55.252374 containerd[1534]: time="2025-01-30T12:56:55.252336558Z" level=info msg="CreateContainer within sandbox \"08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 12:56:55.263867 containerd[1534]: time="2025-01-30T12:56:55.263602155Z" level=info msg="CreateContainer within sandbox \"08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bdd37921c8ae1193dbba6da3bfd5c9a94f7c5583c5a4f0c904f54dc6573f5c36\"" Jan 30 12:56:55.268251 containerd[1534]: time="2025-01-30T12:56:55.268204003Z" level=info msg="StartContainer for \"bdd37921c8ae1193dbba6da3bfd5c9a94f7c5583c5a4f0c904f54dc6573f5c36\"" Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.256 [INFO][5044] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.256 [INFO][5044] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" iface="eth0" netns="/var/run/netns/cni-308ffede-8641-dc47-768f-423e0de6ffdb" Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.256 [INFO][5044] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" iface="eth0" netns="/var/run/netns/cni-308ffede-8641-dc47-768f-423e0de6ffdb" Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.257 [INFO][5044] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" iface="eth0" netns="/var/run/netns/cni-308ffede-8641-dc47-768f-423e0de6ffdb" Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.257 [INFO][5044] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.257 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.280 [INFO][5056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" HandleID="k8s-pod-network.4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0" Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.280 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.280 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.289 [WARNING][5056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" HandleID="k8s-pod-network.4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0"
Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.289 [INFO][5056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" HandleID="k8s-pod-network.4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0" Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.294 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:56:55.299863 containerd[1534]: 2025-01-30 12:56:55.296 [INFO][5044] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Jan 30 12:56:55.299863 containerd[1534]: time="2025-01-30T12:56:55.299049202Z" level=info msg="TearDown network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\" successfully" Jan 30 12:56:55.299863 containerd[1534]: time="2025-01-30T12:56:55.299079842Z" level=info msg="StopPodSandbox for \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\" returns successfully" Jan 30 12:56:55.299863 containerd[1534]: time="2025-01-30T12:56:55.299690729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwjj9,Uid:41953e49-b598-4079-bbdf-ef9c599ebe81,Namespace:calico-system,Attempt:1,}" Jan 30 12:56:55.343655 containerd[1534]: time="2025-01-30T12:56:55.343604184Z" level=info msg="StartContainer for \"bdd37921c8ae1193dbba6da3bfd5c9a94f7c5583c5a4f0c904f54dc6573f5c36\" returns successfully" Jan 30 12:56:55.390476 systemd-networkd[1231]: cali857f509b6e3: Gained IPv6LL Jan 30 12:56:55.396445 kubelet[2707]: E0130 12:56:55.396413 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:55.405280 kubelet[2707]: E0130 12:56:55.405182 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:55.423264 kubelet[2707]: I0130 12:56:55.423179 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mlbrv" podStartSLOduration=32.423161048 podStartE2EDuration="32.423161048s" podCreationTimestamp="2025-01-30 12:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:55.422738724 +0000 UTC m=+46.336648885" watchObservedRunningTime="2025-01-30 12:56:55.423161048 +0000 UTC m=+46.337071209" Jan 30 12:56:55.460629 kubelet[2707]: I0130 12:56:55.460499 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b884f9b9b-jnhh7" podStartSLOduration=21.823493738 podStartE2EDuration="23.460129471s" podCreationTimestamp="2025-01-30 12:56:32 +0000 UTC" firstStartedPulling="2025-01-30 12:56:53.613168799 +0000 UTC m=+44.527078960" lastFinishedPulling="2025-01-30 12:56:55.249804532 +0000 UTC m=+46.163714693" observedRunningTime="2025-01-30 12:56:55.459446184 +0000 UTC m=+46.373356305" watchObservedRunningTime="2025-01-30 12:56:55.460129471 +0000 UTC m=+46.374039632"
Jan 30 12:56:55.493763 systemd-networkd[1231]: cali0a66901c117: Link UP Jan 30 12:56:55.495370 containerd[1534]: time="2025-01-30T12:56:55.494599589Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:55.495001 systemd-networkd[1231]: cali0a66901c117: Gained carrier Jan 30 12:56:55.503133 containerd[1534]: time="2025-01-30T12:56:55.502430470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 12:56:55.506535 containerd[1534]: time="2025-01-30T12:56:55.506269069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 255.340246ms" Jan 30 12:56:55.506535 containerd[1534]: time="2025-01-30T12:56:55.506315910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 12:56:55.511115 containerd[1534]: time="2025-01-30T12:56:55.509091059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.358 [INFO][5086] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xwjj9-eth0 csi-node-driver- calico-system 41953e49-b598-4079-bbdf-ef9c599ebe81 934 0 2025-01-30 12:56:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xwjj9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0a66901c117 [] []}} ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Namespace="calico-system" Pod="csi-node-driver-xwjj9" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwjj9-" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.358 [INFO][5086] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Namespace="calico-system" Pod="csi-node-driver-xwjj9" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwjj9-eth0" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.393 [INFO][5109] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" HandleID="k8s-pod-network.e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.420 [INFO][5109] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" HandleID="k8s-pod-network.e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000431d70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xwjj9", "timestamp":"2025-01-30 12:56:55.391775603 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.420 [INFO][5109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.420 [INFO][5109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.420 [INFO][5109] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.426 [INFO][5109] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" host="localhost" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.433 [INFO][5109] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.447 [INFO][5109] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.454 [INFO][5109] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.457 [INFO][5109] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.457 [INFO][5109] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" host="localhost" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.465 [INFO][5109] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818 Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.471 [INFO][5109] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" host="localhost" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.481 [INFO][5109] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" host="localhost" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.481 [INFO][5109] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" host="localhost" Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.481 [INFO][5109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 12:56:55.511115 containerd[1534]: 2025-01-30 12:56:55.481 [INFO][5109] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" HandleID="k8s-pod-network.e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0" Jan 30 12:56:55.511676 containerd[1534]: 2025-01-30 12:56:55.489 [INFO][5086] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Namespace="calico-system" Pod="csi-node-driver-xwjj9" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwjj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xwjj9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41953e49-b598-4079-bbdf-ef9c599ebe81", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xwjj9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a66901c117", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:55.511676 containerd[1534]: 2025-01-30 12:56:55.490 [INFO][5086] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Namespace="calico-system" Pod="csi-node-driver-xwjj9" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwjj9-eth0" Jan 30 12:56:55.511676 containerd[1534]: 2025-01-30 12:56:55.490 [INFO][5086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a66901c117 ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Namespace="calico-system" Pod="csi-node-driver-xwjj9" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwjj9-eth0" Jan 30 12:56:55.511676 containerd[1534]: 2025-01-30 12:56:55.491 [INFO][5086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Namespace="calico-system" Pod="csi-node-driver-xwjj9" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwjj9-eth0" Jan 30 12:56:55.511676 containerd[1534]: 2025-01-30 12:56:55.492 [INFO][5086] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Namespace="calico-system" Pod="csi-node-driver-xwjj9" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwjj9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xwjj9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41953e49-b598-4079-bbdf-ef9c599ebe81", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818", Pod:"csi-node-driver-xwjj9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a66901c117", MAC:"86:a4:22:e2:f3:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:56:55.511676 containerd[1534]: 2025-01-30 12:56:55.505 [INFO][5086] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818" Namespace="calico-system" Pod="csi-node-driver-xwjj9" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwjj9-eth0" Jan 30 12:56:55.520309 systemd[1]: run-containerd-runc-k8s.io-38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe-runc.KFyAfr.mount: Deactivated successfully. Jan 30 12:56:55.520470 systemd[1]: run-netns-cni\x2d308ffede\x2d8641\x2ddc47\x2d768f\x2d423e0de6ffdb.mount: Deactivated successfully. Jan 30 12:56:55.529496 containerd[1534]: time="2025-01-30T12:56:55.529445470Z" level=info msg="CreateContainer within sandbox \"38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 12:56:55.552899 containerd[1534]: time="2025-01-30T12:56:55.552852872Z" level=info msg="CreateContainer within sandbox \"38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d06f0584fbeb2a95698c14d807ede595f3bba2c294735d3a453d8dd5d1e0b2e2\"" Jan 30 12:56:55.554132 containerd[1534]: time="2025-01-30T12:56:55.553603440Z" level=info msg="StartContainer for \"d06f0584fbeb2a95698c14d807ede595f3bba2c294735d3a453d8dd5d1e0b2e2\"" Jan 30 12:56:55.570475 containerd[1534]: time="2025-01-30T12:56:55.570365014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:55.570994 containerd[1534]: time="2025-01-30T12:56:55.570949860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:55.571250 containerd[1534]: time="2025-01-30T12:56:55.571106101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:55.571344 containerd[1534]: time="2025-01-30T12:56:55.571283743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:55.621739 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:56:55.645644 containerd[1534]: time="2025-01-30T12:56:55.645603033Z" level=info msg="StartContainer for \"d06f0584fbeb2a95698c14d807ede595f3bba2c294735d3a453d8dd5d1e0b2e2\" returns successfully" Jan 30 12:56:55.647474 containerd[1534]: time="2025-01-30T12:56:55.647441692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwjj9,Uid:41953e49-b598-4079-bbdf-ef9c599ebe81,Namespace:calico-system,Attempt:1,} returns sandbox id \"e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818\"" Jan 30 12:56:55.709300 systemd-networkd[1231]: cali891f8a57a80: Gained IPv6LL Jan 30 12:56:55.837672 systemd-networkd[1231]: cali69ddb6244a6: Gained IPv6LL Jan 30 12:56:55.901626 systemd-networkd[1231]: cali0e89bb5c62d: Gained IPv6LL Jan 30 12:56:56.424063 kubelet[2707]: I0130 12:56:56.423050 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:56:56.425042 kubelet[2707]: E0130 12:56:56.424660 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:56.439533 kubelet[2707]: I0130 12:56:56.438969 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b884f9b9b-2wgjx" podStartSLOduration=23.779589452 podStartE2EDuration="24.43895152s" podCreationTimestamp="2025-01-30 12:56:32 +0000 UTC" firstStartedPulling="2025-01-30 12:56:54.849071264 +0000 UTC m=+45.762981425" lastFinishedPulling="2025-01-30 12:56:55.508433372 +0000 UTC m=+46.422343493" observedRunningTime="2025-01-30 12:56:56.438450715 +0000 UTC m=+47.352360876" watchObservedRunningTime="2025-01-30 12:56:56.43895152 +0000 UTC m=+47.352861681" Jan 30 12:56:56.819921 containerd[1534]: time="2025-01-30T12:56:56.819865425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:56.820692 containerd[1534]: time="2025-01-30T12:56:56.820641473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 12:56:56.821423 containerd[1534]: time="2025-01-30T12:56:56.821383640Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:56.823474 containerd[1534]: time="2025-01-30T12:56:56.823437581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:56.824133 containerd[1534]: time="2025-01-30T12:56:56.824097948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.314963609s" Jan 30 12:56:56.824168 containerd[1534]: time="2025-01-30T12:56:56.824133148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 12:56:56.825745 containerd[1534]: time="2025-01-30T12:56:56.825720964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 12:56:56.833992 containerd[1534]: time="2025-01-30T12:56:56.833939328Z" level=info msg="CreateContainer within sandbox \"69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 12:56:56.850508 containerd[1534]: time="2025-01-30T12:56:56.850456055Z" level=info msg="CreateContainer within sandbox \"69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"922cbe4a6b0be70637d164e20546f7597d45d58e5ee1224f93b53d1fc9932af0\"" Jan 30 12:56:56.851238 containerd[1534]: time="2025-01-30T12:56:56.851203223Z" level=info msg="StartContainer for \"922cbe4a6b0be70637d164e20546f7597d45d58e5ee1224f93b53d1fc9932af0\"" Jan 30 12:56:56.862294 systemd-networkd[1231]: cali0a66901c117: Gained IPv6LL Jan 30 12:56:56.915392 containerd[1534]: time="2025-01-30T12:56:56.915337873Z" level=info msg="StartContainer for \"922cbe4a6b0be70637d164e20546f7597d45d58e5ee1224f93b53d1fc9932af0\" returns successfully" Jan 30 12:56:57.428206 kubelet[2707]: I0130 12:56:57.427524 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:56:57.428621 kubelet[2707]: E0130 12:56:57.428376 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:57.441176 kubelet[2707]: I0130 12:56:57.441110 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69d5ff6878-vn947" podStartSLOduration=23.503102828 podStartE2EDuration="25.441079039s" podCreationTimestamp="2025-01-30 12:56:32 +0000 UTC" firstStartedPulling="2025-01-30 12:56:54.887009186 +0000 UTC m=+45.800919347" lastFinishedPulling="2025-01-30 12:56:56.824985397 +0000 UTC m=+47.738895558" observedRunningTime="2025-01-30 12:56:57.439405742 +0000 UTC m=+48.353315903" watchObservedRunningTime="2025-01-30 12:56:57.441079039 +0000 UTC m=+48.354989200" Jan 30 12:56:57.728008 containerd[1534]: time="2025-01-30T12:56:57.727938452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:57.728833 containerd[1534]: time="2025-01-30T12:56:57.728799700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 12:56:57.730029 containerd[1534]: time="2025-01-30T12:56:57.729995512Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:57.732661 containerd[1534]: time="2025-01-30T12:56:57.732603618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:57.733442 containerd[1534]: time="2025-01-30T12:56:57.733405706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 907.653702ms" Jan 30 12:56:57.733442 containerd[1534]: time="2025-01-30T12:56:57.733442427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 12:56:57.735694 containerd[1534]: time="2025-01-30T12:56:57.735596448Z" level=info msg="CreateContainer within sandbox \"e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 12:56:57.751764 containerd[1534]: time="2025-01-30T12:56:57.751712488Z" level=info msg="CreateContainer within sandbox \"e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"706bd73a5a4c7ac09adbe6cb37db6e02c564b4010043aaba1b16c9544a8d8c00\"" Jan 30 12:56:57.752539 containerd[1534]: time="2025-01-30T12:56:57.752240414Z" level=info msg="StartContainer for \"706bd73a5a4c7ac09adbe6cb37db6e02c564b4010043aaba1b16c9544a8d8c00\"" Jan 30 12:56:57.804978 containerd[1534]: time="2025-01-30T12:56:57.804923817Z" level=info msg="StartContainer for \"706bd73a5a4c7ac09adbe6cb37db6e02c564b4010043aaba1b16c9544a8d8c00\" returns successfully" Jan 30 12:56:57.806320 containerd[1534]: time="2025-01-30T12:56:57.806294191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 12:56:57.952336 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:36572.service - OpenSSH per-connection server daemon (10.0.0.1:36572). Jan 30 12:56:57.996686 sshd[5304]: Accepted publickey for core from 10.0.0.1 port 36572 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:56:57.998463 sshd[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:58.003967 systemd-logind[1515]: New session 13 of user core. Jan 30 12:56:58.010350 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 12:56:58.211720 sshd[5304]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:58.214828 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:36572.service: Deactivated successfully. Jan 30 12:56:58.217401 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. Jan 30 12:56:58.217567 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 12:56:58.218528 systemd-logind[1515]: Removed session 13. 
Jan 30 12:56:58.430894 kubelet[2707]: I0130 12:56:58.430790 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:56:58.948266 containerd[1534]: time="2025-01-30T12:56:58.948220128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:58.949132 containerd[1534]: time="2025-01-30T12:56:58.948892134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 12:56:58.949947 containerd[1534]: time="2025-01-30T12:56:58.949909384Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:58.958412 containerd[1534]: time="2025-01-30T12:56:58.958372267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:58.959007 containerd[1534]: time="2025-01-30T12:56:58.958966233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.152635921s" Jan 30 12:56:58.959007 containerd[1534]: time="2025-01-30T12:56:58.959004033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 12:56:58.961368 containerd[1534]: time="2025-01-30T12:56:58.961338016Z" level=info msg="CreateContainer within sandbox \"e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 12:56:58.973769 containerd[1534]: time="2025-01-30T12:56:58.973522055Z" level=info msg="CreateContainer within sandbox \"e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5bcd37fe387fa814ce74883e477676ab4e7be9fee3152cb16bb4fff815d09018\"" Jan 30 12:56:58.974441 containerd[1534]: time="2025-01-30T12:56:58.974411063Z" level=info msg="StartContainer for \"5bcd37fe387fa814ce74883e477676ab4e7be9fee3152cb16bb4fff815d09018\"" Jan 30 12:56:59.074655 containerd[1534]: time="2025-01-30T12:56:59.074604068Z" level=info msg="StartContainer for \"5bcd37fe387fa814ce74883e477676ab4e7be9fee3152cb16bb4fff815d09018\" returns successfully" Jan 30 12:56:59.303205 kubelet[2707]: I0130 12:56:59.303091 2707 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 12:56:59.304918 kubelet[2707]: I0130 12:56:59.304886 2707 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 12:56:59.459942 kubelet[2707]: I0130 12:56:59.459299 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xwjj9" 
podStartSLOduration=24.149757433 podStartE2EDuration="27.459282472s" podCreationTimestamp="2025-01-30 12:56:32 +0000 UTC" firstStartedPulling="2025-01-30 12:56:55.650298402 +0000 UTC m=+46.564208563" lastFinishedPulling="2025-01-30 12:56:58.959823441 +0000 UTC m=+49.873733602" observedRunningTime="2025-01-30 12:56:59.458645066 +0000 UTC m=+50.372555227" watchObservedRunningTime="2025-01-30 12:56:59.459282472 +0000 UTC m=+50.373192633" Jan 30 12:57:01.098617 kubelet[2707]: I0130 12:57:01.098509 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:57:01.122643 systemd[1]: run-containerd-runc-k8s.io-922cbe4a6b0be70637d164e20546f7597d45d58e5ee1224f93b53d1fc9932af0-runc.nTYC2A.mount: Deactivated successfully. Jan 30 12:57:03.230383 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:35550.service - OpenSSH per-connection server daemon (10.0.0.1:35550). Jan 30 12:57:03.295627 sshd[5409]: Accepted publickey for core from 10.0.0.1 port 35550 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:03.298106 sshd[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:03.305521 systemd-logind[1515]: New session 14 of user core. Jan 30 12:57:03.317053 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 12:57:03.514111 sshd[5409]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:03.517709 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:35550.service: Deactivated successfully. Jan 30 12:57:03.520319 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. Jan 30 12:57:03.520634 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 12:57:03.522125 systemd-logind[1515]: Removed session 14. Jan 30 12:57:08.526351 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:35554.service - OpenSSH per-connection server daemon (10.0.0.1:35554). Jan 30 12:57:08.567108 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 35554 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:57:08.570056 sshd[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:08.583066 systemd-logind[1515]: New session 15 of user core. Jan 30 12:57:08.596451 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 12:57:08.746315 sshd[5447]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:08.751621 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:35554.service: Deactivated successfully. Jan 30 12:57:08.756795 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 12:57:08.758183 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. Jan 30 12:57:08.759454 systemd-logind[1515]: Removed session 15. Jan 30 12:57:09.179580 containerd[1534]: time="2025-01-30T12:57:09.179432853Z" level=info msg="StopPodSandbox for \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\"" Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.229 [WARNING][5477] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0", GenerateName:"calico-apiserver-5b884f9b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c56171da-3422-46f6-bd03-661a74a240ac", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b884f9b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb", Pod:"calico-apiserver-5b884f9b9b-jnhh7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali857f509b6e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.229 [INFO][5477] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.229 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" iface="eth0" netns="" Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.229 [INFO][5477] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.229 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.255 [INFO][5487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" HandleID="k8s-pod-network.abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.255 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.255 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.264 [WARNING][5487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" HandleID="k8s-pod-network.abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.264 [INFO][5487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" HandleID="k8s-pod-network.abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.266 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:57:09.271062 containerd[1534]: 2025-01-30 12:57:09.268 [INFO][5477] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:57:09.271526 containerd[1534]: time="2025-01-30T12:57:09.271110255Z" level=info msg="TearDown network for sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\" successfully" Jan 30 12:57:09.271526 containerd[1534]: time="2025-01-30T12:57:09.271138095Z" level=info msg="StopPodSandbox for \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\" returns successfully" Jan 30 12:57:09.271875 containerd[1534]: time="2025-01-30T12:57:09.271846621Z" level=info msg="RemovePodSandbox for \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\"" Jan 30 12:57:09.284797 containerd[1534]: time="2025-01-30T12:57:09.283678519Z" level=info msg="Forcibly stopping sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\"" Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.327 [WARNING][5509] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0", GenerateName:"calico-apiserver-5b884f9b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c56171da-3422-46f6-bd03-661a74a240ac", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b884f9b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08a2d2fff27600531adfd222a0a94f9ba6f92133302cb207fb6236fd0bd6c2cb", Pod:"calico-apiserver-5b884f9b9b-jnhh7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali857f509b6e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.328 [INFO][5509] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.328 [INFO][5509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" iface="eth0" netns="" Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.328 [INFO][5509] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.328 [INFO][5509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.361 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" HandleID="k8s-pod-network.abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.361 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.361 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.372 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" HandleID="k8s-pod-network.abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.372 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" HandleID="k8s-pod-network.abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--jnhh7-eth0" Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.376 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:57:09.381871 containerd[1534]: 2025-01-30 12:57:09.378 [INFO][5509] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4" Jan 30 12:57:09.382412 containerd[1534]: time="2025-01-30T12:57:09.381908015Z" level=info msg="TearDown network for sandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\" successfully" Jan 30 12:57:09.391620 containerd[1534]: time="2025-01-30T12:57:09.391523695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 12:57:09.391754 containerd[1534]: time="2025-01-30T12:57:09.391637696Z" level=info msg="RemovePodSandbox \"abc2f3444d541322ca0f94bffa2412b97694adf8b9e3c8c6d756e3b1c4a397f4\" returns successfully" Jan 30 12:57:09.392692 containerd[1534]: time="2025-01-30T12:57:09.392659225Z" level=info msg="StopPodSandbox for \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\"" Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.443 [WARNING][5539] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0", GenerateName:"calico-apiserver-5b884f9b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8a23043-6bab-4618-a927-3b2c52ff66a4", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b884f9b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe", Pod:"calico-apiserver-5b884f9b9b-2wgjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali891f8a57a80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.443 [INFO][5539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.443 [INFO][5539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" iface="eth0" netns="" Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.443 [INFO][5539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.443 [INFO][5539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.472 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" HandleID="k8s-pod-network.3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.472 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.473 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.482 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" HandleID="k8s-pod-network.3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.482 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" HandleID="k8s-pod-network.3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.488 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:57:09.491515 containerd[1534]: 2025-01-30 12:57:09.489 [INFO][5539] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:57:09.492632 containerd[1534]: time="2025-01-30T12:57:09.491560967Z" level=info msg="TearDown network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\" successfully" Jan 30 12:57:09.492632 containerd[1534]: time="2025-01-30T12:57:09.491587047Z" level=info msg="StopPodSandbox for \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\" returns successfully" Jan 30 12:57:09.492632 containerd[1534]: time="2025-01-30T12:57:09.492161012Z" level=info msg="RemovePodSandbox for \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\"" Jan 30 12:57:09.492632 containerd[1534]: time="2025-01-30T12:57:09.492195772Z" level=info msg="Forcibly stopping sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\"" Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.544 [WARNING][5569] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0", GenerateName:"calico-apiserver-5b884f9b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8a23043-6bab-4618-a927-3b2c52ff66a4", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b884f9b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38ffb1ce01aa3ef1210fc8d1cac9838781f8f7eb7142a4b56932d232ba2232fe", Pod:"calico-apiserver-5b884f9b9b-2wgjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali891f8a57a80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.545 [INFO][5569] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.545 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" iface="eth0" netns="" Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.545 [INFO][5569] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.545 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.566 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" HandleID="k8s-pod-network.3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.566 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.566 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.575 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" HandleID="k8s-pod-network.3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.575 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" HandleID="k8s-pod-network.3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Workload="localhost-k8s-calico--apiserver--5b884f9b9b--2wgjx-eth0" Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.577 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:57:09.580785 containerd[1534]: 2025-01-30 12:57:09.579 [INFO][5569] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552" Jan 30 12:57:09.581201 containerd[1534]: time="2025-01-30T12:57:09.580838669Z" level=info msg="TearDown network for sandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\" successfully" Jan 30 12:57:09.583898 containerd[1534]: time="2025-01-30T12:57:09.583848054Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 12:57:09.583954 containerd[1534]: time="2025-01-30T12:57:09.583937975Z" level=info msg="RemovePodSandbox \"3c7ff0b718729ba3bfc4d7c060a6f013c7d5dca66c58984d45f5f81e2eef6552\" returns successfully" Jan 30 12:57:09.584435 containerd[1534]: time="2025-01-30T12:57:09.584388658Z" level=info msg="StopPodSandbox for \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\"" Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.621 [WARNING][5600] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"215e9ede-a56b-419e-b3ac-485389adba02", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61", Pod:"coredns-7db6d8ff4d-mlbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e89bb5c62d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.621 [INFO][5600] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.622 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" iface="eth0" netns="" Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.622 [INFO][5600] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.622 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.645 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" HandleID="k8s-pod-network.ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.645 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.646 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.654 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" HandleID="k8s-pod-network.ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.654 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" HandleID="k8s-pod-network.ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.656 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:57:09.659749 containerd[1534]: 2025-01-30 12:57:09.658 [INFO][5600] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:57:09.660607 containerd[1534]: time="2025-01-30T12:57:09.659775445Z" level=info msg="TearDown network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\" successfully" Jan 30 12:57:09.660607 containerd[1534]: time="2025-01-30T12:57:09.659807645Z" level=info msg="StopPodSandbox for \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\" returns successfully" Jan 30 12:57:09.660607 containerd[1534]: time="2025-01-30T12:57:09.660559891Z" level=info msg="RemovePodSandbox for \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\"" Jan 30 12:57:09.660607 containerd[1534]: time="2025-01-30T12:57:09.660592692Z" level=info msg="Forcibly stopping sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\"" Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.696 [WARNING][5630] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"215e9ede-a56b-419e-b3ac-485389adba02", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"184430abf7521ac22e30b3b66e0c7997cd8f28cf26d48cfec14441727977ef61", Pod:"coredns-7db6d8ff4d-mlbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e89bb5c62d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.696 [INFO][5630] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.696 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" iface="eth0" netns="" Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.696 [INFO][5630] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.696 [INFO][5630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.716 [INFO][5638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" HandleID="k8s-pod-network.ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0" Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.716 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.716 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.724 [WARNING][5638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" HandleID="k8s-pod-network.ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0"
Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.724 [INFO][5638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" HandleID="k8s-pod-network.ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3" Workload="localhost-k8s-coredns--7db6d8ff4d--mlbrv-eth0"
Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.726 [INFO][5638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 12:57:09.729274 containerd[1534]: 2025-01-30 12:57:09.727 [INFO][5630] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3"
Jan 30 12:57:09.729689 containerd[1534]: time="2025-01-30T12:57:09.729314103Z" level=info msg="TearDown network for sandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\" successfully"
Jan 30 12:57:09.732333 containerd[1534]: time="2025-01-30T12:57:09.732286287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:57:09.732419 containerd[1534]: time="2025-01-30T12:57:09.732369208Z" level=info msg="RemovePodSandbox \"ddcaf966e751097d6d6b82ed4f5a1c2cc8ef023eccb4a70b945d138f216ca9d3\" returns successfully"
Jan 30 12:57:09.732982 containerd[1534]: time="2025-01-30T12:57:09.732935533Z" level=info msg="StopPodSandbox for \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\""
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.769 [WARNING][5660] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xwjj9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41953e49-b598-4079-bbdf-ef9c599ebe81", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818", Pod:"csi-node-driver-xwjj9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a66901c117", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.769 [INFO][5660] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b"
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.769 [INFO][5660] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" iface="eth0" netns=""
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.769 [INFO][5660] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b"
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.769 [INFO][5660] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b"
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.788 [INFO][5668] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" HandleID="k8s-pod-network.4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0"
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.788 [INFO][5668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.788 [INFO][5668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.796 [WARNING][5668] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" HandleID="k8s-pod-network.4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0"
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.797 [INFO][5668] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" HandleID="k8s-pod-network.4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0"
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.798 [INFO][5668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 12:57:09.801880 containerd[1534]: 2025-01-30 12:57:09.800 [INFO][5660] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b"
Jan 30 12:57:09.801880 containerd[1534]: time="2025-01-30T12:57:09.801800585Z" level=info msg="TearDown network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\" successfully"
Jan 30 12:57:09.801880 containerd[1534]: time="2025-01-30T12:57:09.801840346Z" level=info msg="StopPodSandbox for \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\" returns successfully"
Jan 30 12:57:09.804516 containerd[1534]: time="2025-01-30T12:57:09.804475847Z" level=info msg="RemovePodSandbox for \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\""
Jan 30 12:57:09.804516 containerd[1534]: time="2025-01-30T12:57:09.804518688Z" level=info msg="Forcibly stopping sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\""
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.840 [WARNING][5690] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xwjj9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41953e49-b598-4079-bbdf-ef9c599ebe81", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0657c104475a469d810b21945b49e3d494467b53e8f435a6003bbe951bf1818", Pod:"csi-node-driver-xwjj9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a66901c117", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.841 [INFO][5690] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b"
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.841 [INFO][5690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" iface="eth0" netns=""
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.841 [INFO][5690] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b"
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.841 [INFO][5690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b"
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.863 [INFO][5698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" HandleID="k8s-pod-network.4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0"
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.863 [INFO][5698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.863 [INFO][5698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.871 [WARNING][5698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" HandleID="k8s-pod-network.4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0"
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.871 [INFO][5698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" HandleID="k8s-pod-network.4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b" Workload="localhost-k8s-csi--node--driver--xwjj9-eth0"
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.873 [INFO][5698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 12:57:09.878065 containerd[1534]: 2025-01-30 12:57:09.875 [INFO][5690] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b"
Jan 30 12:57:09.878065 containerd[1534]: time="2025-01-30T12:57:09.877162732Z" level=info msg="TearDown network for sandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\" successfully"
Jan 30 12:57:09.880614 containerd[1534]: time="2025-01-30T12:57:09.880389798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:57:09.880614 containerd[1534]: time="2025-01-30T12:57:09.880476839Z" level=info msg="RemovePodSandbox \"4212960e0daa8a5549e70809266429b4de2e9b31629733c7b47fae946c92c51b\" returns successfully"
Jan 30 12:57:09.881008 containerd[1534]: time="2025-01-30T12:57:09.880951923Z" level=info msg="StopPodSandbox for \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\""
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.920 [WARNING][5721] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf7d4427-4304-4d39-a328-49dbc3c64e9c", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249", Pod:"coredns-7db6d8ff4d-jqtfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00547228d38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.920 [INFO][5721] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d"
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.920 [INFO][5721] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" iface="eth0" netns=""
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.920 [INFO][5721] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d"
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.920 [INFO][5721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d"
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.945 [INFO][5729] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" HandleID="k8s-pod-network.1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0"
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.945 [INFO][5729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.945 [INFO][5729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.955 [WARNING][5729] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" HandleID="k8s-pod-network.1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0"
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.955 [INFO][5729] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" HandleID="k8s-pod-network.1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0"
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.956 [INFO][5729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 12:57:09.959726 containerd[1534]: 2025-01-30 12:57:09.958 [INFO][5721] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d"
Jan 30 12:57:09.960927 containerd[1534]: time="2025-01-30T12:57:09.959782178Z" level=info msg="TearDown network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\" successfully"
Jan 30 12:57:09.960927 containerd[1534]: time="2025-01-30T12:57:09.959807898Z" level=info msg="StopPodSandbox for \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\" returns successfully"
Jan 30 12:57:09.960927 containerd[1534]: time="2025-01-30T12:57:09.960333263Z" level=info msg="RemovePodSandbox for \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\""
Jan 30 12:57:09.960927 containerd[1534]: time="2025-01-30T12:57:09.960363463Z" level=info msg="Forcibly stopping sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\""
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:09.997 [WARNING][5751] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf7d4427-4304-4d39-a328-49dbc3c64e9c", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5762b64398c17a8b681da1fc5d9902a3aadbf86672d055d4518946c4c6093249", Pod:"coredns-7db6d8ff4d-jqtfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00547228d38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:09.997 [INFO][5751] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d"
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:09.997 [INFO][5751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" iface="eth0" netns=""
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:09.997 [INFO][5751] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d"
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:09.997 [INFO][5751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d"
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:10.018 [INFO][5758] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" HandleID="k8s-pod-network.1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0"
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:10.019 [INFO][5758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:10.019 [INFO][5758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:10.027 [WARNING][5758] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" HandleID="k8s-pod-network.1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0"
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:10.027 [INFO][5758] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" HandleID="k8s-pod-network.1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d" Workload="localhost-k8s-coredns--7db6d8ff4d--jqtfs-eth0"
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:10.029 [INFO][5758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 12:57:10.032513 containerd[1534]: 2025-01-30 12:57:10.031 [INFO][5751] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d"
Jan 30 12:57:10.032924 containerd[1534]: time="2025-01-30T12:57:10.032640501Z" level=info msg="TearDown network for sandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\" successfully"
Jan 30 12:57:10.049972 containerd[1534]: time="2025-01-30T12:57:10.049783642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:57:10.049972 containerd[1534]: time="2025-01-30T12:57:10.049858283Z" level=info msg="RemovePodSandbox \"1fc9f7fd1e417a7abc74a17ca49f1d08148aca4762789dc57fb9acec0df89d5d\" returns successfully"
Jan 30 12:57:10.050544 containerd[1534]: time="2025-01-30T12:57:10.050516248Z" level=info msg="StopPodSandbox for \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\""
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.090 [WARNING][5781] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0", GenerateName:"calico-kube-controllers-69d5ff6878-", Namespace:"calico-system", SelfLink:"", UID:"f5238e11-c884-4750-8685-6bc2db2bcd69", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69d5ff6878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92", Pod:"calico-kube-controllers-69d5ff6878-vn947", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69ddb6244a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.091 [INFO][5781] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35"
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.091 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" iface="eth0" netns=""
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.091 [INFO][5781] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35"
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.091 [INFO][5781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35"
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.112 [INFO][5788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" HandleID="k8s-pod-network.02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0"
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.112 [INFO][5788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.112 [INFO][5788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.121 [WARNING][5788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" HandleID="k8s-pod-network.02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0"
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.121 [INFO][5788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" HandleID="k8s-pod-network.02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0"
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.123 [INFO][5788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 12:57:10.127994 containerd[1534]: 2025-01-30 12:57:10.125 [INFO][5781] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35"
Jan 30 12:57:10.127994 containerd[1534]: time="2025-01-30T12:57:10.127791083Z" level=info msg="TearDown network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\" successfully"
Jan 30 12:57:10.127994 containerd[1534]: time="2025-01-30T12:57:10.127817404Z" level=info msg="StopPodSandbox for \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\" returns successfully"
Jan 30 12:57:10.129288 containerd[1534]: time="2025-01-30T12:57:10.128920053Z" level=info msg="RemovePodSandbox for \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\""
Jan 30 12:57:10.129288 containerd[1534]: time="2025-01-30T12:57:10.128959173Z" level=info msg="Forcibly stopping sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\""
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.168 [WARNING][5811] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0", GenerateName:"calico-kube-controllers-69d5ff6878-", Namespace:"calico-system", SelfLink:"", UID:"f5238e11-c884-4750-8685-6bc2db2bcd69", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 56, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69d5ff6878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69ff388ded68ae689581c828f9abff91440cca80b8847f3bdce2ac716878bc92", Pod:"calico-kube-controllers-69d5ff6878-vn947", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69ddb6244a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.169 [INFO][5811] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35"
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.169 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" iface="eth0" netns=""
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.169 [INFO][5811] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35"
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.169 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35"
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.191 [INFO][5818] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" HandleID="k8s-pod-network.02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0"
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.191 [INFO][5818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.191 [INFO][5818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.200 [WARNING][5818] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" HandleID="k8s-pod-network.02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0"
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.200 [INFO][5818] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" HandleID="k8s-pod-network.02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35" Workload="localhost-k8s-calico--kube--controllers--69d5ff6878--vn947-eth0"
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.202 [INFO][5818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 12:57:10.206817 containerd[1534]: 2025-01-30 12:57:10.204 [INFO][5811] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35"
Jan 30 12:57:10.207540 containerd[1534]: time="2025-01-30T12:57:10.206884614Z" level=info msg="TearDown network for sandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\" successfully"
Jan 30 12:57:10.211207 containerd[1534]: time="2025-01-30T12:57:10.211144649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:57:10.211350 containerd[1534]: time="2025-01-30T12:57:10.211256970Z" level=info msg="RemovePodSandbox \"02f09ccbfe7e833a0b28e71e9fb6d9d51e23b2fd026ca8188466074d63803d35\" returns successfully"
Jan 30 12:57:10.620344 kubelet[2707]: E0130 12:57:10.620234 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:11.923943 kubelet[2707]: I0130 12:57:11.923340 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 12:57:13.757566 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:58542.service - OpenSSH per-connection server daemon (10.0.0.1:58542).
Jan 30 12:57:13.801941 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 58542 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:13.802395 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:13.807984 systemd-logind[1515]: New session 16 of user core.
Jan 30 12:57:13.814417 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 12:57:14.004091 sshd[5857]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:14.014376 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:58556.service - OpenSSH per-connection server daemon (10.0.0.1:58556).
Jan 30 12:57:14.014840 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:58542.service: Deactivated successfully.
Jan 30 12:57:14.019000 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit.
Jan 30 12:57:14.020159 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 12:57:14.023402 systemd-logind[1515]: Removed session 16.
Jan 30 12:57:14.056982 sshd[5869]: Accepted publickey for core from 10.0.0.1 port 58556 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:14.059484 sshd[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:14.065578 systemd-logind[1515]: New session 17 of user core.
Jan 30 12:57:14.078389 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 12:57:14.320778 sshd[5869]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:14.328616 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:58564.service - OpenSSH per-connection server daemon (10.0.0.1:58564).
Jan 30 12:57:14.334511 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:58556.service: Deactivated successfully.
Jan 30 12:57:14.334575 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit.
Jan 30 12:57:14.337007 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 12:57:14.339005 systemd-logind[1515]: Removed session 17.
Jan 30 12:57:14.369585 sshd[5882]: Accepted publickey for core from 10.0.0.1 port 58564 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:14.371131 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:14.375821 systemd-logind[1515]: New session 18 of user core.
Jan 30 12:57:14.384381 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 12:57:15.923670 sshd[5882]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:15.929359 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:58576.service - OpenSSH per-connection server daemon (10.0.0.1:58576).
Jan 30 12:57:15.941645 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:58564.service: Deactivated successfully.
Jan 30 12:57:15.952009 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 12:57:15.957139 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit.
Jan 30 12:57:15.962261 systemd-logind[1515]: Removed session 18.
Jan 30 12:57:15.975264 sshd[5907]: Accepted publickey for core from 10.0.0.1 port 58576 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:15.976952 sshd[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:15.982127 systemd-logind[1515]: New session 19 of user core.
Jan 30 12:57:15.989390 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 12:57:16.355988 sshd[5907]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:16.371382 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:58586.service - OpenSSH per-connection server daemon (10.0.0.1:58586).
Jan 30 12:57:16.373104 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:58576.service: Deactivated successfully.
Jan 30 12:57:16.378669 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit.
Jan 30 12:57:16.380715 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 12:57:16.382624 systemd-logind[1515]: Removed session 19.
Jan 30 12:57:16.429065 sshd[5922]: Accepted publickey for core from 10.0.0.1 port 58586 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:16.430408 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:16.447131 systemd-logind[1515]: New session 20 of user core.
Jan 30 12:57:16.456448 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 12:57:16.597768 sshd[5922]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:16.601513 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:58586.service: Deactivated successfully.
Jan 30 12:57:16.604197 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit.
Jan 30 12:57:16.604275 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 12:57:16.605665 systemd-logind[1515]: Removed session 20.
Jan 30 12:57:20.805211 kubelet[2707]: I0130 12:57:20.805084 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 12:57:21.608354 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:58588.service - OpenSSH per-connection server daemon (10.0.0.1:58588).
Jan 30 12:57:21.645215 sshd[5946]: Accepted publickey for core from 10.0.0.1 port 58588 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:21.646514 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:21.650726 systemd-logind[1515]: New session 21 of user core.
Jan 30 12:57:21.657354 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 12:57:21.780956 sshd[5946]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:21.787890 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:58588.service: Deactivated successfully.
Jan 30 12:57:21.792626 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 12:57:21.793828 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit.
Jan 30 12:57:21.795018 systemd-logind[1515]: Removed session 21.
Jan 30 12:57:24.192828 kubelet[2707]: E0130 12:57:24.192645 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:57:26.793956 systemd[1]: Started sshd@21-10.0.0.65:22-10.0.0.1:48186.service - OpenSSH per-connection server daemon (10.0.0.1:48186).
Jan 30 12:57:26.833812 sshd[5963]: Accepted publickey for core from 10.0.0.1 port 48186 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:26.835409 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:26.840175 systemd-logind[1515]: New session 22 of user core.
Jan 30 12:57:26.851352 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 12:57:27.012364 sshd[5963]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:27.015875 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit.
Jan 30 12:57:27.016736 systemd[1]: sshd@21-10.0.0.65:22-10.0.0.1:48186.service: Deactivated successfully.
Jan 30 12:57:27.018858 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 12:57:27.019906 systemd-logind[1515]: Removed session 22.
Jan 30 12:57:32.027666 systemd[1]: Started sshd@22-10.0.0.65:22-10.0.0.1:48194.service - OpenSSH per-connection server daemon (10.0.0.1:48194).
Jan 30 12:57:32.064450 sshd[6004]: Accepted publickey for core from 10.0.0.1 port 48194 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:57:32.066047 sshd[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:32.073960 systemd-logind[1515]: New session 23 of user core.
Jan 30 12:57:32.080513 systemd[1]: Started session-23.scope - Session 23 of User core.
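The kubelet dns.go:153 "Nameserver limits exceeded" events above fire because the classic libc resolver honors at most three nameserver entries in resolv.conf, so the kubelet trims the pod's list and reports the three survivors (1.1.1.1 1.0.0.1 8.8.8.8). A standalone check for the same condition on a node, as a small sketch:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the resolver limit the kubelet warning refers to:
// glibc only uses the first three nameserver lines.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d listed, only the first %d apply: %s\n",
			len(servers), maxNameservers, strings.Join(servers[:maxNameservers], " "))
	}
}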
Jan 30 12:57:32.243279 sshd[6004]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:32.246657 systemd[1]: sshd@22-10.0.0.65:22-10.0.0.1:48194.service: Deactivated successfully.
Jan 30 12:57:32.250874 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 12:57:32.251558 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit.
Jan 30 12:57:32.252856 systemd-logind[1515]: Removed session 23.
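The tail of the log is eight short SSH sessions (16 through 23) opened and closed in quick succession, each getting its own per-connection sshd@... unit plus a session-N.scope. Pairing systemd-logind's "New session" and "Removed session" lines is a quick way to confirm none of them leaked; a small parser targeting exactly this phrasing (it reads a journal dump on stdin):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// These regexes match the exact systemd-logind wording in the log above.
var (
	newSession     = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	removedSession = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	open := map[string]string{} // session ID -> user
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := newSession.FindStringSubmatch(line); m != nil {
			open[m[1]] = m[2]
		} else if m := removedSession.FindStringSubmatch(line); m != nil {
			delete(open, m[1])
		}
	}
	fmt.Printf("sessions still open: %v\n", open) // empty for this log: all eight closed
}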