Jan 17 12:19:13.927097 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 17 12:19:13.927118 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025 Jan 17 12:19:13.927128 kernel: KASLR enabled Jan 17 12:19:13.927141 kernel: efi: EFI v2.7 by EDK II Jan 17 12:19:13.927155 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 17 12:19:13.927160 kernel: random: crng init done Jan 17 12:19:13.927168 kernel: ACPI: Early table checksum verification disabled Jan 17 12:19:13.927174 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 17 12:19:13.927180 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 17 12:19:13.927188 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:19:13.927194 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:19:13.927201 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:19:13.927207 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:19:13.927213 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:19:13.927221 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:19:13.927229 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:19:13.927236 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:19:13.927243 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:19:13.927249 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 17 12:19:13.927256 kernel: NUMA: Failed to initialise from firmware Jan 17 12:19:13.927263 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 17 12:19:13.927269 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jan 17 12:19:13.927284 kernel: Zone ranges: Jan 17 12:19:13.927291 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 17 12:19:13.927297 kernel: DMA32 empty Jan 17 12:19:13.927305 kernel: Normal empty Jan 17 12:19:13.927312 kernel: Movable zone start for each node Jan 17 12:19:13.927319 kernel: Early memory node ranges Jan 17 12:19:13.927325 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 17 12:19:13.927332 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 17 12:19:13.927338 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 17 12:19:13.927345 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 17 12:19:13.927352 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 17 12:19:13.927358 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 17 12:19:13.927365 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 17 12:19:13.927371 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 17 12:19:13.927378 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 17 12:19:13.927386 kernel: psci: probing for conduit method from ACPI. Jan 17 12:19:13.927392 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 17 12:19:13.927399 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 12:19:13.927409 kernel: psci: Trusted OS migration not required Jan 17 12:19:13.927416 kernel: psci: SMC Calling Convention v1.1 Jan 17 12:19:13.927423 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 17 12:19:13.927432 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 17 12:19:13.927439 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 17 12:19:13.927446 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 17 12:19:13.927453 kernel: Detected PIPT I-cache on CPU0 Jan 17 12:19:13.927461 kernel: CPU features: detected: GIC system register CPU interface Jan 17 12:19:13.927468 kernel: CPU features: detected: Hardware dirty bit management Jan 17 12:19:13.927475 kernel: CPU features: detected: Spectre-v4 Jan 17 12:19:13.927482 kernel: CPU features: detected: Spectre-BHB Jan 17 12:19:13.927489 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 12:19:13.927496 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 12:19:13.927504 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 12:19:13.927511 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 12:19:13.927518 kernel: alternatives: applying boot alternatives Jan 17 12:19:13.927526 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:19:13.927534 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:19:13.927541 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:19:13.927548 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:19:13.927555 kernel: Fallback order for Node 0: 0 Jan 17 12:19:13.927562 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 17 12:19:13.927569 kernel: Policy zone: DMA Jan 17 12:19:13.927576 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:19:13.927584 kernel: software IO TLB: area num 4. Jan 17 12:19:13.927592 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 17 12:19:13.927599 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Jan 17 12:19:13.927606 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 12:19:13.927628 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:19:13.927636 kernel: rcu: RCU event tracing is enabled. Jan 17 12:19:13.927643 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 12:19:13.927650 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:19:13.927658 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:19:13.927665 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 17 12:19:13.927672 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 12:19:13.927679 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 12:19:13.927691 kernel: GICv3: 256 SPIs implemented Jan 17 12:19:13.927698 kernel: GICv3: 0 Extended SPIs implemented Jan 17 12:19:13.927705 kernel: Root IRQ handler: gic_handle_irq Jan 17 12:19:13.927712 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 17 12:19:13.927720 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 17 12:19:13.927727 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 17 12:19:13.927734 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 17 12:19:13.927741 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 17 12:19:13.927748 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 17 12:19:13.927755 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 17 12:19:13.927762 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:19:13.927771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:19:13.927778 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 12:19:13.927786 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 12:19:13.927793 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 12:19:13.927800 kernel: arm-pv: using stolen time PV Jan 17 12:19:13.927807 kernel: Console: colour dummy device 80x25 Jan 17 12:19:13.927815 kernel: ACPI: Core revision 20230628 Jan 17 12:19:13.927822 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 12:19:13.927829 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:19:13.927837 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:19:13.927845 kernel: landlock: Up and running. Jan 17 12:19:13.927852 kernel: SELinux: Initializing. Jan 17 12:19:13.927860 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:19:13.927867 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:19:13.927875 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:19:13.927882 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:19:13.927889 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:19:13.927897 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:19:13.927904 kernel: Platform MSI: ITS@0x8080000 domain created Jan 17 12:19:13.927913 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 17 12:19:13.927920 kernel: Remapping and enabling EFI services. Jan 17 12:19:13.927927 kernel: smp: Bringing up secondary CPUs ... 
Jan 17 12:19:13.927935 kernel: Detected PIPT I-cache on CPU1 Jan 17 12:19:13.927942 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 17 12:19:13.927950 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 17 12:19:13.927957 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:19:13.927964 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 12:19:13.927971 kernel: Detected PIPT I-cache on CPU2 Jan 17 12:19:13.927980 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 17 12:19:13.927987 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 17 12:19:13.927995 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:19:13.928006 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 17 12:19:13.928015 kernel: Detected PIPT I-cache on CPU3 Jan 17 12:19:13.928023 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 17 12:19:13.928031 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 17 12:19:13.928038 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:19:13.928046 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 17 12:19:13.928053 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 12:19:13.928062 kernel: SMP: Total of 4 processors activated. Jan 17 12:19:13.928070 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 12:19:13.928078 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 12:19:13.928085 kernel: CPU features: detected: Common not Private translations Jan 17 12:19:13.928093 kernel: CPU features: detected: CRC32 instructions Jan 17 12:19:13.928101 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 17 12:19:13.928108 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 12:19:13.928117 kernel: CPU features: detected: LSE atomic instructions Jan 17 12:19:13.928125 kernel: CPU features: detected: Privileged Access Never Jan 17 12:19:13.928133 kernel: CPU features: detected: RAS Extension Support Jan 17 12:19:13.928140 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 17 12:19:13.928148 kernel: CPU: All CPU(s) started at EL1 Jan 17 12:19:13.928156 kernel: alternatives: applying system-wide alternatives Jan 17 12:19:13.928163 kernel: devtmpfs: initialized Jan 17 12:19:13.928171 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:19:13.928179 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 12:19:13.928186 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:19:13.928195 kernel: SMBIOS 3.0.0 present. 
Jan 17 12:19:13.928203 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 17 12:19:13.928211 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:19:13.928218 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 12:19:13.928226 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 12:19:13.928234 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 12:19:13.928241 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:19:13.928249 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1 Jan 17 12:19:13.928257 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:19:13.928266 kernel: cpuidle: using governor menu Jan 17 12:19:13.928278 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 17 12:19:13.928286 kernel: ASID allocator initialised with 32768 entries Jan 17 12:19:13.928294 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:19:13.928302 kernel: Serial: AMBA PL011 UART driver Jan 17 12:19:13.928310 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 12:19:13.928317 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 12:19:13.928325 kernel: Modules: 509040 pages in range for PLT usage Jan 17 12:19:13.928333 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:19:13.928342 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:19:13.928349 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 12:19:13.928357 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 12:19:13.928365 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:19:13.928373 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:19:13.928380 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 12:19:13.928388 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 12:19:13.928395 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:19:13.928403 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:19:13.928412 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:19:13.928420 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:19:13.928427 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:19:13.928435 kernel: ACPI: Interpreter enabled Jan 17 12:19:13.928443 kernel: ACPI: Using GIC for interrupt routing Jan 17 12:19:13.928450 kernel: ACPI: MCFG table detected, 1 entries Jan 17 12:19:13.928458 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 17 12:19:13.928466 kernel: printk: console [ttyAMA0] enabled Jan 17 12:19:13.928474 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:19:13.928601 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:19:13.928733 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 17 12:19:13.928803 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 17 12:19:13.928870 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 17 12:19:13.928936 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 17 12:19:13.928946 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 17 
12:19:13.928954 kernel: PCI host bridge to bus 0000:00 Jan 17 12:19:13.929028 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 17 12:19:13.929091 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 17 12:19:13.929153 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 17 12:19:13.929213 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:19:13.929302 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 17 12:19:13.929385 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:19:13.929459 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 17 12:19:13.929529 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 17 12:19:13.929598 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 17 12:19:13.929678 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 17 12:19:13.929748 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 17 12:19:13.929820 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 17 12:19:13.929882 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 17 12:19:13.929946 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 17 12:19:13.930008 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 17 12:19:13.930018 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 17 12:19:13.930026 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 17 12:19:13.930034 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 17 12:19:13.930042 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 17 12:19:13.930049 kernel: iommu: Default domain type: Translated Jan 17 12:19:13.930057 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 12:19:13.930066 kernel: efivars: Registered efivars operations Jan 17 12:19:13.930074 kernel: vgaarb: loaded Jan 17 12:19:13.930082 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 12:19:13.930090 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:19:13.930097 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:19:13.930105 kernel: pnp: PnP ACPI init Jan 17 12:19:13.930182 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 17 12:19:13.930193 kernel: pnp: PnP ACPI: found 1 devices Jan 17 12:19:13.930201 kernel: NET: Registered PF_INET protocol family Jan 17 12:19:13.930210 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:19:13.930219 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:19:13.930226 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:19:13.930234 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:19:13.930242 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:19:13.930250 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:19:13.930258 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:19:13.930265 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:19:13.930281 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:19:13.930289 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:19:13.930297 kernel: kvm [1]: HYP mode 
not available Jan 17 12:19:13.930305 kernel: Initialise system trusted keyrings Jan 17 12:19:13.930312 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:19:13.930320 kernel: Key type asymmetric registered Jan 17 12:19:13.930331 kernel: Asymmetric key parser 'x509' registered Jan 17 12:19:13.930339 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 12:19:13.930347 kernel: io scheduler mq-deadline registered Jan 17 12:19:13.930355 kernel: io scheduler kyber registered Jan 17 12:19:13.930365 kernel: io scheduler bfq registered Jan 17 12:19:13.930372 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 17 12:19:13.930380 kernel: ACPI: button: Power Button [PWRB] Jan 17 12:19:13.930388 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 17 12:19:13.930462 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 17 12:19:13.930472 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:19:13.930480 kernel: thunder_xcv, ver 1.0 Jan 17 12:19:13.930488 kernel: thunder_bgx, ver 1.0 Jan 17 12:19:13.930495 kernel: nicpf, ver 1.0 Jan 17 12:19:13.930505 kernel: nicvf, ver 1.0 Jan 17 12:19:13.930581 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 12:19:13.930699 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:19:13 UTC (1737116353) Jan 17 12:19:13.930711 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:19:13.930719 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 17 12:19:13.930727 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 12:19:13.930735 kernel: watchdog: Hard watchdog permanently disabled Jan 17 12:19:13.930742 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:19:13.930753 kernel: Segment Routing with IPv6 Jan 17 12:19:13.930760 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:19:13.930768 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:19:13.930776 kernel: Key type dns_resolver registered Jan 17 12:19:13.930784 kernel: registered taskstats version 1 Jan 17 12:19:13.930791 kernel: Loading compiled-in X.509 certificates Jan 17 12:19:13.930799 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7' Jan 17 12:19:13.930807 kernel: Key type .fscrypt registered Jan 17 12:19:13.930814 kernel: Key type fscrypt-provisioning registered Jan 17 12:19:13.930824 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:19:13.930831 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:19:13.930839 kernel: ima: No architecture policies found Jan 17 12:19:13.930847 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 12:19:13.930855 kernel: clk: Disabling unused clocks Jan 17 12:19:13.930862 kernel: Freeing unused kernel memory: 39360K Jan 17 12:19:13.930870 kernel: Run /init as init process Jan 17 12:19:13.930878 kernel: with arguments: Jan 17 12:19:13.930885 kernel: /init Jan 17 12:19:13.930894 kernel: with environment: Jan 17 12:19:13.930901 kernel: HOME=/ Jan 17 12:19:13.930909 kernel: TERM=linux Jan 17 12:19:13.930916 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:19:13.930926 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:19:13.930936 systemd[1]: Detected virtualization kvm. Jan 17 12:19:13.930944 systemd[1]: Detected architecture arm64. Jan 17 12:19:13.930953 systemd[1]: Running in initrd. Jan 17 12:19:13.930961 systemd[1]: No hostname configured, using default hostname. Jan 17 12:19:13.930969 systemd[1]: Hostname set to . Jan 17 12:19:13.930978 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:19:13.930986 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:19:13.930995 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:19:13.931003 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:19:13.931012 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:19:13.931021 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:19:13.931030 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:19:13.931039 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:19:13.931048 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:19:13.931057 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:19:13.931065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:19:13.931073 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:19:13.931083 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:19:13.931092 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:19:13.931100 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:19:13.931108 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:19:13.931116 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:19:13.931124 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:19:13.931133 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:19:13.931141 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:19:13.931149 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 12:19:13.931159 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:19:13.931167 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:19:13.931175 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:19:13.931184 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:19:13.931192 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:19:13.931200 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:19:13.931209 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:19:13.931217 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:19:13.931226 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:19:13.931234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:19:13.931243 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:19:13.931251 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:19:13.931259 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:19:13.931268 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:19:13.931286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:19:13.931295 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:19:13.931321 systemd-journald[238]: Collecting audit messages is disabled. Jan 17 12:19:13.931343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:19:13.931352 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:19:13.931361 systemd-journald[238]: Journal started Jan 17 12:19:13.931380 systemd-journald[238]: Runtime Journal (/run/log/journal/63ec6913e06443fe97dce0c13a6d3cae) is 5.9M, max 47.3M, 41.4M free. Jan 17 12:19:13.924099 systemd-modules-load[239]: Inserted module 'overlay' Jan 17 12:19:13.935224 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:19:13.941624 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:19:13.939242 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:19:13.940470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:19:13.945760 kernel: Bridge firewalling registered Jan 17 12:19:13.942699 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 17 12:19:13.947519 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:19:13.948894 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:19:13.955778 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:19:13.956922 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:19:13.959549 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:19:13.963415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:19:13.965912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 12:19:13.971866 dracut-cmdline[277]: dracut-dracut-053 Jan 17 12:19:13.974147 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:19:13.995775 systemd-resolved[282]: Positive Trust Anchors: Jan 17 12:19:13.995794 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:19:13.995826 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:19:14.000450 systemd-resolved[282]: Defaulting to hostname 'linux'. Jan 17 12:19:14.003991 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:19:14.005111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:19:14.036638 kernel: SCSI subsystem initialized Jan 17 12:19:14.041628 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:19:14.048636 kernel: iscsi: registered transport (tcp) Jan 17 12:19:14.063649 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:19:14.063702 kernel: QLogic iSCSI HBA Driver Jan 17 12:19:14.104796 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:19:14.115734 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:19:14.130871 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:19:14.130908 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:19:14.131942 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:19:14.180648 kernel: raid6: neonx8 gen() 15706 MB/s Jan 17 12:19:14.197626 kernel: raid6: neonx4 gen() 15607 MB/s Jan 17 12:19:14.214634 kernel: raid6: neonx2 gen() 13174 MB/s Jan 17 12:19:14.231636 kernel: raid6: neonx1 gen() 10437 MB/s Jan 17 12:19:14.248638 kernel: raid6: int64x8 gen() 6934 MB/s Jan 17 12:19:14.265636 kernel: raid6: int64x4 gen() 7333 MB/s Jan 17 12:19:14.282634 kernel: raid6: int64x2 gen() 6101 MB/s Jan 17 12:19:14.299707 kernel: raid6: int64x1 gen() 5052 MB/s Jan 17 12:19:14.299728 kernel: raid6: using algorithm neonx8 gen() 15706 MB/s Jan 17 12:19:14.317710 kernel: raid6: .... xor() 11932 MB/s, rmw enabled Jan 17 12:19:14.317748 kernel: raid6: using neon recovery algorithm Jan 17 12:19:14.322969 kernel: xor: measuring software checksum speed Jan 17 12:19:14.322986 kernel: 8regs : 19793 MB/sec Jan 17 12:19:14.323640 kernel: 32regs : 19679 MB/sec Jan 17 12:19:14.324865 kernel: arm64_neon : 26945 MB/sec Jan 17 12:19:14.324877 kernel: xor: using function: arm64_neon (26945 MB/sec) Jan 17 12:19:14.374638 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:19:14.384786 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 17 12:19:14.393808 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:19:14.405996 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jan 17 12:19:14.409054 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:19:14.412192 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:19:14.426019 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Jan 17 12:19:14.451569 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:19:14.459753 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:19:14.497673 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:19:14.507415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:19:14.519093 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:19:14.520668 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:19:14.524867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:19:14.527138 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:19:14.533787 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:19:14.546513 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 17 12:19:14.557937 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 12:19:14.558043 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:19:14.558062 kernel: GPT:9289727 != 19775487 Jan 17 12:19:14.558072 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:19:14.558082 kernel: GPT:9289727 != 19775487 Jan 17 12:19:14.558093 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:19:14.558103 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:19:14.545870 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:19:14.559226 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:19:14.559348 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:19:14.562628 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:19:14.563814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:19:14.563945 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:19:14.566175 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:19:14.579895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:19:14.582987 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (521) Jan 17 12:19:14.585642 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (520) Jan 17 12:19:14.594519 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:19:14.595976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:19:14.604795 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:19:14.609437 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 17 12:19:14.613301 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:19:14.614492 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:19:14.628820 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:19:14.633785 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:19:14.635547 disk-uuid[550]: Primary Header is updated. Jan 17 12:19:14.635547 disk-uuid[550]: Secondary Entries is updated. Jan 17 12:19:14.635547 disk-uuid[550]: Secondary Header is updated. Jan 17 12:19:14.641638 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:19:14.648647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:19:14.659077 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:19:15.653889 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:19:15.653947 disk-uuid[551]: The operation has completed successfully. Jan 17 12:19:15.678898 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:19:15.678992 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:19:15.694745 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:19:15.697529 sh[572]: Success Jan 17 12:19:15.710748 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 12:19:15.745057 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:19:15.746868 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:19:15.748663 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:19:15.757767 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f Jan 17 12:19:15.757805 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:19:15.757816 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:19:15.759647 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:19:15.759663 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:19:15.763589 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:19:15.764928 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:19:15.776769 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:19:15.778457 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:19:15.786102 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:19:15.786142 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:19:15.786154 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:19:15.788656 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:19:15.795826 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:19:15.797642 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:19:15.804353 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 17 12:19:15.814795 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:19:15.877784 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:19:15.897770 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:19:15.917531 systemd-networkd[765]: lo: Link UP Jan 17 12:19:15.917544 systemd-networkd[765]: lo: Gained carrier Jan 17 12:19:15.918500 ignition[667]: Ignition 2.19.0 Jan 17 12:19:15.918232 systemd-networkd[765]: Enumeration completed Jan 17 12:19:15.918506 ignition[667]: Stage: fetch-offline Jan 17 12:19:15.918325 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:19:15.918539 ignition[667]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:19:15.918732 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:19:15.918547 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:19:15.918735 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:19:15.918713 ignition[667]: parsed url from cmdline: "" Jan 17 12:19:15.919505 systemd-networkd[765]: eth0: Link UP Jan 17 12:19:15.918716 ignition[667]: no config URL provided Jan 17 12:19:15.919508 systemd-networkd[765]: eth0: Gained carrier Jan 17 12:19:15.918720 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:19:15.919519 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:19:15.918727 ignition[667]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:19:15.920346 systemd[1]: Reached target network.target - Network. Jan 17 12:19:15.918748 ignition[667]: op(1): [started] loading QEMU firmware config module Jan 17 12:19:15.929651 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:19:15.918753 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 12:19:15.930336 ignition[667]: op(1): [finished] loading QEMU firmware config module Jan 17 12:19:15.975532 ignition[667]: parsing config with SHA512: 8587c99dc87e7ccce0981db41dd2bf07c4ec6b6b217cb60156eb0726a6862ba9935fe87e6172f05d1fb23db99cf6c137b909c97cd96fbb5c1312b16255eb21e4 Jan 17 12:19:15.979491 unknown[667]: fetched base config from "system" Jan 17 12:19:15.979501 unknown[667]: fetched user config from "qemu" Jan 17 12:19:15.979990 ignition[667]: fetch-offline: fetch-offline passed Jan 17 12:19:15.981927 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:19:15.980063 ignition[667]: Ignition finished successfully Jan 17 12:19:15.983253 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:19:15.991748 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:19:16.002147 ignition[772]: Ignition 2.19.0 Jan 17 12:19:16.002156 ignition[772]: Stage: kargs Jan 17 12:19:16.002332 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:19:16.002341 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:19:16.003221 ignition[772]: kargs: kargs passed Jan 17 12:19:16.005579 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 17 12:19:16.003265 ignition[772]: Ignition finished successfully Jan 17 12:19:16.017769 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:19:16.027015 ignition[781]: Ignition 2.19.0 Jan 17 12:19:16.027026 ignition[781]: Stage: disks Jan 17 12:19:16.027186 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:19:16.029893 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:19:16.027195 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:19:16.031190 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:19:16.028070 ignition[781]: disks: disks passed Jan 17 12:19:16.032821 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:19:16.028113 ignition[781]: Ignition finished successfully Jan 17 12:19:16.034884 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:19:16.036690 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:19:16.038189 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:19:16.050755 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:19:16.060618 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:19:16.064572 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:19:16.070732 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:19:16.111632 kernel: EXT4-fs (vda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none. Jan 17 12:19:16.112060 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:19:16.113268 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:19:16.124682 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:19:16.126925 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:19:16.127935 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:19:16.127973 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:19:16.127995 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:19:16.132220 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:19:16.139398 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Jan 17 12:19:16.139422 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:19:16.139433 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:19:16.135128 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:19:16.142179 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:19:16.143628 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:19:16.144952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:19:16.176590 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:19:16.181116 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:19:16.185111 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:19:16.188355 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:19:16.257590 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:19:16.269760 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:19:16.272232 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:19:16.277625 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:19:16.289862 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:19:16.293827 ignition[916]: INFO : Ignition 2.19.0 Jan 17 12:19:16.293827 ignition[916]: INFO : Stage: mount Jan 17 12:19:16.295358 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:19:16.295358 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:19:16.295358 ignition[916]: INFO : mount: mount passed Jan 17 12:19:16.295358 ignition[916]: INFO : Ignition finished successfully Jan 17 12:19:16.296925 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:19:16.308718 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:19:16.756646 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:19:16.765794 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:19:16.772228 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928) Jan 17 12:19:16.772259 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:19:16.772275 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:19:16.773143 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:19:16.776650 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:19:16.777083 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:19:16.792917 ignition[945]: INFO : Ignition 2.19.0 Jan 17 12:19:16.792917 ignition[945]: INFO : Stage: files Jan 17 12:19:16.794512 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:19:16.794512 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:19:16.794512 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:19:16.798073 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:19:16.798073 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:19:16.798073 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:19:16.798073 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:19:16.798073 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:19:16.797218 unknown[945]: wrote ssh authorized keys file for user: core Jan 17 12:19:16.805412 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:19:16.805412 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 17 12:19:16.860603 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:19:17.066394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:19:17.066394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:19:17.070365 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 17 12:19:17.364931 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:19:17.591018 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:19:17.591018 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:19:17.594638 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:19:17.594638 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:19:17.594638 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:19:17.594638 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 12:19:17.594638 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:19:17.594638 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:19:17.594638 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 12:19:17.594638 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:19:17.621997 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:19:17.622857 systemd-networkd[765]: eth0: Gained IPv6LL Jan 17 12:19:17.626008 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:19:17.627687 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:19:17.627687 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:19:17.627687 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:19:17.627687 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:19:17.627687 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:19:17.627687 ignition[945]: INFO : files: files passed Jan 17 12:19:17.627687 ignition[945]: INFO : Ignition finished successfully Jan 17 12:19:17.628163 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:19:17.636802 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:19:17.639315 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 17 12:19:17.642955 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:19:17.644023 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:19:17.647273 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:19:17.650354 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:19:17.650354 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:19:17.653811 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:19:17.655348 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:19:17.656953 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:19:17.663787 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:19:17.682159 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:19:17.682285 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:19:17.684597 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:19:17.686633 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:19:17.688536 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:19:17.700795 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:19:17.713671 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:19:17.724757 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:19:17.732501 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:19:17.733798 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:19:17.735826 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:19:17.737568 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:19:17.737717 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:19:17.740240 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:19:17.742302 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:19:17.743980 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:19:17.745755 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:19:17.747742 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:19:17.749791 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:19:17.751678 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:19:17.753700 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:19:17.755683 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:19:17.757424 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:19:17.758987 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:19:17.759119 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:19:17.761474 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 17 12:19:17.762656 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:19:17.764644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:19:17.768682 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:19:17.769976 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:19:17.770104 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:19:17.772860 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:19:17.772980 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:19:17.774982 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:19:17.776549 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:19:17.778399 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:19:17.779743 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:19:17.781668 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:19:17.783862 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:19:17.783998 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:19:17.785500 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:19:17.785642 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:19:17.787196 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:19:17.787365 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:19:17.788946 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:19:17.789096 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:19:17.797831 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:19:17.798737 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:19:17.798929 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:19:17.801708 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:19:17.803515 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:19:17.803725 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:19:17.805621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:19:17.809304 ignition[1002]: INFO : Ignition 2.19.0 Jan 17 12:19:17.809304 ignition[1002]: INFO : Stage: umount Jan 17 12:19:17.809304 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:19:17.809304 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:19:17.805765 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:19:17.816509 ignition[1002]: INFO : umount: umount passed Jan 17 12:19:17.816509 ignition[1002]: INFO : Ignition finished successfully Jan 17 12:19:17.811602 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:19:17.811724 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:19:17.813166 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:19:17.813238 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:19:17.817940 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 17 12:19:17.818324 systemd[1]: Stopped target network.target - Network. Jan 17 12:19:17.819513 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:19:17.819574 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:19:17.821416 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:19:17.821471 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:19:17.823142 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:19:17.823187 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:19:17.825766 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:19:17.825816 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:19:17.827081 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:19:17.830140 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:19:17.836734 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:19:17.836856 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:19:17.837675 systemd-networkd[765]: eth0: DHCPv6 lease lost Jan 17 12:19:17.840767 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:19:17.840865 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:19:17.843648 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:19:17.843704 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:19:17.854777 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:19:17.855663 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:19:17.855729 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:19:17.857778 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:19:17.857823 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:19:17.859578 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:19:17.859638 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:19:17.861875 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:19:17.861919 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:19:17.863920 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:19:17.872752 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:19:17.872867 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:19:17.874906 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:19:17.874988 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:19:17.876875 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:19:17.876960 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:19:17.881219 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:19:17.881362 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:19:17.883596 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:19:17.883650 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 17 12:19:17.885457 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:19:17.885490 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:19:17.887345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:19:17.887396 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:19:17.890206 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:19:17.890254 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:19:17.893046 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:19:17.893090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:19:17.903769 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:19:17.904810 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:19:17.904874 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:19:17.906974 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:19:17.907021 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:19:17.909185 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:19:17.909264 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:19:17.911456 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:19:17.913731 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:19:17.923010 systemd[1]: Switching root. Jan 17 12:19:17.950729 systemd-journald[238]: Journal stopped Jan 17 12:19:18.650681 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 17 12:19:18.650732 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:19:18.650747 kernel: SELinux: policy capability open_perms=1 Jan 17 12:19:18.650761 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:19:18.650774 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:19:18.650784 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:19:18.650793 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:19:18.650803 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:19:18.650813 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:19:18.650823 kernel: audit: type=1403 audit(1737116358.089:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:19:18.650836 systemd[1]: Successfully loaded SELinux policy in 31.565ms. Jan 17 12:19:18.650850 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.423ms. Jan 17 12:19:18.650862 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:19:18.650875 systemd[1]: Detected virtualization kvm. Jan 17 12:19:18.650886 systemd[1]: Detected architecture arm64. Jan 17 12:19:18.650896 systemd[1]: Detected first boot. Jan 17 12:19:18.650907 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:19:18.650918 zram_generator::config[1046]: No configuration found. Jan 17 12:19:18.650929 systemd[1]: Populated /etc with preset unit settings. 
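The systemd 255 banner above encodes compile-time options as "+NAME" / "-NAME" tokens (plus a default-hierarchy setting). A small parser for that convention, using a shortened copy of the string from the log:

    # Parse the +/- feature tokens from a systemd version banner.
    # String shortened from the log line above; the trailing
    # "default-hierarchy=unified" token is intentionally left out here.
    feature_line = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -PWQUALITY -SYSVINIT"

    tokens = feature_line.split()
    enabled = sorted(t[1:] for t in tokens if t.startswith("+"))
    disabled = sorted(t[1:] for t in tokens if t.startswith("-"))
    print("enabled: ", enabled)
    print("disabled:", disabled)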
Jan 17 12:19:18.650939 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:19:18.650949 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:19:18.650962 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:19:18.650973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:19:18.650985 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:19:18.650996 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:19:18.651006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:19:18.651020 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:19:18.651031 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:19:18.651042 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:19:18.651052 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:19:18.651064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:19:18.651075 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:19:18.651086 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:19:18.651096 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:19:18.651107 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:19:18.651118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:19:18.651129 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 12:19:18.651139 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:19:18.651150 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:19:18.651162 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:19:18.651172 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:19:18.651183 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:19:18.651194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:19:18.651205 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:19:18.651218 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:19:18.651228 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:19:18.651240 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:19:18.651251 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:19:18.651261 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:19:18.651278 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:19:18.651290 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:19:18.651301 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:19:18.651312 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 17 12:19:18.651322 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:19:18.651333 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:19:18.651346 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:19:18.651357 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:19:18.651367 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:19:18.651378 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:19:18.651389 systemd[1]: Reached target machines.target - Containers. Jan 17 12:19:18.651399 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:19:18.651409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:19:18.651420 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:19:18.651431 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:19:18.651443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:19:18.651455 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:19:18.651466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:19:18.651477 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:19:18.651488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:19:18.651499 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:19:18.651509 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:19:18.651532 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:19:18.651545 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:19:18.651557 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:19:18.651567 kernel: fuse: init (API version 7.39) Jan 17 12:19:18.651577 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:19:18.651588 kernel: ACPI: bus type drm_connector registered Jan 17 12:19:18.651598 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:19:18.651615 kernel: loop: module loaded Jan 17 12:19:18.651627 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:19:18.651637 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:19:18.651650 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:19:18.651662 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:19:18.651673 systemd[1]: Stopped verity-setup.service. Jan 17 12:19:18.651701 systemd-journald[1117]: Collecting audit messages is disabled. Jan 17 12:19:18.651723 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:19:18.651736 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 17 12:19:18.651752 systemd-journald[1117]: Journal started Jan 17 12:19:18.651774 systemd-journald[1117]: Runtime Journal (/run/log/journal/63ec6913e06443fe97dce0c13a6d3cae) is 5.9M, max 47.3M, 41.4M free. Jan 17 12:19:18.444092 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:19:18.464948 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:19:18.465297 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:19:18.655718 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:19:18.656321 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:19:18.657515 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:19:18.658771 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:19:18.660032 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:19:18.662644 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:19:18.664056 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:19:18.665607 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:19:18.665768 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:19:18.667222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:19:18.667371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:19:18.668851 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:19:18.668982 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:19:18.671943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:19:18.672080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:19:18.673722 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:19:18.673847 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:19:18.675140 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:19:18.675284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:19:18.676801 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:19:18.679018 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:19:18.680513 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:19:18.691961 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:19:18.699733 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:19:18.701901 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:19:18.703034 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:19:18.703076 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:19:18.705057 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:19:18.707339 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:19:18.709546 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 17 12:19:18.710713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:19:18.712039 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:19:18.714795 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:19:18.716120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:19:18.717768 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:19:18.719064 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:19:18.720805 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:19:18.722089 systemd-journald[1117]: Time spent on flushing to /var/log/journal/63ec6913e06443fe97dce0c13a6d3cae is 23.486ms for 853 entries. Jan 17 12:19:18.722089 systemd-journald[1117]: System Journal (/var/log/journal/63ec6913e06443fe97dce0c13a6d3cae) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:19:18.757646 systemd-journald[1117]: Received client request to flush runtime journal. Jan 17 12:19:18.757703 kernel: loop0: detected capacity change from 0 to 114432 Jan 17 12:19:18.727821 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:19:18.731476 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:19:18.736196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:19:18.737806 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:19:18.739200 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:19:18.740960 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:19:18.742595 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:19:18.746443 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:19:18.758864 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:19:18.764826 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:19:18.767899 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:19:18.769534 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:19:18.773841 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:19:18.776158 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:19:18.778674 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:19:18.792476 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:19:18.806648 kernel: loop1: detected capacity change from 0 to 114328 Jan 17 12:19:18.812827 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:19:18.819474 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:19:18.836403 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. 
Jan 17 12:19:18.836421 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jan 17 12:19:18.842640 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:19:18.852296 kernel: loop2: detected capacity change from 0 to 194096 Jan 17 12:19:18.882651 kernel: loop3: detected capacity change from 0 to 114432 Jan 17 12:19:18.887640 kernel: loop4: detected capacity change from 0 to 114328 Jan 17 12:19:18.891638 kernel: loop5: detected capacity change from 0 to 194096 Jan 17 12:19:18.895784 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:19:18.896156 (sd-merge)[1183]: Merged extensions into '/usr'. Jan 17 12:19:18.901739 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:19:18.901844 systemd[1]: Reloading... Jan 17 12:19:18.964645 zram_generator::config[1207]: No configuration found. Jan 17 12:19:18.986935 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:19:19.048208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:19.083730 systemd[1]: Reloading finished in 181 ms. Jan 17 12:19:19.119648 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:19:19.121072 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:19:19.139843 systemd[1]: Starting ensure-sysext.service... Jan 17 12:19:19.142189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:19:19.152091 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:19:19.152104 systemd[1]: Reloading... Jan 17 12:19:19.160470 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:19:19.160756 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:19:19.161397 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:19:19.161661 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jan 17 12:19:19.161715 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jan 17 12:19:19.163801 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:19:19.163815 systemd-tmpfiles[1245]: Skipping /boot Jan 17 12:19:19.170771 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:19:19.170787 systemd-tmpfiles[1245]: Skipping /boot Jan 17 12:19:19.201645 zram_generator::config[1276]: No configuration found. Jan 17 12:19:19.278993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:19.314325 systemd[1]: Reloading finished in 161 ms. Jan 17 12:19:19.332685 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:19:19.342073 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
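The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr (the loop3 to loop5 capacity changes are those images being attached). A sketch that enumerates sysext images the way the merge step would find them; the search directories are recalled from systemd-sysext(8) and should be treated as an assumption:

    from pathlib import Path

    # Candidate sysext image directories (assumed from systemd-sysext(8));
    # on this boot Ignition linked the kubernetes image under /etc/extensions.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for img in sorted(base.glob("*.raw")):
            print(f"{d}: {img.name}")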
Jan 17 12:19:19.349318 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:19:19.351995 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:19:19.354253 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:19:19.357864 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:19:19.362983 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:19:19.371429 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:19:19.375938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:19:19.377199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:19:19.383425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:19:19.387982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:19:19.391194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:19:19.392900 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:19:19.394682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:19:19.394818 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:19:19.395476 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Jan 17 12:19:19.396538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:19:19.398649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:19:19.400448 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:19:19.400570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:19:19.402466 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:19:19.408906 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:19:19.409125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:19:19.415956 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:19:19.417493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:19:19.423653 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:19:19.425277 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:19:19.432979 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:19:19.438788 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:19:19.444403 systemd[1]: Finished ensure-sysext.service. Jan 17 12:19:19.453251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:19:19.463010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 17 12:19:19.463639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1347) Jan 17 12:19:19.466318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:19:19.470236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:19:19.474666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:19:19.475776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:19:19.479282 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:19:19.485252 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:19:19.488184 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:19:19.488680 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:19:19.488833 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:19:19.492740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:19:19.492876 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:19:19.494349 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:19:19.494478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:19:19.505537 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 17 12:19:19.505820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:19:19.505960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:19:19.508550 augenrules[1373]: No rules Jan 17 12:19:19.510421 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:19:19.515452 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:19:19.528924 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:19:19.530087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:19:19.530147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:19:19.548980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:19:19.565328 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:19:19.567350 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:19:19.572312 systemd-resolved[1312]: Positive Trust Anchors: Jan 17 12:19:19.572568 systemd-resolved[1312]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:19:19.572665 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:19:19.572875 systemd-networkd[1375]: lo: Link UP Jan 17 12:19:19.572882 systemd-networkd[1375]: lo: Gained carrier Jan 17 12:19:19.573599 systemd-networkd[1375]: Enumeration completed Jan 17 12:19:19.573709 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:19:19.574336 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:19:19.574340 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:19:19.575009 systemd-networkd[1375]: eth0: Link UP Jan 17 12:19:19.575012 systemd-networkd[1375]: eth0: Gained carrier Jan 17 12:19:19.575024 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:19:19.581756 systemd-resolved[1312]: Defaulting to hostname 'linux'. Jan 17 12:19:19.581809 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:19:19.594729 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:19:19.596934 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection. Jan 17 12:19:20.029623 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:19:20.029670 systemd-timesyncd[1376]: Initial clock synchronization to Fri 2025-01-17 12:19:20.029533 UTC. Jan 17 12:19:20.030423 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:19:20.032003 systemd-resolved[1312]: Clock change detected. Flushing caches. Jan 17 12:19:20.032417 systemd[1]: Reached target network.target - Network. Jan 17 12:19:20.033376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:19:20.046000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:19:20.047402 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:19:20.052038 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:19:20.069037 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:19:20.091929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:19:20.109894 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:19:20.111827 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:19:20.113075 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:19:20.114402 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
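By this point systemd-networkd has brought eth0 up and logged the DHCPv4 lease, and systemd-timesyncd has stepped the clock (hence the jump from 12:19:19 to 12:19:20 timestamps and the "Clock change detected" message). A small sketch that pulls the address, prefix and gateway out of a networkd lease message like the one quoted from the log above:

    import re

    # The lease message quoted from the log above.
    line = ("systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.124/16, "
            "gateway 10.0.0.1 acquired from 10.0.0.1")

    m = re.search(
        r"(?P<iface>\S+): DHCPv4 address (?P<addr>[\d.]+)/(?P<prefix>\d+), "
        r"gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)",
        line,
    )
    if m:
        print(m["iface"], f'{m["addr"]}/{m["prefix"]}',
              "via", m["gw"], "from", m["server"])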
Jan 17 12:19:20.115654 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:19:20.117128 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:19:20.118304 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:19:20.119549 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:19:20.120813 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:19:20.120862 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:19:20.121760 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:19:20.123251 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:19:20.125644 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:19:20.137603 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:19:20.139786 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:19:20.141429 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:19:20.142591 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:19:20.143543 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:19:20.144508 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:19:20.144543 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:19:20.145470 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:19:20.147884 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:19:20.147461 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:19:20.151019 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:19:20.154000 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:19:20.155120 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:19:20.157743 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:19:20.161118 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:19:20.162313 jq[1410]: false Jan 17 12:19:20.165722 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:19:20.171020 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 17 12:19:20.178741 extend-filesystems[1411]: Found loop3 Jan 17 12:19:20.178741 extend-filesystems[1411]: Found loop4 Jan 17 12:19:20.178741 extend-filesystems[1411]: Found loop5 Jan 17 12:19:20.178741 extend-filesystems[1411]: Found vda Jan 17 12:19:20.178741 extend-filesystems[1411]: Found vda1 Jan 17 12:19:20.178741 extend-filesystems[1411]: Found vda2 Jan 17 12:19:20.178741 extend-filesystems[1411]: Found vda3 Jan 17 12:19:20.178741 extend-filesystems[1411]: Found usr Jan 17 12:19:20.178741 extend-filesystems[1411]: Found vda4 Jan 17 12:19:20.178741 extend-filesystems[1411]: Found vda6 Jan 17 12:19:20.178741 extend-filesystems[1411]: Found vda7 Jan 17 12:19:20.178741 extend-filesystems[1411]: Found vda9 Jan 17 12:19:20.178741 extend-filesystems[1411]: Checking size of /dev/vda9 Jan 17 12:19:20.179832 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:19:20.185904 dbus-daemon[1409]: [system] SELinux support is enabled Jan 17 12:19:20.184976 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:19:20.185484 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:19:20.186433 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:19:20.191031 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:19:20.195558 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:19:20.199882 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:19:20.218305 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1343) Jan 17 12:19:20.219346 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:19:20.219546 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:19:20.219828 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:19:20.219983 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:19:20.221545 extend-filesystems[1411]: Resized partition /dev/vda9 Jan 17 12:19:20.223246 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:19:20.223405 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:19:20.230867 jq[1426]: true Jan 17 12:19:20.231129 update_engine[1424]: I20250117 12:19:20.230799 1424 main.cc:92] Flatcar Update Engine starting Jan 17 12:19:20.236227 extend-filesystems[1435]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:19:20.242869 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:19:20.246397 jq[1439]: true Jan 17 12:19:20.248452 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 12:19:20.251651 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:19:20.251685 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:19:20.252071 systemd-logind[1420]: New seat seat0. 
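extend-filesystems has found that /dev/vda9 needs growing and the kernel reports the ext4 resize from 553472 to 1864699 blocks (completed a few entries later). Quick arithmetic on what that means, assuming the 4 KiB block size that resize2fs reports for this filesystem:

    # Size change implied by the ext4 resize logged above (4 KiB blocks assumed,
    # as reported by resize2fs for /dev/vda9).
    BLOCK = 4096
    before_blocks, after_blocks = 553_472, 1_864_699

    def to_gib(blocks):
        return blocks * BLOCK / 2**30

    print(f"before: {to_gib(before_blocks):.2f} GiB")                  # ~2.11 GiB
    print(f"after:  {to_gib(after_blocks):.2f} GiB")                   # ~7.11 GiB
    print(f"growth: {to_gib(after_blocks - before_blocks):.2f} GiB")   # ~5.00 GiB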
Jan 17 12:19:20.254991 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:19:20.255014 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:19:20.255279 update_engine[1424]: I20250117 12:19:20.255187 1424 update_check_scheduler.cc:74] Next update check in 10m20s Jan 17 12:19:20.255689 tar[1434]: linux-arm64/helm Jan 17 12:19:20.264028 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:19:20.266567 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:19:20.267984 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:19:20.269889 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:19:20.272452 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:19:20.296367 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:19:20.296367 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:19:20.296367 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:19:20.300542 extend-filesystems[1411]: Resized filesystem in /dev/vda9 Jan 17 12:19:20.300833 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:19:20.301021 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:19:20.318126 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:19:20.323065 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:19:20.325810 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:19:20.331425 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:19:20.487150 containerd[1436]: time="2025-01-17T12:19:20.487023473Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:19:20.522477 containerd[1436]: time="2025-01-17T12:19:20.522388553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:20.523997 containerd[1436]: time="2025-01-17T12:19:20.523840993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:20.523997 containerd[1436]: time="2025-01-17T12:19:20.523988793Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:19:20.524086 containerd[1436]: time="2025-01-17T12:19:20.524006033Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:19:20.524180 containerd[1436]: time="2025-01-17T12:19:20.524159513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:19:20.524204 containerd[1436]: time="2025-01-17T12:19:20.524185993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:19:20.524255 containerd[1436]: time="2025-01-17T12:19:20.524239833Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:20.524275 containerd[1436]: time="2025-01-17T12:19:20.524255473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:20.524632 containerd[1436]: time="2025-01-17T12:19:20.524596913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:20.524681 containerd[1436]: time="2025-01-17T12:19:20.524665273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:20.524702 containerd[1436]: time="2025-01-17T12:19:20.524687873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:20.524702 containerd[1436]: time="2025-01-17T12:19:20.524698353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:20.524924 containerd[1436]: time="2025-01-17T12:19:20.524902793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:20.525151 containerd[1436]: time="2025-01-17T12:19:20.525125193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:20.525423 containerd[1436]: time="2025-01-17T12:19:20.525400433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:20.525452 containerd[1436]: time="2025-01-17T12:19:20.525428313Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:19:20.525592 containerd[1436]: time="2025-01-17T12:19:20.525572953Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:19:20.525700 containerd[1436]: time="2025-01-17T12:19:20.525677833Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:19:20.529388 containerd[1436]: time="2025-01-17T12:19:20.529211033Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:19:20.529445 containerd[1436]: time="2025-01-17T12:19:20.529407473Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:19:20.529445 containerd[1436]: time="2025-01-17T12:19:20.529433033Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:19:20.529505 containerd[1436]: time="2025-01-17T12:19:20.529453753Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:19:20.529505 containerd[1436]: time="2025-01-17T12:19:20.529473913Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 17 12:19:20.529818 containerd[1436]: time="2025-01-17T12:19:20.529779513Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:19:20.530285 containerd[1436]: time="2025-01-17T12:19:20.530249633Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:19:20.530494 containerd[1436]: time="2025-01-17T12:19:20.530441633Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:19:20.530592 containerd[1436]: time="2025-01-17T12:19:20.530539113Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:19:20.530680 containerd[1436]: time="2025-01-17T12:19:20.530578793Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:19:20.530819 containerd[1436]: time="2025-01-17T12:19:20.530740633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:19:20.530819 containerd[1436]: time="2025-01-17T12:19:20.530769273Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:19:20.531080 containerd[1436]: time="2025-01-17T12:19:20.531005993Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:19:20.531080 containerd[1436]: time="2025-01-17T12:19:20.531049113Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:19:20.531379 containerd[1436]: time="2025-01-17T12:19:20.531070833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:19:20.531379 containerd[1436]: time="2025-01-17T12:19:20.531352633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:19:20.531379 containerd[1436]: time="2025-01-17T12:19:20.531367233Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:19:20.531379 containerd[1436]: time="2025-01-17T12:19:20.531378633Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:19:20.531481 containerd[1436]: time="2025-01-17T12:19:20.531406433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531481 containerd[1436]: time="2025-01-17T12:19:20.531421513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531481 containerd[1436]: time="2025-01-17T12:19:20.531434593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531481 containerd[1436]: time="2025-01-17T12:19:20.531447673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531481 containerd[1436]: time="2025-01-17T12:19:20.531460033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531481 containerd[1436]: time="2025-01-17T12:19:20.531473993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 17 12:19:20.531601 containerd[1436]: time="2025-01-17T12:19:20.531494633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531601 containerd[1436]: time="2025-01-17T12:19:20.531522433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531601 containerd[1436]: time="2025-01-17T12:19:20.531536913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531601 containerd[1436]: time="2025-01-17T12:19:20.531551553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531601 containerd[1436]: time="2025-01-17T12:19:20.531567073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531601 containerd[1436]: time="2025-01-17T12:19:20.531578993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531601 containerd[1436]: time="2025-01-17T12:19:20.531590633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531718 containerd[1436]: time="2025-01-17T12:19:20.531605873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:19:20.531718 containerd[1436]: time="2025-01-17T12:19:20.531627713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531718 containerd[1436]: time="2025-01-17T12:19:20.531641233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531718 containerd[1436]: time="2025-01-17T12:19:20.531651713Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:19:20.531789 containerd[1436]: time="2025-01-17T12:19:20.531754633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:19:20.531789 containerd[1436]: time="2025-01-17T12:19:20.531770673Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:19:20.531789 containerd[1436]: time="2025-01-17T12:19:20.531780393Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:19:20.531857 containerd[1436]: time="2025-01-17T12:19:20.531791033Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:19:20.531857 containerd[1436]: time="2025-01-17T12:19:20.531800313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:19:20.531857 containerd[1436]: time="2025-01-17T12:19:20.531812313Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:19:20.531857 containerd[1436]: time="2025-01-17T12:19:20.531821833Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:19:20.531857 containerd[1436]: time="2025-01-17T12:19:20.531832033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:19:20.532545 containerd[1436]: time="2025-01-17T12:19:20.532461313Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:19:20.532545 containerd[1436]: time="2025-01-17T12:19:20.532541073Z" level=info msg="Connect containerd service" Jan 17 12:19:20.532691 containerd[1436]: time="2025-01-17T12:19:20.532575273Z" level=info msg="using legacy CRI server" Jan 17 12:19:20.532691 containerd[1436]: time="2025-01-17T12:19:20.532583953Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:19:20.532759 containerd[1436]: time="2025-01-17T12:19:20.532686953Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:19:20.533604 containerd[1436]: time="2025-01-17T12:19:20.533577993Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:19:20.533893 
containerd[1436]: time="2025-01-17T12:19:20.533860353Z" level=info msg="Start subscribing containerd event" Jan 17 12:19:20.534033 containerd[1436]: time="2025-01-17T12:19:20.533953393Z" level=info msg="Start recovering state" Jan 17 12:19:20.534110 containerd[1436]: time="2025-01-17T12:19:20.534095433Z" level=info msg="Start event monitor" Jan 17 12:19:20.534234 containerd[1436]: time="2025-01-17T12:19:20.534179593Z" level=info msg="Start snapshots syncer" Jan 17 12:19:20.534479 containerd[1436]: time="2025-01-17T12:19:20.534271873Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:19:20.534479 containerd[1436]: time="2025-01-17T12:19:20.534286513Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:19:20.534479 containerd[1436]: time="2025-01-17T12:19:20.534402513Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:19:20.534584 containerd[1436]: time="2025-01-17T12:19:20.534569633Z" level=info msg="Start streaming server" Jan 17 12:19:20.536193 containerd[1436]: time="2025-01-17T12:19:20.534714033Z" level=info msg="containerd successfully booted in 0.048738s" Jan 17 12:19:20.534828 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:19:20.611193 tar[1434]: linux-arm64/LICENSE Jan 17 12:19:20.611295 tar[1434]: linux-arm64/README.md Jan 17 12:19:20.623887 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:19:20.700129 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:19:20.719375 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:19:20.730165 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:19:20.735588 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:19:20.735787 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:19:20.740104 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:19:20.751105 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:19:20.754659 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:19:20.757482 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 12:19:20.759001 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:19:21.060041 systemd-networkd[1375]: eth0: Gained IPv6LL Jan 17 12:19:21.062592 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:19:21.065374 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:19:21.075095 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:19:21.077465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:21.079630 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:19:21.093887 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:19:21.094782 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:19:21.098016 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:19:21.103834 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:19:21.621666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:21.623213 systemd[1]: Reached target multi-user.target - Multi-User System. 
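The "failed to load cni during init" message from the CRI plugin above is expected on a first boot: containerd looks for a network config in /etc/cni/net.d, and nothing has installed one yet, so pod networking stays uninitialized until a CNI plugin (flannel, Calico, or similar) drops a file there. As a rough illustration only (the file name, network name, and subnet are assumptions, not values from this host), a minimal bridge conflist of the kind that later lands in /etc/cni/net.d could look like:

    /etc/cni/net.d/10-example.conflist        (hypothetical file name)
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

Once a valid conflist exists there, the "cni network conf syncer" started above is meant to pick it up without restarting containerd.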
Jan 17 12:19:21.626551 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:19:21.629426 systemd[1]: Startup finished in 574ms (kernel) + 4.369s (initrd) + 3.144s (userspace) = 8.089s. Jan 17 12:19:22.108225 kubelet[1521]: E0117 12:19:22.108116 1521 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:19:22.110929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:19:22.111073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:19:27.002467 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:19:27.003562 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:57424.service - OpenSSH per-connection server daemon (10.0.0.1:57424). Jan 17 12:19:27.058536 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 57424 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:19:27.060274 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:27.069816 systemd-logind[1420]: New session 1 of user core. Jan 17 12:19:27.070719 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:19:27.078063 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:19:27.087874 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:19:27.089765 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:19:27.095634 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:19:27.163168 systemd[1540]: Queued start job for default target default.target. Jan 17 12:19:27.170632 systemd[1540]: Created slice app.slice - User Application Slice. Jan 17 12:19:27.170672 systemd[1540]: Reached target paths.target - Paths. Jan 17 12:19:27.170684 systemd[1540]: Reached target timers.target - Timers. Jan 17 12:19:27.171754 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:19:27.180207 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:19:27.180262 systemd[1540]: Reached target sockets.target - Sockets. Jan 17 12:19:27.180274 systemd[1540]: Reached target basic.target - Basic System. Jan 17 12:19:27.180306 systemd[1540]: Reached target default.target - Main User Target. Jan 17 12:19:27.180330 systemd[1540]: Startup finished in 80ms. Jan 17 12:19:27.180595 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:19:27.181783 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:19:27.242785 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:57432.service - OpenSSH per-connection server daemon (10.0.0.1:57432). Jan 17 12:19:27.275622 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 57432 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:19:27.276706 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:27.280704 systemd-logind[1420]: New session 2 of user core. Jan 17 12:19:27.289965 systemd[1]: Started session-2.scope - Session 2 of User core. 
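The kubelet failure above is likewise the normal pre-bootstrap state rather than a broken install: /var/lib/kubelet/config.yaml is only written when kubeadm init or kubeadm join runs, so systemd keeps retrying kubelet.service (the "Scheduled restart job" entries further down) until that happens. For orientation, a minimal KubeletConfiguration of the kind kubeadm writes there might look like the sketch below; the cgroup driver, static pod path, and client CA path match values that appear elsewhere in this log, while the DNS settings are illustrative assumptions:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10
    authentication:
      anonymous:
        enabled: false
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook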
Jan 17 12:19:27.341150 sshd[1551]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:27.355038 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:57432.service: Deactivated successfully. Jan 17 12:19:27.356273 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:19:27.358842 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:19:27.359813 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:57438.service - OpenSSH per-connection server daemon (10.0.0.1:57438). Jan 17 12:19:27.360490 systemd-logind[1420]: Removed session 2. Jan 17 12:19:27.391787 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 57438 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:19:27.392808 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:27.396188 systemd-logind[1420]: New session 3 of user core. Jan 17 12:19:27.404981 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:19:27.451978 sshd[1558]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:27.459935 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:57438.service: Deactivated successfully. Jan 17 12:19:27.461097 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:19:27.463126 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:19:27.464378 systemd-logind[1420]: Removed session 3. Jan 17 12:19:27.465832 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:57454.service - OpenSSH per-connection server daemon (10.0.0.1:57454). Jan 17 12:19:27.497714 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 57454 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:19:27.498775 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:27.501892 systemd-logind[1420]: New session 4 of user core. Jan 17 12:19:27.511970 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:19:27.561870 sshd[1565]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:27.579011 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:57454.service: Deactivated successfully. Jan 17 12:19:27.580289 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:19:27.582826 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:19:27.583838 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:57460.service - OpenSSH per-connection server daemon (10.0.0.1:57460). Jan 17 12:19:27.584618 systemd-logind[1420]: Removed session 4. Jan 17 12:19:27.615864 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 57460 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:19:27.616971 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:27.619840 systemd-logind[1420]: New session 5 of user core. Jan 17 12:19:27.629961 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:19:27.685297 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:19:27.685565 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:27.698631 sudo[1575]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:27.700222 sshd[1572]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:27.709026 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:57460.service: Deactivated successfully. 
Jan 17 12:19:27.710339 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:19:27.711536 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:19:27.712623 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:57470.service - OpenSSH per-connection server daemon (10.0.0.1:57470). Jan 17 12:19:27.713386 systemd-logind[1420]: Removed session 5. Jan 17 12:19:27.745104 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 57470 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:19:27.746242 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:27.749235 systemd-logind[1420]: New session 6 of user core. Jan 17 12:19:27.761960 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:19:27.811301 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:19:27.811568 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:27.814251 sudo[1584]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:27.818609 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:19:27.818880 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:27.833110 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:19:27.834158 auditctl[1587]: No rules Jan 17 12:19:27.834927 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:19:27.835129 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:19:27.836585 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:19:27.857899 augenrules[1605]: No rules Jan 17 12:19:27.859925 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:19:27.860796 sudo[1583]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:27.862112 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:27.872103 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:57470.service: Deactivated successfully. Jan 17 12:19:27.873303 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:19:27.874473 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:19:27.875448 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:57474.service - OpenSSH per-connection server daemon (10.0.0.1:57474). Jan 17 12:19:27.876099 systemd-logind[1420]: Removed session 6. Jan 17 12:19:27.907995 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 57474 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:19:27.909109 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:27.912516 systemd-logind[1420]: New session 7 of user core. Jan 17 12:19:27.924963 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:19:27.973979 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:19:27.974230 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:28.273051 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 17 12:19:28.273279 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:19:28.525763 dockerd[1635]: time="2025-01-17T12:19:28.525381353Z" level=info msg="Starting up" Jan 17 12:19:28.638815 dockerd[1635]: time="2025-01-17T12:19:28.638767193Z" level=info msg="Loading containers: start." Jan 17 12:19:28.718873 kernel: Initializing XFRM netlink socket Jan 17 12:19:28.782698 systemd-networkd[1375]: docker0: Link UP Jan 17 12:19:28.803984 dockerd[1635]: time="2025-01-17T12:19:28.803953513Z" level=info msg="Loading containers: done." Jan 17 12:19:28.814433 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1550902849-merged.mount: Deactivated successfully. Jan 17 12:19:28.815544 dockerd[1635]: time="2025-01-17T12:19:28.815497953Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:19:28.815636 dockerd[1635]: time="2025-01-17T12:19:28.815588513Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:19:28.815704 dockerd[1635]: time="2025-01-17T12:19:28.815685673Z" level=info msg="Daemon has completed initialization" Jan 17 12:19:28.844171 dockerd[1635]: time="2025-01-17T12:19:28.844108873Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:19:28.844361 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:19:29.567141 containerd[1436]: time="2025-01-17T12:19:29.567093513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 17 12:19:30.219275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294811917.mount: Deactivated successfully. 
Jan 17 12:19:31.283624 containerd[1436]: time="2025-01-17T12:19:31.283567593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:31.284131 containerd[1436]: time="2025-01-17T12:19:31.284095633Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 17 12:19:31.284877 containerd[1436]: time="2025-01-17T12:19:31.284835473Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:31.287908 containerd[1436]: time="2025-01-17T12:19:31.287875633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:31.290336 containerd[1436]: time="2025-01-17T12:19:31.290280473Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.72314004s" Jan 17 12:19:31.290336 containerd[1436]: time="2025-01-17T12:19:31.290329633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 17 12:19:31.309544 containerd[1436]: time="2025-01-17T12:19:31.309496753Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 17 12:19:32.124467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:19:32.132180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:32.225915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:32.229388 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:19:32.262886 kubelet[1860]: E0117 12:19:32.262827 1860 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:19:32.265787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:19:32.265957 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:19:33.019021 containerd[1436]: time="2025-01-17T12:19:33.018959953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:33.019862 containerd[1436]: time="2025-01-17T12:19:33.019792313Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 17 12:19:33.020565 containerd[1436]: time="2025-01-17T12:19:33.020526833Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:33.023265 containerd[1436]: time="2025-01-17T12:19:33.023232673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:33.024343 containerd[1436]: time="2025-01-17T12:19:33.024309713Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.71477712s" Jan 17 12:19:33.024377 containerd[1436]: time="2025-01-17T12:19:33.024343673Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 17 12:19:33.042630 containerd[1436]: time="2025-01-17T12:19:33.042596113Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 17 12:19:34.130419 containerd[1436]: time="2025-01-17T12:19:34.130370793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:34.131441 containerd[1436]: time="2025-01-17T12:19:34.131082553Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 17 12:19:34.133901 containerd[1436]: time="2025-01-17T12:19:34.131907953Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:34.134713 containerd[1436]: time="2025-01-17T12:19:34.134683273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:34.135988 containerd[1436]: time="2025-01-17T12:19:34.135937353Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.09330708s" Jan 17 12:19:34.135988 containerd[1436]: time="2025-01-17T12:19:34.135967753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 17 12:19:34.153200 
containerd[1436]: time="2025-01-17T12:19:34.153172673Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 17 12:19:35.094083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986460007.mount: Deactivated successfully. Jan 17 12:19:35.401024 containerd[1436]: time="2025-01-17T12:19:35.400961913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:35.401566 containerd[1436]: time="2025-01-17T12:19:35.401529153Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 17 12:19:35.402316 containerd[1436]: time="2025-01-17T12:19:35.402292273Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:35.404342 containerd[1436]: time="2025-01-17T12:19:35.404313993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:35.405029 containerd[1436]: time="2025-01-17T12:19:35.404898633Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.25169388s" Jan 17 12:19:35.405029 containerd[1436]: time="2025-01-17T12:19:35.404928273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 17 12:19:35.422428 containerd[1436]: time="2025-01-17T12:19:35.422387753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:19:36.136191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817424096.mount: Deactivated successfully. 
Jan 17 12:19:36.689030 containerd[1436]: time="2025-01-17T12:19:36.688851273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:36.689886 containerd[1436]: time="2025-01-17T12:19:36.689690633Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 17 12:19:36.690738 containerd[1436]: time="2025-01-17T12:19:36.690668673Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:36.693725 containerd[1436]: time="2025-01-17T12:19:36.693671673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:36.694913 containerd[1436]: time="2025-01-17T12:19:36.694882193Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.27246036s" Jan 17 12:19:36.694957 containerd[1436]: time="2025-01-17T12:19:36.694913793Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 12:19:36.712929 containerd[1436]: time="2025-01-17T12:19:36.712897633Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:19:37.129320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339161732.mount: Deactivated successfully. 
Jan 17 12:19:37.133879 containerd[1436]: time="2025-01-17T12:19:37.133820233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:37.134311 containerd[1436]: time="2025-01-17T12:19:37.134276673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 17 12:19:37.135701 containerd[1436]: time="2025-01-17T12:19:37.135662073Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:37.137665 containerd[1436]: time="2025-01-17T12:19:37.137619313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:37.138730 containerd[1436]: time="2025-01-17T12:19:37.138689633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 425.75304ms" Jan 17 12:19:37.138730 containerd[1436]: time="2025-01-17T12:19:37.138723673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 17 12:19:37.157008 containerd[1436]: time="2025-01-17T12:19:37.156977953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 17 12:19:37.706819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745680094.mount: Deactivated successfully. Jan 17 12:19:39.660072 containerd[1436]: time="2025-01-17T12:19:39.659879993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:39.661051 containerd[1436]: time="2025-01-17T12:19:39.660757873Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 17 12:19:39.664701 containerd[1436]: time="2025-01-17T12:19:39.664642993Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:39.667853 containerd[1436]: time="2025-01-17T12:19:39.667788593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:39.669104 containerd[1436]: time="2025-01-17T12:19:39.669068553Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.51194048s" Jan 17 12:19:39.669104 containerd[1436]: time="2025-01-17T12:19:39.669100873Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 17 12:19:42.374474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 17 12:19:42.384016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:42.473450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:42.476570 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:19:42.512670 kubelet[2095]: E0117 12:19:42.512624 2095 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:19:42.515190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:19:42.515329 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:19:44.323713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:44.342190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:44.359662 systemd[1]: Reloading requested from client PID 2110 ('systemctl') (unit session-7.scope)... Jan 17 12:19:44.359676 systemd[1]: Reloading... Jan 17 12:19:44.428875 zram_generator::config[2149]: No configuration found. Jan 17 12:19:44.629238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:44.680212 systemd[1]: Reloading finished in 320 ms. Jan 17 12:19:44.719914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:44.721745 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:44.723567 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:19:44.723744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:44.725210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:44.813152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:44.817352 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:19:44.863818 kubelet[2196]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:44.863818 kubelet[2196]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:19:44.863818 kubelet[2196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
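The three deprecation warnings above all point at the same fix: those flags are meant to move into the file named by --config, i.e. the same /var/lib/kubelet/config.yaml discussed earlier. On a kubelet of this vintage (v1.30.x, per the version line below), the first and third have direct KubeletConfiguration equivalents; the socket and plugin-directory values here simply reuse paths that appear elsewhere in this log, so treat the snippet as a hedged sketch rather than this node's actual configuration:

    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file counterpart; as its warning says, the sandbox image is instead reported by the CRI runtime (the SandboxImage visible in the containerd CRI config earlier).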
Jan 17 12:19:44.864654 kubelet[2196]: I0117 12:19:44.864598 2196 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:19:45.501163 kubelet[2196]: I0117 12:19:45.500011 2196 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:19:45.501163 kubelet[2196]: I0117 12:19:45.500042 2196 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:19:45.501163 kubelet[2196]: I0117 12:19:45.500233 2196 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:19:45.527096 kubelet[2196]: E0117 12:19:45.527055 2196 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:45.527200 kubelet[2196]: I0117 12:19:45.527168 2196 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:19:45.534760 kubelet[2196]: I0117 12:19:45.534733 2196 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:19:45.535888 kubelet[2196]: I0117 12:19:45.535830 2196 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:19:45.536050 kubelet[2196]: I0117 12:19:45.535886 2196 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:19:45.536130 kubelet[2196]: I0117 12:19:45.536115 2196 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:19:45.536130 kubelet[2196]: I0117 12:19:45.536124 2196 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:19:45.536382 kubelet[2196]: I0117 12:19:45.536357 2196 state_mem.go:36] "Initialized new in-memory state store" Jan 17 
12:19:45.537190 kubelet[2196]: I0117 12:19:45.537169 2196 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:19:45.537190 kubelet[2196]: I0117 12:19:45.537189 2196 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:19:45.537544 kubelet[2196]: I0117 12:19:45.537376 2196 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:19:45.537585 kubelet[2196]: I0117 12:19:45.537566 2196 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:19:45.538511 kubelet[2196]: I0117 12:19:45.538475 2196 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:19:45.539015 kubelet[2196]: I0117 12:19:45.538820 2196 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:19:45.539015 kubelet[2196]: W0117 12:19:45.538874 2196 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:19:45.539015 kubelet[2196]: W0117 12:19:45.538925 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:45.539015 kubelet[2196]: E0117 12:19:45.538978 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:45.539151 kubelet[2196]: W0117 12:19:45.539020 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:45.539151 kubelet[2196]: E0117 12:19:45.539092 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:45.539765 kubelet[2196]: I0117 12:19:45.539745 2196 server.go:1264] "Started kubelet" Jan 17 12:19:45.540974 kubelet[2196]: I0117 12:19:45.540946 2196 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:19:45.542867 kubelet[2196]: I0117 12:19:45.541344 2196 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:19:45.542867 kubelet[2196]: I0117 12:19:45.542432 2196 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:19:45.543304 kubelet[2196]: I0117 12:19:45.543260 2196 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:19:45.543550 kubelet[2196]: I0117 12:19:45.543534 2196 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:19:45.550486 kubelet[2196]: E0117 12:19:45.550146 2196 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b7a26259b03d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:19:45.539720153 +0000 UTC m=+0.718850281,LastTimestamp:2025-01-17 12:19:45.539720153 +0000 UTC m=+0.718850281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:19:45.551100 kubelet[2196]: I0117 12:19:45.551078 2196 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:19:45.551348 kubelet[2196]: I0117 12:19:45.551320 2196 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:19:45.551575 kubelet[2196]: I0117 12:19:45.551558 2196 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:19:45.551974 kubelet[2196]: W0117 12:19:45.551931 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:45.552029 kubelet[2196]: E0117 12:19:45.551980 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:45.552739 kubelet[2196]: E0117 12:19:45.552701 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Jan 17 12:19:45.553697 kubelet[2196]: I0117 12:19:45.553673 2196 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:19:45.553784 kubelet[2196]: I0117 12:19:45.553762 2196 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:19:45.554622 kubelet[2196]: E0117 12:19:45.554595 2196 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:19:45.555621 kubelet[2196]: I0117 12:19:45.555595 2196 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:19:45.565131 kubelet[2196]: I0117 12:19:45.565099 2196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:19:45.566245 kubelet[2196]: I0117 12:19:45.566220 2196 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:19:45.566485 kubelet[2196]: I0117 12:19:45.566368 2196 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:19:45.566485 kubelet[2196]: I0117 12:19:45.566389 2196 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:19:45.566485 kubelet[2196]: E0117 12:19:45.566427 2196 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:19:45.566485 kubelet[2196]: I0117 12:19:45.566469 2196 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:19:45.566485 kubelet[2196]: I0117 12:19:45.566483 2196 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:19:45.566607 kubelet[2196]: I0117 12:19:45.566505 2196 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:45.567205 kubelet[2196]: W0117 12:19:45.566983 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:45.567205 kubelet[2196]: E0117 12:19:45.567037 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:45.629506 kubelet[2196]: I0117 12:19:45.629463 2196 policy_none.go:49] "None policy: Start" Jan 17 12:19:45.630200 kubelet[2196]: I0117 12:19:45.630175 2196 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:19:45.630200 kubelet[2196]: I0117 12:19:45.630203 2196 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:19:45.636300 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:19:45.652578 kubelet[2196]: I0117 12:19:45.652538 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:45.652929 kubelet[2196]: E0117 12:19:45.652892 2196 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 17 12:19:45.655085 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:19:45.657570 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 12:19:45.667335 kubelet[2196]: E0117 12:19:45.667298 2196 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:19:45.667604 kubelet[2196]: I0117 12:19:45.667583 2196 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:19:45.667984 kubelet[2196]: I0117 12:19:45.667748 2196 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:19:45.667984 kubelet[2196]: I0117 12:19:45.667873 2196 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:19:45.669309 kubelet[2196]: E0117 12:19:45.669286 2196 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:19:45.753250 kubelet[2196]: E0117 12:19:45.753146 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Jan 17 12:19:45.854673 kubelet[2196]: I0117 12:19:45.854643 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:45.855005 kubelet[2196]: E0117 12:19:45.854970 2196 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 17 12:19:45.867840 kubelet[2196]: I0117 12:19:45.867801 2196 topology_manager.go:215] "Topology Admit Handler" podUID="15e2315aace825a7532dd9768ecd8aa6" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:19:45.868739 kubelet[2196]: I0117 12:19:45.868698 2196 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:19:45.869516 kubelet[2196]: I0117 12:19:45.869476 2196 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:19:45.875291 systemd[1]: Created slice kubepods-burstable-pod15e2315aace825a7532dd9768ecd8aa6.slice - libcontainer container kubepods-burstable-pod15e2315aace825a7532dd9768ecd8aa6.slice. Jan 17 12:19:45.883413 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 17 12:19:45.896879 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. 
Jan 17 12:19:45.952730 kubelet[2196]: I0117 12:19:45.952701 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:45.952827 kubelet[2196]: I0117 12:19:45.952750 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:19:45.952827 kubelet[2196]: I0117 12:19:45.952777 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15e2315aace825a7532dd9768ecd8aa6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"15e2315aace825a7532dd9768ecd8aa6\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:45.952827 kubelet[2196]: I0117 12:19:45.952794 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15e2315aace825a7532dd9768ecd8aa6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"15e2315aace825a7532dd9768ecd8aa6\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:45.952827 kubelet[2196]: I0117 12:19:45.952810 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:45.952827 kubelet[2196]: I0117 12:19:45.952825 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:45.952951 kubelet[2196]: I0117 12:19:45.952838 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:45.952951 kubelet[2196]: I0117 12:19:45.952864 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:45.952951 kubelet[2196]: I0117 12:19:45.952879 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15e2315aace825a7532dd9768ecd8aa6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"15e2315aace825a7532dd9768ecd8aa6\") " 
pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:46.153756 kubelet[2196]: E0117 12:19:46.153708 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Jan 17 12:19:46.182069 kubelet[2196]: E0117 12:19:46.182039 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:46.182663 containerd[1436]: time="2025-01-17T12:19:46.182615833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:15e2315aace825a7532dd9768ecd8aa6,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:46.195898 kubelet[2196]: E0117 12:19:46.195867 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:46.196265 containerd[1436]: time="2025-01-17T12:19:46.196234553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:46.199856 kubelet[2196]: E0117 12:19:46.199809 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:46.200159 containerd[1436]: time="2025-01-17T12:19:46.200130953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:46.256493 kubelet[2196]: I0117 12:19:46.256468 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:46.256751 kubelet[2196]: E0117 12:19:46.256731 2196 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 17 12:19:46.674603 kubelet[2196]: W0117 12:19:46.674465 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:46.674603 kubelet[2196]: E0117 12:19:46.674606 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:46.675948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3646150160.mount: Deactivated successfully. 
Jan 17 12:19:46.680866 containerd[1436]: time="2025-01-17T12:19:46.680715153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:46.682423 containerd[1436]: time="2025-01-17T12:19:46.682304953Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:19:46.683349 containerd[1436]: time="2025-01-17T12:19:46.683180473Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:46.684535 containerd[1436]: time="2025-01-17T12:19:46.684501953Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:19:46.685244 containerd[1436]: time="2025-01-17T12:19:46.685213433Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:46.686335 containerd[1436]: time="2025-01-17T12:19:46.686291633Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:46.686968 containerd[1436]: time="2025-01-17T12:19:46.686946353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 17 12:19:46.692221 containerd[1436]: time="2025-01-17T12:19:46.692178473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:46.693810 containerd[1436]: time="2025-01-17T12:19:46.693689833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.5006ms" Jan 17 12:19:46.694524 containerd[1436]: time="2025-01-17T12:19:46.694430033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.12832ms" Jan 17 12:19:46.697886 containerd[1436]: time="2025-01-17T12:19:46.697838753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 515.13844ms" Jan 17 12:19:46.774205 kubelet[2196]: W0117 12:19:46.774092 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:46.774205 kubelet[2196]: 
E0117 12:19:46.774142 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:46.824586 containerd[1436]: time="2025-01-17T12:19:46.824490673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:46.824586 containerd[1436]: time="2025-01-17T12:19:46.824554033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:46.824586 containerd[1436]: time="2025-01-17T12:19:46.824570513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:46.824887 containerd[1436]: time="2025-01-17T12:19:46.824646233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:46.824887 containerd[1436]: time="2025-01-17T12:19:46.824802833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:46.824887 containerd[1436]: time="2025-01-17T12:19:46.824866913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:46.824887 containerd[1436]: time="2025-01-17T12:19:46.824883033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:46.826918 containerd[1436]: time="2025-01-17T12:19:46.824949793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:46.829030 containerd[1436]: time="2025-01-17T12:19:46.828551553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:46.829030 containerd[1436]: time="2025-01-17T12:19:46.828918953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:46.829030 containerd[1436]: time="2025-01-17T12:19:46.828931553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:46.829030 containerd[1436]: time="2025-01-17T12:19:46.829005433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:46.850080 systemd[1]: Started cri-containerd-1de82cd350d4a68af9e4c9c4622a590fa3c0ef27deae873ec22b12e83e4c09a6.scope - libcontainer container 1de82cd350d4a68af9e4c9c4622a590fa3c0ef27deae873ec22b12e83e4c09a6. Jan 17 12:19:46.854406 systemd[1]: Started cri-containerd-3b0e49288bfafbd2c620366d8578bb989fef90b1f7dfd222ed64f72a39f19ea6.scope - libcontainer container 3b0e49288bfafbd2c620366d8578bb989fef90b1f7dfd222ed64f72a39f19ea6. Jan 17 12:19:46.855704 systemd[1]: Started cri-containerd-a10d4d7e2368ed8280c5a2d8f459415057374eaa1ab27a51f3aa154914929d4a.scope - libcontainer container a10d4d7e2368ed8280c5a2d8f459415057374eaa1ab27a51f3aa154914929d4a. 
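Editor's note: each "Started cri-containerd-..." scope above corresponds to one RunPodSandbox call from the kubelet to containerd over the CRI. A rough sketch of such a call using the k8s.io/cri-api client follows; the socket path is an assumption and the sandbox config is trimmed to the metadata visible in the log, so this is an illustration rather than the kubelet's code path:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket; adjust for the host in question.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Metadata mirrors the kube-apiserver-localhost entry in the log.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-apiserver-localhost",
				Uid:       "15e2315aace825a7532dd9768ecd8aa6",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox id: %s", resp.PodSandboxId)
}
```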
Jan 17 12:19:46.882992 containerd[1436]: time="2025-01-17T12:19:46.882803833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1de82cd350d4a68af9e4c9c4622a590fa3c0ef27deae873ec22b12e83e4c09a6\"" Jan 17 12:19:46.884759 kubelet[2196]: E0117 12:19:46.884463 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:46.885304 containerd[1436]: time="2025-01-17T12:19:46.885274033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:15e2315aace825a7532dd9768ecd8aa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a10d4d7e2368ed8280c5a2d8f459415057374eaa1ab27a51f3aa154914929d4a\"" Jan 17 12:19:46.886306 kubelet[2196]: E0117 12:19:46.886143 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:46.888136 containerd[1436]: time="2025-01-17T12:19:46.888105713Z" level=info msg="CreateContainer within sandbox \"1de82cd350d4a68af9e4c9c4622a590fa3c0ef27deae873ec22b12e83e4c09a6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:19:46.888692 containerd[1436]: time="2025-01-17T12:19:46.888666953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b0e49288bfafbd2c620366d8578bb989fef90b1f7dfd222ed64f72a39f19ea6\"" Jan 17 12:19:46.889800 kubelet[2196]: E0117 12:19:46.889780 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:46.890534 containerd[1436]: time="2025-01-17T12:19:46.890184033Z" level=info msg="CreateContainer within sandbox \"a10d4d7e2368ed8280c5a2d8f459415057374eaa1ab27a51f3aa154914929d4a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:19:46.891988 containerd[1436]: time="2025-01-17T12:19:46.891959873Z" level=info msg="CreateContainer within sandbox \"3b0e49288bfafbd2c620366d8578bb989fef90b1f7dfd222ed64f72a39f19ea6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:19:46.902690 containerd[1436]: time="2025-01-17T12:19:46.902653913Z" level=info msg="CreateContainer within sandbox \"1de82cd350d4a68af9e4c9c4622a590fa3c0ef27deae873ec22b12e83e4c09a6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de5a48becdf383a5d97cc6cb8ae78878dab87a84aa06fc7d1c587422eebee7be\"" Jan 17 12:19:46.903495 containerd[1436]: time="2025-01-17T12:19:46.903439993Z" level=info msg="StartContainer for \"de5a48becdf383a5d97cc6cb8ae78878dab87a84aa06fc7d1c587422eebee7be\"" Jan 17 12:19:46.906220 containerd[1436]: time="2025-01-17T12:19:46.906163433Z" level=info msg="CreateContainer within sandbox \"3b0e49288bfafbd2c620366d8578bb989fef90b1f7dfd222ed64f72a39f19ea6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"101b8c729f6640ae6a7700ed890b4f2319245532b77eeb5cdde7138e82d52cc0\"" Jan 17 12:19:46.906877 containerd[1436]: time="2025-01-17T12:19:46.906566913Z" level=info msg="StartContainer for \"101b8c729f6640ae6a7700ed890b4f2319245532b77eeb5cdde7138e82d52cc0\"" Jan 17 
12:19:46.909025 containerd[1436]: time="2025-01-17T12:19:46.908985153Z" level=info msg="CreateContainer within sandbox \"a10d4d7e2368ed8280c5a2d8f459415057374eaa1ab27a51f3aa154914929d4a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23a05ab40fc7926e8d256a6ec41dc642013718cbe85a7678ac23ae418edff94c\"" Jan 17 12:19:46.909394 containerd[1436]: time="2025-01-17T12:19:46.909359993Z" level=info msg="StartContainer for \"23a05ab40fc7926e8d256a6ec41dc642013718cbe85a7678ac23ae418edff94c\"" Jan 17 12:19:46.923568 kubelet[2196]: W0117 12:19:46.923511 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:46.923724 kubelet[2196]: E0117 12:19:46.923697 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:46.924980 systemd[1]: Started cri-containerd-de5a48becdf383a5d97cc6cb8ae78878dab87a84aa06fc7d1c587422eebee7be.scope - libcontainer container de5a48becdf383a5d97cc6cb8ae78878dab87a84aa06fc7d1c587422eebee7be. Jan 17 12:19:46.927309 kubelet[2196]: W0117 12:19:46.927181 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:46.927309 kubelet[2196]: E0117 12:19:46.927231 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Jan 17 12:19:46.929890 systemd[1]: Started cri-containerd-23a05ab40fc7926e8d256a6ec41dc642013718cbe85a7678ac23ae418edff94c.scope - libcontainer container 23a05ab40fc7926e8d256a6ec41dc642013718cbe85a7678ac23ae418edff94c. Jan 17 12:19:46.932277 systemd[1]: Started cri-containerd-101b8c729f6640ae6a7700ed890b4f2319245532b77eeb5cdde7138e82d52cc0.scope - libcontainer container 101b8c729f6640ae6a7700ed890b4f2319245532b77eeb5cdde7138e82d52cc0. 
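Editor's note: the reflector warnings above are the kubelet's client-go informers failing to list objects while the API server container is still coming up. The Node list, for example, is equivalent to the following client-go call, shown as a sketch with an assumed kubeconfig path rather than the kubelet's own informer wiring:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors the failed request in the log:
	// /api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=localhost",
		Limit:         500,
	})
	if err != nil {
		log.Fatal(err) // "connection refused" until the apiserver is reachable
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```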
Jan 17 12:19:46.954826 kubelet[2196]: E0117 12:19:46.954765 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="1.6s" Jan 17 12:19:47.010571 containerd[1436]: time="2025-01-17T12:19:47.010387353Z" level=info msg="StartContainer for \"101b8c729f6640ae6a7700ed890b4f2319245532b77eeb5cdde7138e82d52cc0\" returns successfully" Jan 17 12:19:47.010571 containerd[1436]: time="2025-01-17T12:19:47.010493873Z" level=info msg="StartContainer for \"de5a48becdf383a5d97cc6cb8ae78878dab87a84aa06fc7d1c587422eebee7be\" returns successfully" Jan 17 12:19:47.010571 containerd[1436]: time="2025-01-17T12:19:47.010401313Z" level=info msg="StartContainer for \"23a05ab40fc7926e8d256a6ec41dc642013718cbe85a7678ac23ae418edff94c\" returns successfully" Jan 17 12:19:47.058268 kubelet[2196]: I0117 12:19:47.057948 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:47.058268 kubelet[2196]: E0117 12:19:47.058240 2196 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 17 12:19:47.580586 kubelet[2196]: E0117 12:19:47.580172 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:47.580586 kubelet[2196]: E0117 12:19:47.580550 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:47.581362 kubelet[2196]: E0117 12:19:47.581338 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:48.585720 kubelet[2196]: E0117 12:19:48.585649 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:48.659346 kubelet[2196]: I0117 12:19:48.659272 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:48.924170 kubelet[2196]: E0117 12:19:48.924127 2196 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:19:48.988919 kubelet[2196]: I0117 12:19:48.988875 2196 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:19:49.023085 kubelet[2196]: E0117 12:19:49.022976 2196 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181b7a26259b03d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:19:45.539720153 +0000 UTC m=+0.718850281,LastTimestamp:2025-01-17 12:19:45.539720153 +0000 UTC m=+0.718850281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 
12:19:49.542161 kubelet[2196]: I0117 12:19:49.541088 2196 apiserver.go:52] "Watching apiserver" Jan 17 12:19:49.552442 kubelet[2196]: I0117 12:19:49.552403 2196 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:19:50.781429 systemd[1]: Reloading requested from client PID 2472 ('systemctl') (unit session-7.scope)... Jan 17 12:19:50.781453 systemd[1]: Reloading... Jan 17 12:19:50.837889 zram_generator::config[2511]: No configuration found. Jan 17 12:19:50.926766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:50.990627 systemd[1]: Reloading finished in 208 ms. Jan 17 12:19:51.025181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:51.039524 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:19:51.040292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:51.040336 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 116.3M memory peak, 0B memory swap peak. Jan 17 12:19:51.050079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:51.145712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:51.150914 (kubelet)[2553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:19:51.187716 kubelet[2553]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:51.187716 kubelet[2553]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:19:51.187716 kubelet[2553]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:51.188070 kubelet[2553]: I0117 12:19:51.187752 2553 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:19:51.192866 kubelet[2553]: I0117 12:19:51.192768 2553 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:19:51.192866 kubelet[2553]: I0117 12:19:51.192794 2553 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:19:51.192974 kubelet[2553]: I0117 12:19:51.192965 2553 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:19:51.194238 kubelet[2553]: I0117 12:19:51.194212 2553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:19:51.195427 kubelet[2553]: I0117 12:19:51.195312 2553 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:19:51.204116 kubelet[2553]: I0117 12:19:51.204093 2553 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:19:51.204393 kubelet[2553]: I0117 12:19:51.204365 2553 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:19:51.204677 kubelet[2553]: I0117 12:19:51.204514 2553 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:19:51.204870 kubelet[2553]: I0117 12:19:51.204783 2553 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:19:51.204870 kubelet[2553]: I0117 12:19:51.204799 2553 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:19:51.204870 kubelet[2553]: I0117 12:19:51.204832 2553 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:51.205062 kubelet[2553]: I0117 12:19:51.205048 2553 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:19:51.205118 kubelet[2553]: I0117 12:19:51.205110 2553 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:19:51.205205 kubelet[2553]: I0117 12:19:51.205195 2553 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:19:51.205557 kubelet[2553]: I0117 12:19:51.205257 2553 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:19:51.206253 kubelet[2553]: I0117 12:19:51.206228 2553 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:19:51.206486 kubelet[2553]: I0117 12:19:51.206469 2553 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:19:51.207048 kubelet[2553]: I0117 12:19:51.207019 2553 server.go:1264] "Started kubelet" Jan 17 12:19:51.207229 kubelet[2553]: I0117 12:19:51.207198 2553 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:19:51.207545 kubelet[2553]: I0117 12:19:51.207482 2553 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:19:51.207811 kubelet[2553]: I0117 12:19:51.207797 2553 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:19:51.209953 kubelet[2553]: I0117 12:19:51.209910 2553 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:19:51.210385 kubelet[2553]: I0117 12:19:51.210371 2553 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:19:51.218644 kubelet[2553]: I0117 12:19:51.218626 2553 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:19:51.220400 kubelet[2553]: I0117 12:19:51.218833 2553 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:19:51.220886 kubelet[2553]: I0117 12:19:51.220820 2553 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:19:51.221164 kubelet[2553]: I0117 12:19:51.221139 2553 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:19:51.221893 kubelet[2553]: I0117 12:19:51.221871 2553 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:19:51.222500 kubelet[2553]: E0117 12:19:51.222477 2553 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:19:51.222670 kubelet[2553]: I0117 12:19:51.222651 2553 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:19:51.231956 kubelet[2553]: I0117 12:19:51.231839 2553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:19:51.232888 kubelet[2553]: I0117 12:19:51.232826 2553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:19:51.232888 kubelet[2553]: I0117 12:19:51.232875 2553 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:19:51.232888 kubelet[2553]: I0117 12:19:51.232896 2553 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:19:51.233005 kubelet[2553]: E0117 12:19:51.232938 2553 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:19:51.255037 kubelet[2553]: I0117 12:19:51.255017 2553 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:19:51.255037 kubelet[2553]: I0117 12:19:51.255034 2553 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:19:51.255143 kubelet[2553]: I0117 12:19:51.255051 2553 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:51.255199 kubelet[2553]: I0117 12:19:51.255182 2553 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:19:51.255229 kubelet[2553]: I0117 12:19:51.255198 2553 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:19:51.255229 kubelet[2553]: I0117 12:19:51.255213 2553 policy_none.go:49] "None policy: Start" Jan 17 12:19:51.256031 kubelet[2553]: I0117 12:19:51.256001 2553 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:19:51.256031 kubelet[2553]: I0117 12:19:51.256032 2553 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:19:51.257835 kubelet[2553]: I0117 12:19:51.256188 2553 state_mem.go:75] "Updated machine memory state" Jan 17 12:19:51.262061 kubelet[2553]: I0117 12:19:51.262030 2553 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:19:51.262225 
kubelet[2553]: I0117 12:19:51.262187 2553 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:19:51.262407 kubelet[2553]: I0117 12:19:51.262282 2553 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:19:51.322572 kubelet[2553]: I0117 12:19:51.322481 2553 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:51.328747 kubelet[2553]: I0117 12:19:51.328605 2553 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:19:51.328747 kubelet[2553]: I0117 12:19:51.328677 2553 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:19:51.333780 kubelet[2553]: I0117 12:19:51.333722 2553 topology_manager.go:215] "Topology Admit Handler" podUID="15e2315aace825a7532dd9768ecd8aa6" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:19:51.333885 kubelet[2553]: I0117 12:19:51.333865 2553 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:19:51.333937 kubelet[2553]: I0117 12:19:51.333912 2553 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:19:51.422829 kubelet[2553]: I0117 12:19:51.422772 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:51.422829 kubelet[2553]: I0117 12:19:51.422812 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:51.422829 kubelet[2553]: I0117 12:19:51.422834 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:19:51.423044 kubelet[2553]: I0117 12:19:51.422886 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15e2315aace825a7532dd9768ecd8aa6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"15e2315aace825a7532dd9768ecd8aa6\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:51.423044 kubelet[2553]: I0117 12:19:51.422904 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:51.423044 kubelet[2553]: I0117 12:19:51.422924 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:51.423044 kubelet[2553]: I0117 12:19:51.422939 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15e2315aace825a7532dd9768ecd8aa6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"15e2315aace825a7532dd9768ecd8aa6\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:51.423044 kubelet[2553]: I0117 12:19:51.422955 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:51.423160 kubelet[2553]: I0117 12:19:51.422969 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15e2315aace825a7532dd9768ecd8aa6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"15e2315aace825a7532dd9768ecd8aa6\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:51.722739 kubelet[2553]: E0117 12:19:51.722690 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:51.722890 kubelet[2553]: E0117 12:19:51.722691 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:51.722890 kubelet[2553]: E0117 12:19:51.722690 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:52.206378 kubelet[2553]: I0117 12:19:52.206336 2553 apiserver.go:52] "Watching apiserver" Jan 17 12:19:52.220759 kubelet[2553]: I0117 12:19:52.220722 2553 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:19:52.244184 kubelet[2553]: E0117 12:19:52.243735 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:52.248477 kubelet[2553]: E0117 12:19:52.248423 2553 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 12:19:52.249072 kubelet[2553]: E0117 12:19:52.249046 2553 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:52.249646 kubelet[2553]: E0117 12:19:52.249613 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:52.250743 kubelet[2553]: E0117 12:19:52.250723 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:52.261478 
kubelet[2553]: I0117 12:19:52.261425 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.261412033 podStartE2EDuration="1.261412033s" podCreationTimestamp="2025-01-17 12:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:52.261392113 +0000 UTC m=+1.107453361" watchObservedRunningTime="2025-01-17 12:19:52.261412033 +0000 UTC m=+1.107473281" Jan 17 12:19:52.269146 kubelet[2553]: I0117 12:19:52.268992 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.268977633 podStartE2EDuration="1.268977633s" podCreationTimestamp="2025-01-17 12:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:52.268939873 +0000 UTC m=+1.115001121" watchObservedRunningTime="2025-01-17 12:19:52.268977633 +0000 UTC m=+1.115038881" Jan 17 12:19:52.284925 kubelet[2553]: I0117 12:19:52.284856 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.284810033 podStartE2EDuration="1.284810033s" podCreationTimestamp="2025-01-17 12:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:52.275543073 +0000 UTC m=+1.121604321" watchObservedRunningTime="2025-01-17 12:19:52.284810033 +0000 UTC m=+1.130871281" Jan 17 12:19:53.256688 kubelet[2553]: E0117 12:19:53.256640 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:53.256688 kubelet[2553]: E0117 12:19:53.256673 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:54.498040 kubelet[2553]: E0117 12:19:54.497953 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:55.978985 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:55.980993 sshd[1613]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:55.984399 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:57474.service: Deactivated successfully. Jan 17 12:19:55.985917 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:19:55.986072 systemd[1]: session-7.scope: Consumed 6.750s CPU time, 189.3M memory peak, 0B memory swap peak. Jan 17 12:19:55.986491 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:19:55.987554 systemd-logind[1420]: Removed session 7. 
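Editor's note: the "Failed creating a mirror pod ... already exists" errors above are a side effect of the kubelet restart earlier in the log; the mirror pods for the static control-plane pods were already registered with the API server, so the re-created ones conflict harmlessly. A library-style Go sketch of tolerating that conflict with client-go (the helper name is hypothetical, not kubelet code):

```go
package mirror

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createMirrorPod sketches the benign-conflict handling: if a restarted node
// agent re-creates a mirror pod that already exists, the AlreadyExists error
// can simply be treated as success.
func createMirrorPod(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod) error {
	_, err := cs.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		fmt.Printf("mirror pod %s already exists, nothing to do\n", pod.Name)
		return nil
	}
	return err
}
```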
Jan 17 12:19:59.414213 kubelet[2553]: E0117 12:19:59.414143 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:00.155956 kubelet[2553]: E0117 12:20:00.155922 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:00.255570 kubelet[2553]: E0117 12:20:00.255316 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:00.255570 kubelet[2553]: E0117 12:20:00.255506 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:04.505730 kubelet[2553]: E0117 12:20:04.505692 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:05.239422 kubelet[2553]: I0117 12:20:05.239378 2553 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:20:05.251768 containerd[1436]: time="2025-01-17T12:20:05.251706004Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:20:05.252208 kubelet[2553]: I0117 12:20:05.252006 2553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:20:05.379197 kubelet[2553]: I0117 12:20:05.379159 2553 topology_manager.go:215] "Topology Admit Handler" podUID="84cc6615-1e71-4c23-839f-e84d08c0ec78" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-v2ngr" Jan 17 12:20:05.388027 systemd[1]: Created slice kubepods-besteffort-pod84cc6615_1e71_4c23_839f_e84d08c0ec78.slice - libcontainer container kubepods-besteffort-pod84cc6615_1e71_4c23_839f_e84d08c0ec78.slice. Jan 17 12:20:05.420749 kubelet[2553]: I0117 12:20:05.420680 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84cc6615-1e71-4c23-839f-e84d08c0ec78-var-lib-calico\") pod \"tigera-operator-7bc55997bb-v2ngr\" (UID: \"84cc6615-1e71-4c23-839f-e84d08c0ec78\") " pod="tigera-operator/tigera-operator-7bc55997bb-v2ngr" Jan 17 12:20:05.420749 kubelet[2553]: I0117 12:20:05.420719 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fqd\" (UniqueName: \"kubernetes.io/projected/84cc6615-1e71-4c23-839f-e84d08c0ec78-kube-api-access-87fqd\") pod \"tigera-operator-7bc55997bb-v2ngr\" (UID: \"84cc6615-1e71-4c23-839f-e84d08c0ec78\") " pod="tigera-operator/tigera-operator-7bc55997bb-v2ngr" Jan 17 12:20:05.545905 update_engine[1424]: I20250117 12:20:05.545786 1424 update_attempter.cc:509] Updating boot flags... 
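Editor's note: the "Updating runtime config through cri with podcidr" and "Updating Pod CIDR" entries above record the kubelet handing the node's pod CIDR (192.168.0.0/24) down to the container runtime. A sketch of the equivalent CRI call, with the containerd socket path assumed and error handling simplified:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket; adjust for the host in question.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// Push the node's pod CIDR down to the runtime, matching the CIDR in the log.
	_, err = client.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("runtime config updated")
}
```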
Jan 17 12:20:05.571985 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2649) Jan 17 12:20:05.609875 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2647) Jan 17 12:20:05.636008 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2647) Jan 17 12:20:05.717600 containerd[1436]: time="2025-01-17T12:20:05.717329843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-v2ngr,Uid:84cc6615-1e71-4c23-839f-e84d08c0ec78,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:20:05.722477 kubelet[2553]: I0117 12:20:05.722439 2553 topology_manager.go:215] "Topology Admit Handler" podUID="5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319" podNamespace="kube-system" podName="kube-proxy-f66gx" Jan 17 12:20:05.738383 systemd[1]: Created slice kubepods-besteffort-pod5fa761b7_c9de_4e8e_8cc7_a4cbc9f6d319.slice - libcontainer container kubepods-besteffort-pod5fa761b7_c9de_4e8e_8cc7_a4cbc9f6d319.slice. Jan 17 12:20:05.752834 containerd[1436]: time="2025-01-17T12:20:05.752718464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:05.752834 containerd[1436]: time="2025-01-17T12:20:05.752777784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:05.753236 containerd[1436]: time="2025-01-17T12:20:05.752804744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:05.754076 containerd[1436]: time="2025-01-17T12:20:05.753339064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:05.777017 systemd[1]: Started cri-containerd-213402ec11b764bea64d3939e2c5a8cacfda3a3b213bbe199fb632ffd9f73976.scope - libcontainer container 213402ec11b764bea64d3939e2c5a8cacfda3a3b213bbe199fb632ffd9f73976. 
Jan 17 12:20:05.802885 containerd[1436]: time="2025-01-17T12:20:05.802664574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-v2ngr,Uid:84cc6615-1e71-4c23-839f-e84d08c0ec78,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"213402ec11b764bea64d3939e2c5a8cacfda3a3b213bbe199fb632ffd9f73976\"" Jan 17 12:20:05.806492 containerd[1436]: time="2025-01-17T12:20:05.806443616Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:20:05.822956 kubelet[2553]: I0117 12:20:05.822919 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319-kube-proxy\") pod \"kube-proxy-f66gx\" (UID: \"5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319\") " pod="kube-system/kube-proxy-f66gx" Jan 17 12:20:05.823049 kubelet[2553]: I0117 12:20:05.822961 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319-xtables-lock\") pod \"kube-proxy-f66gx\" (UID: \"5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319\") " pod="kube-system/kube-proxy-f66gx" Jan 17 12:20:05.823049 kubelet[2553]: I0117 12:20:05.822980 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319-lib-modules\") pod \"kube-proxy-f66gx\" (UID: \"5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319\") " pod="kube-system/kube-proxy-f66gx" Jan 17 12:20:05.823049 kubelet[2553]: I0117 12:20:05.822996 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d96xj\" (UniqueName: \"kubernetes.io/projected/5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319-kube-api-access-d96xj\") pod \"kube-proxy-f66gx\" (UID: \"5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319\") " pod="kube-system/kube-proxy-f66gx" Jan 17 12:20:06.041160 kubelet[2553]: E0117 12:20:06.041117 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:06.041959 containerd[1436]: time="2025-01-17T12:20:06.041562035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f66gx,Uid:5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:06.061066 containerd[1436]: time="2025-01-17T12:20:06.060592726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:06.061066 containerd[1436]: time="2025-01-17T12:20:06.060986606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:06.061066 containerd[1436]: time="2025-01-17T12:20:06.061000606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:06.061357 containerd[1436]: time="2025-01-17T12:20:06.061076566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:06.085059 systemd[1]: Started cri-containerd-fd3e91b36cc2f5985069982c1106f9470d5eac3e7f0402cded4725e506a3eb95.scope - libcontainer container fd3e91b36cc2f5985069982c1106f9470d5eac3e7f0402cded4725e506a3eb95. 
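Editor's note: the reconciler's "VerifyControllerAttachedVolume started for volume ..." entries above refer to the volumes declared in the kube-proxy pod spec. A Go sketch of volume declarations of that shape using k8s.io/api types; the host paths shown are the conventional kube-proxy ones and are assumptions here, since the log records only the volume names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	fileOrCreate := corev1.HostPathFileOrCreate

	volumes := []corev1.Volume{
		{
			// ConfigMap-backed volume named after the kube-proxy ConfigMap.
			Name: "kube-proxy",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
		{
			// Host path assumed; typically the iptables lock file.
			Name: "xtables-lock",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock", Type: &fileOrCreate},
			},
		},
		{
			// Host path assumed; kernel modules directory.
			Name: "lib-modules",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
			},
		},
		// The kube-api-access-* projected token volume seen in the log is
		// injected automatically by the API server's ServiceAccount admission,
		// not declared by hand.
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}
```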
Jan 17 12:20:06.102383 containerd[1436]: time="2025-01-17T12:20:06.102331189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f66gx,Uid:5fa761b7-c9de-4e8e-8cc7-a4cbc9f6d319,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd3e91b36cc2f5985069982c1106f9470d5eac3e7f0402cded4725e506a3eb95\"" Jan 17 12:20:06.103112 kubelet[2553]: E0117 12:20:06.102919 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:06.113271 containerd[1436]: time="2025-01-17T12:20:06.113237915Z" level=info msg="CreateContainer within sandbox \"fd3e91b36cc2f5985069982c1106f9470d5eac3e7f0402cded4725e506a3eb95\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:20:06.142530 containerd[1436]: time="2025-01-17T12:20:06.142453851Z" level=info msg="CreateContainer within sandbox \"fd3e91b36cc2f5985069982c1106f9470d5eac3e7f0402cded4725e506a3eb95\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1adbe789e1409ac56cd2d77af9e5dcb4574791ec383b76cc3784d1f87145170f\"" Jan 17 12:20:06.145729 containerd[1436]: time="2025-01-17T12:20:06.145697893Z" level=info msg="StartContainer for \"1adbe789e1409ac56cd2d77af9e5dcb4574791ec383b76cc3784d1f87145170f\"" Jan 17 12:20:06.168989 systemd[1]: Started cri-containerd-1adbe789e1409ac56cd2d77af9e5dcb4574791ec383b76cc3784d1f87145170f.scope - libcontainer container 1adbe789e1409ac56cd2d77af9e5dcb4574791ec383b76cc3784d1f87145170f. Jan 17 12:20:06.195006 containerd[1436]: time="2025-01-17T12:20:06.194968041Z" level=info msg="StartContainer for \"1adbe789e1409ac56cd2d77af9e5dcb4574791ec383b76cc3784d1f87145170f\" returns successfully" Jan 17 12:20:06.271091 kubelet[2553]: E0117 12:20:06.270977 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:06.281861 kubelet[2553]: I0117 12:20:06.281795 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f66gx" podStartSLOduration=1.281777209 podStartE2EDuration="1.281777209s" podCreationTimestamp="2025-01-17 12:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:06.281674809 +0000 UTC m=+15.127736057" watchObservedRunningTime="2025-01-17 12:20:06.281777209 +0000 UTC m=+15.127838457" Jan 17 12:20:10.191078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501558447.mount: Deactivated successfully. 
Jan 17 12:20:10.644218 containerd[1436]: time="2025-01-17T12:20:10.643971770Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:10.645222 containerd[1436]: time="2025-01-17T12:20:10.645036171Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125936" Jan 17 12:20:10.645940 containerd[1436]: time="2025-01-17T12:20:10.645907331Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:10.648860 containerd[1436]: time="2025-01-17T12:20:10.648817252Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:10.649391 containerd[1436]: time="2025-01-17T12:20:10.649357133Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 4.842855597s" Jan 17 12:20:10.649463 containerd[1436]: time="2025-01-17T12:20:10.649391973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 17 12:20:10.659244 containerd[1436]: time="2025-01-17T12:20:10.659198257Z" level=info msg="CreateContainer within sandbox \"213402ec11b764bea64d3939e2c5a8cacfda3a3b213bbe199fb632ffd9f73976\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:20:10.671069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3368559897.mount: Deactivated successfully. Jan 17 12:20:10.678942 containerd[1436]: time="2025-01-17T12:20:10.678895825Z" level=info msg="CreateContainer within sandbox \"213402ec11b764bea64d3939e2c5a8cacfda3a3b213bbe199fb632ffd9f73976\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0e3039441af15cf808e5c51198d027dd6de710ed1663253cd337bcb250ffffc7\"" Jan 17 12:20:10.679620 containerd[1436]: time="2025-01-17T12:20:10.679447106Z" level=info msg="StartContainer for \"0e3039441af15cf808e5c51198d027dd6de710ed1663253cd337bcb250ffffc7\"" Jan 17 12:20:10.707032 systemd[1]: Started cri-containerd-0e3039441af15cf808e5c51198d027dd6de710ed1663253cd337bcb250ffffc7.scope - libcontainer container 0e3039441af15cf808e5c51198d027dd6de710ed1663253cd337bcb250ffffc7. 
Jan 17 12:20:10.728185 containerd[1436]: time="2025-01-17T12:20:10.728053167Z" level=info msg="StartContainer for \"0e3039441af15cf808e5c51198d027dd6de710ed1663253cd337bcb250ffffc7\" returns successfully" Jan 17 12:20:11.306059 kubelet[2553]: I0117 12:20:11.305990 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-v2ngr" podStartSLOduration=1.4577902489999999 podStartE2EDuration="6.305968328s" podCreationTimestamp="2025-01-17 12:20:05 +0000 UTC" firstStartedPulling="2025-01-17 12:20:05.805988736 +0000 UTC m=+14.652049984" lastFinishedPulling="2025-01-17 12:20:10.654166815 +0000 UTC m=+19.500228063" observedRunningTime="2025-01-17 12:20:11.305831008 +0000 UTC m=+20.151892256" watchObservedRunningTime="2025-01-17 12:20:11.305968328 +0000 UTC m=+20.152029536" Jan 17 12:20:14.874139 kubelet[2553]: I0117 12:20:14.874090 2553 topology_manager.go:215] "Topology Admit Handler" podUID="c0270584-604f-4f41-951e-5cbb44836516" podNamespace="calico-system" podName="calico-typha-7955c75468-7fhxw" Jan 17 12:20:14.885004 kubelet[2553]: I0117 12:20:14.884958 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgtck\" (UniqueName: \"kubernetes.io/projected/c0270584-604f-4f41-951e-5cbb44836516-kube-api-access-lgtck\") pod \"calico-typha-7955c75468-7fhxw\" (UID: \"c0270584-604f-4f41-951e-5cbb44836516\") " pod="calico-system/calico-typha-7955c75468-7fhxw" Jan 17 12:20:14.885004 kubelet[2553]: I0117 12:20:14.885001 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0270584-604f-4f41-951e-5cbb44836516-tigera-ca-bundle\") pod \"calico-typha-7955c75468-7fhxw\" (UID: \"c0270584-604f-4f41-951e-5cbb44836516\") " pod="calico-system/calico-typha-7955c75468-7fhxw" Jan 17 12:20:14.885247 kubelet[2553]: I0117 12:20:14.885025 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c0270584-604f-4f41-951e-5cbb44836516-typha-certs\") pod \"calico-typha-7955c75468-7fhxw\" (UID: \"c0270584-604f-4f41-951e-5cbb44836516\") " pod="calico-system/calico-typha-7955c75468-7fhxw" Jan 17 12:20:14.888532 systemd[1]: Created slice kubepods-besteffort-podc0270584_604f_4f41_951e_5cbb44836516.slice - libcontainer container kubepods-besteffort-podc0270584_604f_4f41_951e_5cbb44836516.slice. Jan 17 12:20:14.930970 kubelet[2553]: I0117 12:20:14.930911 2553 topology_manager.go:215] "Topology Admit Handler" podUID="a37de252-7ccd-48ab-9ce4-f15fbcea1a68" podNamespace="calico-system" podName="calico-node-gx4lj" Jan 17 12:20:14.938099 systemd[1]: Created slice kubepods-besteffort-poda37de252_7ccd_48ab_9ce4_f15fbcea1a68.slice - libcontainer container kubepods-besteffort-poda37de252_7ccd_48ab_9ce4_f15fbcea1a68.slice. 
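Editor's note: the tigera-operator startup-latency entry above is internally consistent: the end-to-end duration is the observed running time minus the pod creation timestamp, and the SLO duration additionally subtracts the image pull window, which is exactly what the logged numbers show. A quick check of that arithmetic, with the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-17 12:20:05 +0000 UTC")
	firstPull := mustParse("2025-01-17 12:20:05.805988736 +0000 UTC")
	lastPull := mustParse("2025-01-17 12:20:10.654166815 +0000 UTC")
	running := mustParse("2025-01-17 12:20:11.305968328 +0000 UTC")

	e2e := running.Sub(created)          // 6.305968328s, as logged
	slo := e2e - lastPull.Sub(firstPull) // ~1.457790249s, as logged
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```

For the static control-plane pods earlier in the log the pull timestamps are the zero time (nothing was pulled), so their SLO and end-to-end durations coincide.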
Jan 17 12:20:14.985218 kubelet[2553]: I0117 12:20:14.985170 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-lib-modules\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985218 kubelet[2553]: I0117 12:20:14.985207 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-policysync\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985364 kubelet[2553]: I0117 12:20:14.985227 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-tigera-ca-bundle\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985364 kubelet[2553]: I0117 12:20:14.985242 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-cni-bin-dir\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985364 kubelet[2553]: I0117 12:20:14.985258 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vrl\" (UniqueName: \"kubernetes.io/projected/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-kube-api-access-t9vrl\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985364 kubelet[2553]: I0117 12:20:14.985276 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-var-lib-calico\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985364 kubelet[2553]: I0117 12:20:14.985290 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-flexvol-driver-host\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985499 kubelet[2553]: I0117 12:20:14.985305 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-xtables-lock\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985499 kubelet[2553]: I0117 12:20:14.985322 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-node-certs\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985499 kubelet[2553]: I0117 12:20:14.985336 2553 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-var-run-calico\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985499 kubelet[2553]: I0117 12:20:14.985350 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-cni-net-dir\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:14.985499 kubelet[2553]: I0117 12:20:14.985364 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a37de252-7ccd-48ab-9ce4-f15fbcea1a68-cni-log-dir\") pod \"calico-node-gx4lj\" (UID: \"a37de252-7ccd-48ab-9ce4-f15fbcea1a68\") " pod="calico-system/calico-node-gx4lj" Jan 17 12:20:15.058896 kubelet[2553]: I0117 12:20:15.058853 2553 topology_manager.go:215] "Topology Admit Handler" podUID="498cd002-4959-4e1e-94d0-79dfca8e8ebe" podNamespace="calico-system" podName="csi-node-driver-hjdnv" Jan 17 12:20:15.059685 kubelet[2553]: E0117 12:20:15.059175 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hjdnv" podUID="498cd002-4959-4e1e-94d0-79dfca8e8ebe" Jan 17 12:20:15.085678 kubelet[2553]: I0117 12:20:15.085626 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/498cd002-4959-4e1e-94d0-79dfca8e8ebe-registration-dir\") pod \"csi-node-driver-hjdnv\" (UID: \"498cd002-4959-4e1e-94d0-79dfca8e8ebe\") " pod="calico-system/csi-node-driver-hjdnv" Jan 17 12:20:15.086455 kubelet[2553]: I0117 12:20:15.086336 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm5hl\" (UniqueName: \"kubernetes.io/projected/498cd002-4959-4e1e-94d0-79dfca8e8ebe-kube-api-access-vm5hl\") pod \"csi-node-driver-hjdnv\" (UID: \"498cd002-4959-4e1e-94d0-79dfca8e8ebe\") " pod="calico-system/csi-node-driver-hjdnv" Jan 17 12:20:15.086455 kubelet[2553]: I0117 12:20:15.086389 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/498cd002-4959-4e1e-94d0-79dfca8e8ebe-socket-dir\") pod \"csi-node-driver-hjdnv\" (UID: \"498cd002-4959-4e1e-94d0-79dfca8e8ebe\") " pod="calico-system/csi-node-driver-hjdnv" Jan 17 12:20:15.086455 kubelet[2553]: I0117 12:20:15.086418 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/498cd002-4959-4e1e-94d0-79dfca8e8ebe-kubelet-dir\") pod \"csi-node-driver-hjdnv\" (UID: \"498cd002-4959-4e1e-94d0-79dfca8e8ebe\") " pod="calico-system/csi-node-driver-hjdnv" Jan 17 12:20:15.087165 kubelet[2553]: I0117 12:20:15.087001 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/498cd002-4959-4e1e-94d0-79dfca8e8ebe-varrun\") pod 
\"csi-node-driver-hjdnv\" (UID: \"498cd002-4959-4e1e-94d0-79dfca8e8ebe\") " pod="calico-system/csi-node-driver-hjdnv" Jan 17 12:20:15.089390 kubelet[2553]: E0117 12:20:15.089286 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.089390 kubelet[2553]: W0117 12:20:15.089311 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.089390 kubelet[2553]: E0117 12:20:15.089337 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.089697 kubelet[2553]: E0117 12:20:15.089648 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.089745 kubelet[2553]: W0117 12:20:15.089698 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.089771 kubelet[2553]: E0117 12:20:15.089743 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.090605 kubelet[2553]: E0117 12:20:15.090189 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.090605 kubelet[2553]: W0117 12:20:15.090205 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.090605 kubelet[2553]: E0117 12:20:15.090254 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.090605 kubelet[2553]: E0117 12:20:15.090452 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.090605 kubelet[2553]: W0117 12:20:15.090462 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.090605 kubelet[2553]: E0117 12:20:15.090498 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.091003 kubelet[2553]: E0117 12:20:15.090968 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.091003 kubelet[2553]: W0117 12:20:15.090987 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.091148 kubelet[2553]: E0117 12:20:15.091078 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.091346 kubelet[2553]: E0117 12:20:15.091331 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.092692 kubelet[2553]: W0117 12:20:15.091345 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.092773 kubelet[2553]: E0117 12:20:15.092721 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.093046 kubelet[2553]: E0117 12:20:15.092899 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.093046 kubelet[2553]: W0117 12:20:15.092913 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.093046 kubelet[2553]: E0117 12:20:15.092941 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.093399 kubelet[2553]: E0117 12:20:15.093047 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.093399 kubelet[2553]: W0117 12:20:15.093071 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.093399 kubelet[2553]: E0117 12:20:15.093118 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.093399 kubelet[2553]: E0117 12:20:15.093304 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.093399 kubelet[2553]: W0117 12:20:15.093315 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.093399 kubelet[2553]: E0117 12:20:15.093388 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.094407 kubelet[2553]: E0117 12:20:15.093568 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.094407 kubelet[2553]: W0117 12:20:15.093578 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.094407 kubelet[2553]: E0117 12:20:15.093647 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.094407 kubelet[2553]: E0117 12:20:15.093814 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.094407 kubelet[2553]: W0117 12:20:15.093823 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.094407 kubelet[2553]: E0117 12:20:15.093877 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.094407 kubelet[2553]: E0117 12:20:15.094040 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.094407 kubelet[2553]: W0117 12:20:15.094048 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.094407 kubelet[2553]: E0117 12:20:15.094087 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.094407 kubelet[2553]: E0117 12:20:15.094237 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.094829 kubelet[2553]: W0117 12:20:15.094244 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.094829 kubelet[2553]: E0117 12:20:15.094334 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.094829 kubelet[2553]: E0117 12:20:15.094741 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.094829 kubelet[2553]: W0117 12:20:15.094754 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.094829 kubelet[2553]: E0117 12:20:15.094800 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.095619 kubelet[2553]: E0117 12:20:15.095119 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.095619 kubelet[2553]: W0117 12:20:15.095131 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.095619 kubelet[2553]: E0117 12:20:15.095144 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.095619 kubelet[2553]: E0117 12:20:15.095481 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.096101 kubelet[2553]: W0117 12:20:15.095495 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.096407 kubelet[2553]: E0117 12:20:15.096126 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.096407 kubelet[2553]: E0117 12:20:15.096356 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.096407 kubelet[2553]: W0117 12:20:15.096366 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.096407 kubelet[2553]: E0117 12:20:15.096394 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.096706 kubelet[2553]: E0117 12:20:15.096547 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.096706 kubelet[2553]: W0117 12:20:15.096559 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.096706 kubelet[2553]: E0117 12:20:15.096642 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.096706 kubelet[2553]: E0117 12:20:15.096690 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.096706 kubelet[2553]: W0117 12:20:15.096698 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.096978 kubelet[2553]: E0117 12:20:15.096767 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.096978 kubelet[2553]: E0117 12:20:15.096816 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.096978 kubelet[2553]: W0117 12:20:15.096823 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.096978 kubelet[2553]: E0117 12:20:15.096889 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.096978 kubelet[2553]: E0117 12:20:15.096961 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.096978 kubelet[2553]: W0117 12:20:15.096969 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.097113 kubelet[2553]: E0117 12:20:15.097079 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.097455 kubelet[2553]: W0117 12:20:15.097085 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.097455 kubelet[2553]: E0117 12:20:15.097193 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.097455 kubelet[2553]: E0117 12:20:15.097092 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.097815 kubelet[2553]: E0117 12:20:15.097791 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.098008 kubelet[2553]: W0117 12:20:15.097890 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.098008 kubelet[2553]: E0117 12:20:15.097920 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.098731 kubelet[2553]: E0117 12:20:15.098701 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.098731 kubelet[2553]: W0117 12:20:15.098719 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.098731 kubelet[2553]: E0117 12:20:15.098736 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.099235 kubelet[2553]: E0117 12:20:15.099169 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.099235 kubelet[2553]: W0117 12:20:15.099181 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.099235 kubelet[2553]: E0117 12:20:15.099193 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.099862 kubelet[2553]: E0117 12:20:15.099754 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.099862 kubelet[2553]: W0117 12:20:15.099795 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.099862 kubelet[2553]: E0117 12:20:15.099812 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.100656 kubelet[2553]: E0117 12:20:15.100123 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.100656 kubelet[2553]: W0117 12:20:15.100135 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.100656 kubelet[2553]: E0117 12:20:15.100156 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.100656 kubelet[2553]: E0117 12:20:15.100347 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.100656 kubelet[2553]: W0117 12:20:15.100364 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.100656 kubelet[2553]: E0117 12:20:15.100382 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.102786 kubelet[2553]: E0117 12:20:15.102582 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.102786 kubelet[2553]: W0117 12:20:15.102600 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.102786 kubelet[2553]: E0117 12:20:15.102626 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.103401 kubelet[2553]: E0117 12:20:15.102917 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.103401 kubelet[2553]: W0117 12:20:15.102934 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.103401 kubelet[2553]: E0117 12:20:15.103230 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.104394 kubelet[2553]: E0117 12:20:15.104358 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.104394 kubelet[2553]: W0117 12:20:15.104375 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.104394 kubelet[2553]: E0117 12:20:15.104392 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.108288 kubelet[2553]: E0117 12:20:15.108256 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.108288 kubelet[2553]: W0117 12:20:15.108273 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.108390 kubelet[2553]: E0117 12:20:15.108301 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.109100 kubelet[2553]: E0117 12:20:15.108993 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.109100 kubelet[2553]: W0117 12:20:15.109096 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.109219 kubelet[2553]: E0117 12:20:15.109117 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.110867 kubelet[2553]: E0117 12:20:15.109324 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.110867 kubelet[2553]: W0117 12:20:15.109338 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.110867 kubelet[2553]: E0117 12:20:15.109373 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.110867 kubelet[2553]: E0117 12:20:15.109496 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.110867 kubelet[2553]: W0117 12:20:15.109505 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.110867 kubelet[2553]: E0117 12:20:15.109536 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.112032 kubelet[2553]: E0117 12:20:15.112015 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.112032 kubelet[2553]: W0117 12:20:15.112029 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.112545 kubelet[2553]: E0117 12:20:15.112080 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.112545 kubelet[2553]: E0117 12:20:15.112281 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.112545 kubelet[2553]: W0117 12:20:15.112290 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.112545 kubelet[2553]: E0117 12:20:15.112408 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.112545 kubelet[2553]: E0117 12:20:15.112505 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.112545 kubelet[2553]: W0117 12:20:15.112514 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.112545 kubelet[2553]: E0117 12:20:15.112524 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.192698 kubelet[2553]: E0117 12:20:15.192576 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:15.193358 containerd[1436]: time="2025-01-17T12:20:15.193297302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7955c75468-7fhxw,Uid:c0270584-604f-4f41-951e-5cbb44836516,Namespace:calico-system,Attempt:0,}" Jan 17 12:20:15.210469 kubelet[2553]: E0117 12:20:15.210445 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.210469 kubelet[2553]: W0117 12:20:15.210465 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.210680 kubelet[2553]: E0117 12:20:15.210484 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.210759 kubelet[2553]: E0117 12:20:15.210743 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.210759 kubelet[2553]: W0117 12:20:15.210757 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.210822 kubelet[2553]: E0117 12:20:15.210773 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.211008 kubelet[2553]: E0117 12:20:15.210970 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.211008 kubelet[2553]: W0117 12:20:15.210984 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.211008 kubelet[2553]: E0117 12:20:15.211001 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.211226 kubelet[2553]: E0117 12:20:15.211214 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.211226 kubelet[2553]: W0117 12:20:15.211225 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.211289 kubelet[2553]: E0117 12:20:15.211239 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.211448 kubelet[2553]: E0117 12:20:15.211436 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.211448 kubelet[2553]: W0117 12:20:15.211447 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.211504 kubelet[2553]: E0117 12:20:15.211460 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.211691 kubelet[2553]: E0117 12:20:15.211674 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.211691 kubelet[2553]: W0117 12:20:15.211686 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.211691 kubelet[2553]: E0117 12:20:15.211701 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.211915 kubelet[2553]: E0117 12:20:15.211904 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.211915 kubelet[2553]: W0117 12:20:15.211914 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.211994 kubelet[2553]: E0117 12:20:15.211958 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.212096 kubelet[2553]: E0117 12:20:15.212085 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.212096 kubelet[2553]: W0117 12:20:15.212094 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.212157 kubelet[2553]: E0117 12:20:15.212113 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.212251 kubelet[2553]: E0117 12:20:15.212241 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.212251 kubelet[2553]: W0117 12:20:15.212250 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.212316 kubelet[2553]: E0117 12:20:15.212271 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.212416 kubelet[2553]: E0117 12:20:15.212399 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.212416 kubelet[2553]: W0117 12:20:15.212409 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.212475 kubelet[2553]: E0117 12:20:15.212438 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.212576 kubelet[2553]: E0117 12:20:15.212564 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.212576 kubelet[2553]: W0117 12:20:15.212575 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.212646 kubelet[2553]: E0117 12:20:15.212595 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.212914 kubelet[2553]: E0117 12:20:15.212900 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.212914 kubelet[2553]: W0117 12:20:15.212914 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.212994 kubelet[2553]: E0117 12:20:15.212929 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.213133 kubelet[2553]: E0117 12:20:15.213118 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.213161 kubelet[2553]: W0117 12:20:15.213133 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.213161 kubelet[2553]: E0117 12:20:15.213157 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.213321 kubelet[2553]: E0117 12:20:15.213310 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.213321 kubelet[2553]: W0117 12:20:15.213320 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.213385 kubelet[2553]: E0117 12:20:15.213329 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.213636 kubelet[2553]: E0117 12:20:15.213567 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.213636 kubelet[2553]: W0117 12:20:15.213579 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.213636 kubelet[2553]: E0117 12:20:15.213614 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.213792 kubelet[2553]: E0117 12:20:15.213780 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.213792 kubelet[2553]: W0117 12:20:15.213792 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.213872 kubelet[2553]: E0117 12:20:15.213835 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.215090 kubelet[2553]: E0117 12:20:15.215071 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.215195 kubelet[2553]: W0117 12:20:15.215086 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.216834 kubelet[2553]: E0117 12:20:15.215506 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.216834 kubelet[2553]: E0117 12:20:15.215831 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.216834 kubelet[2553]: W0117 12:20:15.215856 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.216834 kubelet[2553]: E0117 12:20:15.215896 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.216834 kubelet[2553]: E0117 12:20:15.216632 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.216834 kubelet[2553]: W0117 12:20:15.216646 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.217139 kubelet[2553]: E0117 12:20:15.217044 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.217650 kubelet[2553]: E0117 12:20:15.217634 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.217650 kubelet[2553]: W0117 12:20:15.217649 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.217723 kubelet[2553]: E0117 12:20:15.217688 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.217906 kubelet[2553]: E0117 12:20:15.217893 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.217906 kubelet[2553]: W0117 12:20:15.217905 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.217979 kubelet[2553]: E0117 12:20:15.217959 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.218238 kubelet[2553]: E0117 12:20:15.218220 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.218238 kubelet[2553]: W0117 12:20:15.218233 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.218302 kubelet[2553]: E0117 12:20:15.218266 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.218629 kubelet[2553]: E0117 12:20:15.218614 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.218672 kubelet[2553]: W0117 12:20:15.218636 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.218672 kubelet[2553]: E0117 12:20:15.218653 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.219113 kubelet[2553]: E0117 12:20:15.219086 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.219113 kubelet[2553]: W0117 12:20:15.219100 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.219196 kubelet[2553]: E0117 12:20:15.219115 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.219376 kubelet[2553]: E0117 12:20:15.219328 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.219376 kubelet[2553]: W0117 12:20:15.219341 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.219376 kubelet[2553]: E0117 12:20:15.219350 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:15.224630 kubelet[2553]: E0117 12:20:15.224610 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:15.224630 kubelet[2553]: W0117 12:20:15.224628 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:15.224723 kubelet[2553]: E0117 12:20:15.224641 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:15.226854 containerd[1436]: time="2025-01-17T12:20:15.226739752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:15.226854 containerd[1436]: time="2025-01-17T12:20:15.226796832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:15.226854 containerd[1436]: time="2025-01-17T12:20:15.226807552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:15.227071 containerd[1436]: time="2025-01-17T12:20:15.226901952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:15.242116 kubelet[2553]: E0117 12:20:15.242090 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:15.243048 systemd[1]: Started cri-containerd-42e727421cb00b51ab001c521e224c142029f15e0d47b4c116cb2c2e9a31b661.scope - libcontainer container 42e727421cb00b51ab001c521e224c142029f15e0d47b4c116cb2c2e9a31b661. Jan 17 12:20:15.243593 containerd[1436]: time="2025-01-17T12:20:15.243149758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gx4lj,Uid:a37de252-7ccd-48ab-9ce4-f15fbcea1a68,Namespace:calico-system,Attempt:0,}" Jan 17 12:20:15.267130 containerd[1436]: time="2025-01-17T12:20:15.265188804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:15.267130 containerd[1436]: time="2025-01-17T12:20:15.265252644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:15.267130 containerd[1436]: time="2025-01-17T12:20:15.265268125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:15.267130 containerd[1436]: time="2025-01-17T12:20:15.265537125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:15.273155 containerd[1436]: time="2025-01-17T12:20:15.273113127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7955c75468-7fhxw,Uid:c0270584-604f-4f41-951e-5cbb44836516,Namespace:calico-system,Attempt:0,} returns sandbox id \"42e727421cb00b51ab001c521e224c142029f15e0d47b4c116cb2c2e9a31b661\"" Jan 17 12:20:15.274075 kubelet[2553]: E0117 12:20:15.273878 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:15.274686 containerd[1436]: time="2025-01-17T12:20:15.274662767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:20:15.289029 systemd[1]: Started cri-containerd-6a5fa848fb6e433dcd9f5b3716b9b73f283ba608d1e91b37a5e22ede58bef674.scope - libcontainer container 6a5fa848fb6e433dcd9f5b3716b9b73f283ba608d1e91b37a5e22ede58bef674. 
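The kubelet triplet repeated throughout this section comes from FlexVolume plugin probing: kubelet finds the driver directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, invokes the driver executable with the init argument, the executable is not present, so the call yields empty output, and decoding that empty output as JSON fails with "unexpected end of JSON input", which then surfaces in the plugins.go "Error dynamically probing plugins" line. A minimal Go sketch of that last decoding step, assuming a stand-in status struct rather than kubelet's real type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// initStatus stands in for whatever structure a FlexVolume init call is
// decoded into; the field here is illustrative, not kubelet's actual type.
type initStatus struct {
	Status string `json:"status"`
}

func main() {
	// The uds executable is missing, so the driver call produced no output at all.
	driverOutput := []byte("")

	var st initStatus
	if err := json.Unmarshal(driverOutput, &st); err != nil {
		fmt.Println(err) // prints: unexpected end of JSON input
	}
}
```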
Jan 17 12:20:15.308285 containerd[1436]: time="2025-01-17T12:20:15.308231018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gx4lj,Uid:a37de252-7ccd-48ab-9ce4-f15fbcea1a68,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a5fa848fb6e433dcd9f5b3716b9b73f283ba608d1e91b37a5e22ede58bef674\"" Jan 17 12:20:15.309069 kubelet[2553]: E0117 12:20:15.308814 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:16.311534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825897211.mount: Deactivated successfully. Jan 17 12:20:16.764644 containerd[1436]: time="2025-01-17T12:20:16.764596379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:16.765672 containerd[1436]: time="2025-01-17T12:20:16.765528620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 17 12:20:16.766385 containerd[1436]: time="2025-01-17T12:20:16.766352140Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:16.769639 containerd[1436]: time="2025-01-17T12:20:16.769499781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:16.770741 containerd[1436]: time="2025-01-17T12:20:16.770658061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.495856134s" Jan 17 12:20:16.770741 containerd[1436]: time="2025-01-17T12:20:16.770714981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 17 12:20:16.771908 containerd[1436]: time="2025-01-17T12:20:16.771854982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:20:16.788717 containerd[1436]: time="2025-01-17T12:20:16.788677107Z" level=info msg="CreateContainer within sandbox \"42e727421cb00b51ab001c521e224c142029f15e0d47b4c116cb2c2e9a31b661\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:20:16.799566 containerd[1436]: time="2025-01-17T12:20:16.799491630Z" level=info msg="CreateContainer within sandbox \"42e727421cb00b51ab001c521e224c142029f15e0d47b4c116cb2c2e9a31b661\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b558568cafed8545825fde9c14ee5c18f09e21d7e84fbf5794a122643fe956b9\"" Jan 17 12:20:16.800100 containerd[1436]: time="2025-01-17T12:20:16.800063710Z" level=info msg="StartContainer for \"b558568cafed8545825fde9c14ee5c18f09e21d7e84fbf5794a122643fe956b9\"" Jan 17 12:20:16.822013 systemd[1]: Started cri-containerd-b558568cafed8545825fde9c14ee5c18f09e21d7e84fbf5794a122643fe956b9.scope - libcontainer container b558568cafed8545825fde9c14ee5c18f09e21d7e84fbf5794a122643fe956b9. 
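The recurring dns.go "Nameserver limits exceeded" warnings indicate that the node's resolv.conf lists more nameservers than the kubelet will hand to pods; only the first entries are applied (here 1.1.1.1 1.0.0.1 8.8.8.8) and the remainder are omitted. A minimal sketch of that truncation, assuming a limit of three entries and a hypothetical resolv.conf with one server too many (the node's actual file is not shown in the log):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical resolv.conf content for illustration only.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
`
	const limit = 3 // assumed kubelet nameserver limit

	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > limit {
		servers = servers[:limit] // entries beyond the limit are dropped, triggering the warning
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
	// applied nameserver line: 1.1.1.1 1.0.0.1 8.8.8.8
}
```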
Jan 17 12:20:16.852264 containerd[1436]: time="2025-01-17T12:20:16.852215805Z" level=info msg="StartContainer for \"b558568cafed8545825fde9c14ee5c18f09e21d7e84fbf5794a122643fe956b9\" returns successfully" Jan 17 12:20:17.233281 kubelet[2553]: E0117 12:20:17.233229 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hjdnv" podUID="498cd002-4959-4e1e-94d0-79dfca8e8ebe" Jan 17 12:20:17.311787 kubelet[2553]: E0117 12:20:17.311757 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:17.322332 kubelet[2553]: I0117 12:20:17.322248 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7955c75468-7fhxw" podStartSLOduration=1.824937882 podStartE2EDuration="3.322233257s" podCreationTimestamp="2025-01-17 12:20:14 +0000 UTC" firstStartedPulling="2025-01-17 12:20:15.274388527 +0000 UTC m=+24.120449775" lastFinishedPulling="2025-01-17 12:20:16.771683862 +0000 UTC m=+25.617745150" observedRunningTime="2025-01-17 12:20:17.322038697 +0000 UTC m=+26.168099985" watchObservedRunningTime="2025-01-17 12:20:17.322233257 +0000 UTC m=+26.168294465" Jan 17 12:20:17.400627 kubelet[2553]: E0117 12:20:17.400590 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.400627 kubelet[2553]: W0117 12:20:17.400615 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.400773 kubelet[2553]: E0117 12:20:17.400636 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.400894 kubelet[2553]: E0117 12:20:17.400841 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.400894 kubelet[2553]: W0117 12:20:17.400879 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.400894 kubelet[2553]: E0117 12:20:17.400889 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.401082 kubelet[2553]: E0117 12:20:17.401063 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.401082 kubelet[2553]: W0117 12:20:17.401075 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.401133 kubelet[2553]: E0117 12:20:17.401084 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:17.401284 kubelet[2553]: E0117 12:20:17.401264 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.401284 kubelet[2553]: W0117 12:20:17.401278 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.401336 kubelet[2553]: E0117 12:20:17.401287 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.401481 kubelet[2553]: E0117 12:20:17.401461 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.401481 kubelet[2553]: W0117 12:20:17.401474 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.401530 kubelet[2553]: E0117 12:20:17.401484 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.401693 kubelet[2553]: E0117 12:20:17.401679 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.401693 kubelet[2553]: W0117 12:20:17.401689 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.401761 kubelet[2553]: E0117 12:20:17.401698 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.401868 kubelet[2553]: E0117 12:20:17.401856 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.401894 kubelet[2553]: W0117 12:20:17.401868 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.401894 kubelet[2553]: E0117 12:20:17.401877 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.402057 kubelet[2553]: E0117 12:20:17.402042 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.402057 kubelet[2553]: W0117 12:20:17.402055 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.402116 kubelet[2553]: E0117 12:20:17.402068 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:17.402234 kubelet[2553]: E0117 12:20:17.402222 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.402234 kubelet[2553]: W0117 12:20:17.402233 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.402285 kubelet[2553]: E0117 12:20:17.402242 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.402387 kubelet[2553]: E0117 12:20:17.402377 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.402422 kubelet[2553]: W0117 12:20:17.402387 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.402422 kubelet[2553]: E0117 12:20:17.402394 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.402541 kubelet[2553]: E0117 12:20:17.402530 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.402566 kubelet[2553]: W0117 12:20:17.402545 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.402566 kubelet[2553]: E0117 12:20:17.402554 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.402684 kubelet[2553]: E0117 12:20:17.402674 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.402712 kubelet[2553]: W0117 12:20:17.402689 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.402712 kubelet[2553]: E0117 12:20:17.402698 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.402835 kubelet[2553]: E0117 12:20:17.402824 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.402876 kubelet[2553]: W0117 12:20:17.402834 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.402876 kubelet[2553]: E0117 12:20:17.402865 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:17.403038 kubelet[2553]: E0117 12:20:17.403025 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.403038 kubelet[2553]: W0117 12:20:17.403035 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.403095 kubelet[2553]: E0117 12:20:17.403043 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.403216 kubelet[2553]: E0117 12:20:17.403205 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.403242 kubelet[2553]: W0117 12:20:17.403216 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.403242 kubelet[2553]: E0117 12:20:17.403224 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.425624 kubelet[2553]: E0117 12:20:17.425599 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.425624 kubelet[2553]: W0117 12:20:17.425626 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.425749 kubelet[2553]: E0117 12:20:17.425639 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.425857 kubelet[2553]: E0117 12:20:17.425830 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.425857 kubelet[2553]: W0117 12:20:17.425856 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.425940 kubelet[2553]: E0117 12:20:17.425869 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.426067 kubelet[2553]: E0117 12:20:17.426044 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.426067 kubelet[2553]: W0117 12:20:17.426065 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.426130 kubelet[2553]: E0117 12:20:17.426079 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:17.426342 kubelet[2553]: E0117 12:20:17.426316 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.426342 kubelet[2553]: W0117 12:20:17.426329 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.426342 kubelet[2553]: E0117 12:20:17.426342 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.426523 kubelet[2553]: E0117 12:20:17.426511 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.426557 kubelet[2553]: W0117 12:20:17.426532 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.426588 kubelet[2553]: E0117 12:20:17.426564 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.426771 kubelet[2553]: E0117 12:20:17.426758 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.426771 kubelet[2553]: W0117 12:20:17.426769 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.426834 kubelet[2553]: E0117 12:20:17.426782 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.426995 kubelet[2553]: E0117 12:20:17.426982 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.426995 kubelet[2553]: W0117 12:20:17.426993 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.427068 kubelet[2553]: E0117 12:20:17.427044 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.427283 kubelet[2553]: E0117 12:20:17.427272 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.427283 kubelet[2553]: W0117 12:20:17.427283 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.427406 kubelet[2553]: E0117 12:20:17.427368 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:17.427538 kubelet[2553]: E0117 12:20:17.427523 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.427538 kubelet[2553]: W0117 12:20:17.427535 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.427605 kubelet[2553]: E0117 12:20:17.427553 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.427716 kubelet[2553]: E0117 12:20:17.427705 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.427716 kubelet[2553]: W0117 12:20:17.427716 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.427770 kubelet[2553]: E0117 12:20:17.427727 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.427900 kubelet[2553]: E0117 12:20:17.427889 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.427900 kubelet[2553]: W0117 12:20:17.427899 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.427961 kubelet[2553]: E0117 12:20:17.427914 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.428104 kubelet[2553]: E0117 12:20:17.428091 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.428137 kubelet[2553]: W0117 12:20:17.428104 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.428137 kubelet[2553]: E0117 12:20:17.428117 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.428285 kubelet[2553]: E0117 12:20:17.428275 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.428320 kubelet[2553]: W0117 12:20:17.428285 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.428320 kubelet[2553]: E0117 12:20:17.428296 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:17.428471 kubelet[2553]: E0117 12:20:17.428460 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.428471 kubelet[2553]: W0117 12:20:17.428470 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.428600 kubelet[2553]: E0117 12:20:17.428482 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.428663 kubelet[2553]: E0117 12:20:17.428650 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.428699 kubelet[2553]: W0117 12:20:17.428683 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.428723 kubelet[2553]: E0117 12:20:17.428697 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.429292 kubelet[2553]: E0117 12:20:17.429277 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.429292 kubelet[2553]: W0117 12:20:17.429291 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.429396 kubelet[2553]: E0117 12:20:17.429302 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.434663 kubelet[2553]: E0117 12:20:17.434494 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.434663 kubelet[2553]: W0117 12:20:17.434516 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.434663 kubelet[2553]: E0117 12:20:17.434530 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:17.434912 kubelet[2553]: E0117 12:20:17.434887 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:17.436818 kubelet[2553]: W0117 12:20:17.434961 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:17.436818 kubelet[2553]: E0117 12:20:17.434979 2553 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:17.861031 containerd[1436]: time="2025-01-17T12:20:17.860980366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:17.861594 containerd[1436]: time="2025-01-17T12:20:17.861560686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 17 12:20:17.863143 containerd[1436]: time="2025-01-17T12:20:17.863091446Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:17.865442 containerd[1436]: time="2025-01-17T12:20:17.865397767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:17.867743 containerd[1436]: time="2025-01-17T12:20:17.867627968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.095584786s" Jan 17 12:20:17.867743 containerd[1436]: time="2025-01-17T12:20:17.867661968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 17 12:20:17.870231 containerd[1436]: time="2025-01-17T12:20:17.870191408Z" level=info msg="CreateContainer within sandbox \"6a5fa848fb6e433dcd9f5b3716b9b73f283ba608d1e91b37a5e22ede58bef674\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:20:17.887123 containerd[1436]: time="2025-01-17T12:20:17.887083693Z" level=info msg="CreateContainer within sandbox \"6a5fa848fb6e433dcd9f5b3716b9b73f283ba608d1e91b37a5e22ede58bef674\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9bd40eb681549db0c20c0c5ee4fafb6b71ea362c6790c89b89f50e43d9a15574\"" Jan 17 12:20:17.887731 containerd[1436]: time="2025-01-17T12:20:17.887704453Z" level=info msg="StartContainer for \"9bd40eb681549db0c20c0c5ee4fafb6b71ea362c6790c89b89f50e43d9a15574\"" Jan 17 12:20:17.913025 systemd[1]: Started cri-containerd-9bd40eb681549db0c20c0c5ee4fafb6b71ea362c6790c89b89f50e43d9a15574.scope - libcontainer container 9bd40eb681549db0c20c0c5ee4fafb6b71ea362c6790c89b89f50e43d9a15574. Jan 17 12:20:17.936900 containerd[1436]: time="2025-01-17T12:20:17.936821947Z" level=info msg="StartContainer for \"9bd40eb681549db0c20c0c5ee4fafb6b71ea362c6790c89b89f50e43d9a15574\" returns successfully" Jan 17 12:20:17.965880 systemd[1]: cri-containerd-9bd40eb681549db0c20c0c5ee4fafb6b71ea362c6790c89b89f50e43d9a15574.scope: Deactivated successfully. Jan 17 12:20:17.992585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bd40eb681549db0c20c0c5ee4fafb6b71ea362c6790c89b89f50e43d9a15574-rootfs.mount: Deactivated successfully. 
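The repeated kubelet errors above are FlexVolume probing: kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and expects a JSON status object on stdout, but the binary is not installed yet, so the empty output fails to unmarshal ("unexpected end of JSON input"). The flexvol-driver container created above is what eventually installs that driver. A minimal sketch of the init handshake a FlexVolume driver prints is given below; it is an illustrative stand-in, not Calico's actual uds driver.

```go
// Sketch of the FlexVolume "init" handshake kubelet expects on stdout.
// When the driver binary is missing, kubelet sees empty output and logs
// "unexpected end of JSON input", as in the entries above.
// Illustrative stand-in only, not Calico's real nodeagent~uds driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status: "Success",
			// attach=false: kubelet skips attach/detach and only calls mount/unmount.
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this sketch does not implement is reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```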
Jan 17 12:20:18.052021 containerd[1436]: time="2025-01-17T12:20:18.049594697Z" level=info msg="shim disconnected" id=9bd40eb681549db0c20c0c5ee4fafb6b71ea362c6790c89b89f50e43d9a15574 namespace=k8s.io Jan 17 12:20:18.052021 containerd[1436]: time="2025-01-17T12:20:18.052016458Z" level=warning msg="cleaning up after shim disconnected" id=9bd40eb681549db0c20c0c5ee4fafb6b71ea362c6790c89b89f50e43d9a15574 namespace=k8s.io Jan 17 12:20:18.052231 containerd[1436]: time="2025-01-17T12:20:18.052030738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:18.315863 kubelet[2553]: E0117 12:20:18.314688 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:18.316331 containerd[1436]: time="2025-01-17T12:20:18.316279126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:20:18.317727 kubelet[2553]: I0117 12:20:18.317678 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:20:18.318871 kubelet[2553]: E0117 12:20:18.318327 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:19.234190 kubelet[2553]: E0117 12:20:19.234135 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hjdnv" podUID="498cd002-4959-4e1e-94d0-79dfca8e8ebe" Jan 17 12:20:21.234484 kubelet[2553]: E0117 12:20:21.233713 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hjdnv" podUID="498cd002-4959-4e1e-94d0-79dfca8e8ebe" Jan 17 12:20:21.714163 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:46006.service - OpenSSH per-connection server daemon (10.0.0.1:46006). Jan 17 12:20:21.759346 sshd[3261]: Accepted publickey for core from 10.0.0.1 port 46006 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:21.761053 sshd[3261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:21.770392 systemd-logind[1420]: New session 8 of user core. Jan 17 12:20:21.777990 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:20:21.915007 sshd[3261]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:21.919025 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:46006.service: Deactivated successfully. Jan 17 12:20:21.920639 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:20:21.922081 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:20:21.923500 systemd-logind[1420]: Removed session 8. 
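The recurring "network is not ready ... cni plugin not initialized" errors for csi-node-driver-hjdnv reflect the runtime's NetworkReady condition, which stays false until a CNI config is in place. A one-off check of that condition over the CRI Status call might look like the sketch below; the socket path is an assumption.

```go
// Sketch: query the runtime conditions behind kubelet's
// "NetworkReady=false reason:NetworkPluginNotReady" messages above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	st, err := runtimeapi.NewRuntimeServiceClient(conn).
		Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Conditions include RuntimeReady and NetworkReady; the latter flips to
	// true only once a CNI config has been installed on the node.
	for _, c := range st.Status.Conditions {
		fmt.Printf("%s=%v reason=%q message=%q\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```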
Jan 17 12:20:22.044892 containerd[1436]: time="2025-01-17T12:20:22.044740753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:22.045910 containerd[1436]: time="2025-01-17T12:20:22.045833833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 17 12:20:22.046766 containerd[1436]: time="2025-01-17T12:20:22.046714874Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:22.048922 containerd[1436]: time="2025-01-17T12:20:22.048872674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:22.049960 containerd[1436]: time="2025-01-17T12:20:22.049928474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.733603988s" Jan 17 12:20:22.049960 containerd[1436]: time="2025-01-17T12:20:22.049959594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 17 12:20:22.052266 containerd[1436]: time="2025-01-17T12:20:22.052215155Z" level=info msg="CreateContainer within sandbox \"6a5fa848fb6e433dcd9f5b3716b9b73f283ba608d1e91b37a5e22ede58bef674\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:20:22.068992 containerd[1436]: time="2025-01-17T12:20:22.068941758Z" level=info msg="CreateContainer within sandbox \"6a5fa848fb6e433dcd9f5b3716b9b73f283ba608d1e91b37a5e22ede58bef674\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"679232cce4c9820c4641afec9263703fb23376ee38a91977576f81def3b8f3a5\"" Jan 17 12:20:22.069565 containerd[1436]: time="2025-01-17T12:20:22.069531438Z" level=info msg="StartContainer for \"679232cce4c9820c4641afec9263703fb23376ee38a91977576f81def3b8f3a5\"" Jan 17 12:20:22.101056 systemd[1]: Started cri-containerd-679232cce4c9820c4641afec9263703fb23376ee38a91977576f81def3b8f3a5.scope - libcontainer container 679232cce4c9820c4641afec9263703fb23376ee38a91977576f81def3b8f3a5. Jan 17 12:20:22.136371 containerd[1436]: time="2025-01-17T12:20:22.136318131Z" level=info msg="StartContainer for \"679232cce4c9820c4641afec9263703fb23376ee38a91977576f81def3b8f3a5\" returns successfully" Jan 17 12:20:22.323549 kubelet[2553]: E0117 12:20:22.323379 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:22.668617 systemd[1]: cri-containerd-679232cce4c9820c4641afec9263703fb23376ee38a91977576f81def3b8f3a5.scope: Deactivated successfully. Jan 17 12:20:22.697974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-679232cce4c9820c4641afec9263703fb23376ee38a91977576f81def3b8f3a5-rootfs.mount: Deactivated successfully. 
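The install-cni container above runs to completion and exits (hence the scope deactivation and rootfs unmount above); in Calico's setup its job is to install the CNI binaries and a config list on the host. A small sketch that loads such a config list with libcni is shown below; the conflist path and plugin directory are assumptions, not values read from this log.

```go
// Sketch: after install-cni exits, the runtime should find a CNI config on
// disk. Loading it with libcni shows what would be picked up.
// Conflist path and bin dir are illustrative assumptions.
package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	conf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-calico.conflist")
	if err != nil {
		log.Fatalf("no usable CNI config yet: %v", err)
	}
	fmt.Printf("loaded CNI network %q (cniVersion %s, %d plugin(s))\n",
		conf.Name, conf.CNIVersion, len(conf.Plugins))

	// Plugin binaries named in the conflist would be executed from here.
	_ = libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
}
```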
Jan 17 12:20:22.758962 kubelet[2553]: I0117 12:20:22.758930 2553 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:20:22.777777 containerd[1436]: time="2025-01-17T12:20:22.777708819Z" level=info msg="shim disconnected" id=679232cce4c9820c4641afec9263703fb23376ee38a91977576f81def3b8f3a5 namespace=k8s.io Jan 17 12:20:22.777777 containerd[1436]: time="2025-01-17T12:20:22.777766059Z" level=warning msg="cleaning up after shim disconnected" id=679232cce4c9820c4641afec9263703fb23376ee38a91977576f81def3b8f3a5 namespace=k8s.io Jan 17 12:20:22.777777 containerd[1436]: time="2025-01-17T12:20:22.777775619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:22.784386 kubelet[2553]: I0117 12:20:22.784328 2553 topology_manager.go:215] "Topology Admit Handler" podUID="8508a90b-1cd0-4874-986a-bb0a86ee7bc2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4x77z" Jan 17 12:20:22.785588 kubelet[2553]: I0117 12:20:22.784677 2553 topology_manager.go:215] "Topology Admit Handler" podUID="4dfb0df8-6e80-4aa6-8102-624ee7561a47" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g2d5f" Jan 17 12:20:22.788157 kubelet[2553]: I0117 12:20:22.787990 2553 topology_manager.go:215] "Topology Admit Handler" podUID="29d30ae5-d85e-4d42-ab11-9579ef57e019" podNamespace="calico-system" podName="calico-kube-controllers-5c77d568d8-sdh2z" Jan 17 12:20:22.790346 kubelet[2553]: I0117 12:20:22.790105 2553 topology_manager.go:215] "Topology Admit Handler" podUID="f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0" podNamespace="calico-apiserver" podName="calico-apiserver-5bc4c76444-98hnf" Jan 17 12:20:22.791113 kubelet[2553]: I0117 12:20:22.790824 2553 topology_manager.go:215] "Topology Admit Handler" podUID="f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9" podNamespace="calico-apiserver" podName="calico-apiserver-5bc4c76444-htl4l" Jan 17 12:20:22.794276 systemd[1]: Created slice kubepods-burstable-pod8508a90b_1cd0_4874_986a_bb0a86ee7bc2.slice - libcontainer container kubepods-burstable-pod8508a90b_1cd0_4874_986a_bb0a86ee7bc2.slice. Jan 17 12:20:22.806037 systemd[1]: Created slice kubepods-burstable-pod4dfb0df8_6e80_4aa6_8102_624ee7561a47.slice - libcontainer container kubepods-burstable-pod4dfb0df8_6e80_4aa6_8102_624ee7561a47.slice. Jan 17 12:20:22.813098 systemd[1]: Created slice kubepods-besteffort-pod29d30ae5_d85e_4d42_ab11_9579ef57e019.slice - libcontainer container kubepods-besteffort-pod29d30ae5_d85e_4d42_ab11_9579ef57e019.slice. Jan 17 12:20:22.820191 systemd[1]: Created slice kubepods-besteffort-podf6b4b3f1_b354_4daf_b595_1f9fa5ab20a0.slice - libcontainer container kubepods-besteffort-podf6b4b3f1_b354_4daf_b595_1f9fa5ab20a0.slice. Jan 17 12:20:22.825783 systemd[1]: Created slice kubepods-besteffort-podf5f42a5a_0912_41d4_9a34_4a6f23b3d2e9.slice - libcontainer container kubepods-besteffort-podf5f42a5a_0912_41d4_9a34_4a6f23b3d2e9.slice. 
Jan 17 12:20:22.860943 kubelet[2553]: I0117 12:20:22.860899 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29d30ae5-d85e-4d42-ab11-9579ef57e019-tigera-ca-bundle\") pod \"calico-kube-controllers-5c77d568d8-sdh2z\" (UID: \"29d30ae5-d85e-4d42-ab11-9579ef57e019\") " pod="calico-system/calico-kube-controllers-5c77d568d8-sdh2z" Jan 17 12:20:22.860943 kubelet[2553]: I0117 12:20:22.860947 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvgn\" (UniqueName: \"kubernetes.io/projected/f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0-kube-api-access-bxvgn\") pod \"calico-apiserver-5bc4c76444-98hnf\" (UID: \"f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0\") " pod="calico-apiserver/calico-apiserver-5bc4c76444-98hnf" Jan 17 12:20:22.861340 kubelet[2553]: I0117 12:20:22.860975 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dfb0df8-6e80-4aa6-8102-624ee7561a47-config-volume\") pod \"coredns-7db6d8ff4d-g2d5f\" (UID: \"4dfb0df8-6e80-4aa6-8102-624ee7561a47\") " pod="kube-system/coredns-7db6d8ff4d-g2d5f" Jan 17 12:20:22.861340 kubelet[2553]: I0117 12:20:22.860997 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8508a90b-1cd0-4874-986a-bb0a86ee7bc2-config-volume\") pod \"coredns-7db6d8ff4d-4x77z\" (UID: \"8508a90b-1cd0-4874-986a-bb0a86ee7bc2\") " pod="kube-system/coredns-7db6d8ff4d-4x77z" Jan 17 12:20:22.861340 kubelet[2553]: I0117 12:20:22.861153 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pqrw\" (UniqueName: \"kubernetes.io/projected/8508a90b-1cd0-4874-986a-bb0a86ee7bc2-kube-api-access-6pqrw\") pod \"coredns-7db6d8ff4d-4x77z\" (UID: \"8508a90b-1cd0-4874-986a-bb0a86ee7bc2\") " pod="kube-system/coredns-7db6d8ff4d-4x77z" Jan 17 12:20:22.861996 kubelet[2553]: I0117 12:20:22.861753 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlj92\" (UniqueName: \"kubernetes.io/projected/29d30ae5-d85e-4d42-ab11-9579ef57e019-kube-api-access-zlj92\") pod \"calico-kube-controllers-5c77d568d8-sdh2z\" (UID: \"29d30ae5-d85e-4d42-ab11-9579ef57e019\") " pod="calico-system/calico-kube-controllers-5c77d568d8-sdh2z" Jan 17 12:20:22.861996 kubelet[2553]: I0117 12:20:22.861809 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfkcm\" (UniqueName: \"kubernetes.io/projected/4dfb0df8-6e80-4aa6-8102-624ee7561a47-kube-api-access-bfkcm\") pod \"coredns-7db6d8ff4d-g2d5f\" (UID: \"4dfb0df8-6e80-4aa6-8102-624ee7561a47\") " pod="kube-system/coredns-7db6d8ff4d-g2d5f" Jan 17 12:20:22.861996 kubelet[2553]: I0117 12:20:22.861828 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9-calico-apiserver-certs\") pod \"calico-apiserver-5bc4c76444-htl4l\" (UID: \"f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9\") " pod="calico-apiserver/calico-apiserver-5bc4c76444-htl4l" Jan 17 12:20:22.862262 kubelet[2553]: I0117 12:20:22.862167 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cf2n6\" (UniqueName: \"kubernetes.io/projected/f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9-kube-api-access-cf2n6\") pod \"calico-apiserver-5bc4c76444-htl4l\" (UID: \"f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9\") " pod="calico-apiserver/calico-apiserver-5bc4c76444-htl4l" Jan 17 12:20:22.862262 kubelet[2553]: I0117 12:20:22.862212 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0-calico-apiserver-certs\") pod \"calico-apiserver-5bc4c76444-98hnf\" (UID: \"f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0\") " pod="calico-apiserver/calico-apiserver-5bc4c76444-98hnf" Jan 17 12:20:23.098532 kubelet[2553]: E0117 12:20:23.098123 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:23.100018 containerd[1436]: time="2025-01-17T12:20:23.099903122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4x77z,Uid:8508a90b-1cd0-4874-986a-bb0a86ee7bc2,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:23.109819 kubelet[2553]: E0117 12:20:23.109782 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:23.110986 containerd[1436]: time="2025-01-17T12:20:23.110241804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g2d5f,Uid:4dfb0df8-6e80-4aa6-8102-624ee7561a47,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:23.117159 containerd[1436]: time="2025-01-17T12:20:23.116981246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c77d568d8-sdh2z,Uid:29d30ae5-d85e-4d42-ab11-9579ef57e019,Namespace:calico-system,Attempt:0,}" Jan 17 12:20:23.123898 containerd[1436]: time="2025-01-17T12:20:23.123805327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc4c76444-98hnf,Uid:f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:20:23.130211 containerd[1436]: time="2025-01-17T12:20:23.130161608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc4c76444-htl4l,Uid:f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:20:23.259098 systemd[1]: Created slice kubepods-besteffort-pod498cd002_4959_4e1e_94d0_79dfca8e8ebe.slice - libcontainer container kubepods-besteffort-pod498cd002_4959_4e1e_94d0_79dfca8e8ebe.slice. 
Jan 17 12:20:23.272095 containerd[1436]: time="2025-01-17T12:20:23.269151834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hjdnv,Uid:498cd002-4959-4e1e-94d0-79dfca8e8ebe,Namespace:calico-system,Attempt:0,}" Jan 17 12:20:23.333270 kubelet[2553]: E0117 12:20:23.333237 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:23.334109 containerd[1436]: time="2025-01-17T12:20:23.334046646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:20:23.533165 containerd[1436]: time="2025-01-17T12:20:23.532970883Z" level=error msg="Failed to destroy network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.533165 containerd[1436]: time="2025-01-17T12:20:23.533132363Z" level=error msg="Failed to destroy network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.533431 containerd[1436]: time="2025-01-17T12:20:23.533389644Z" level=error msg="encountered an error cleaning up failed sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.533475 containerd[1436]: time="2025-01-17T12:20:23.533454684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c77d568d8-sdh2z,Uid:29d30ae5-d85e-4d42-ab11-9579ef57e019,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.533825 containerd[1436]: time="2025-01-17T12:20:23.533747644Z" level=error msg="encountered an error cleaning up failed sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.533825 containerd[1436]: time="2025-01-17T12:20:23.533794004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hjdnv,Uid:498cd002-4959-4e1e-94d0-79dfca8e8ebe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.535433 containerd[1436]: time="2025-01-17T12:20:23.535298084Z" level=error msg="Failed to destroy network for sandbox 
\"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.535816 containerd[1436]: time="2025-01-17T12:20:23.535713644Z" level=error msg="encountered an error cleaning up failed sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.535816 containerd[1436]: time="2025-01-17T12:20:23.535762524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g2d5f,Uid:4dfb0df8-6e80-4aa6-8102-624ee7561a47,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.535930 kubelet[2553]: E0117 12:20:23.535763 2553 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.536068 kubelet[2553]: E0117 12:20:23.536014 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hjdnv" Jan 17 12:20:23.536115 kubelet[2553]: E0117 12:20:23.536061 2553 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.536115 kubelet[2553]: E0117 12:20:23.536075 2553 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hjdnv" Jan 17 12:20:23.536115 kubelet[2553]: E0117 12:20:23.536100 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-g2d5f" Jan 17 12:20:23.536188 kubelet[2553]: E0117 12:20:23.536125 2553 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-g2d5f" Jan 17 12:20:23.536188 kubelet[2553]: E0117 12:20:23.536155 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-g2d5f_kube-system(4dfb0df8-6e80-4aa6-8102-624ee7561a47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-g2d5f_kube-system(4dfb0df8-6e80-4aa6-8102-624ee7561a47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-g2d5f" podUID="4dfb0df8-6e80-4aa6-8102-624ee7561a47" Jan 17 12:20:23.536379 containerd[1436]: time="2025-01-17T12:20:23.536276924Z" level=error msg="Failed to destroy network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.536749 kubelet[2553]: E0117 12:20:23.536126 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hjdnv_calico-system(498cd002-4959-4e1e-94d0-79dfca8e8ebe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hjdnv_calico-system(498cd002-4959-4e1e-94d0-79dfca8e8ebe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hjdnv" podUID="498cd002-4959-4e1e-94d0-79dfca8e8ebe" Jan 17 12:20:23.536837 containerd[1436]: time="2025-01-17T12:20:23.536625684Z" level=error msg="encountered an error cleaning up failed sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.536837 containerd[1436]: time="2025-01-17T12:20:23.536667444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc4c76444-98hnf,Uid:f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 
12:20:23.536943 kubelet[2553]: E0117 12:20:23.536795 2553 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.536943 kubelet[2553]: E0117 12:20:23.536872 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bc4c76444-98hnf" Jan 17 12:20:23.536943 kubelet[2553]: E0117 12:20:23.536887 2553 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bc4c76444-98hnf" Jan 17 12:20:23.537079 kubelet[2553]: E0117 12:20:23.536915 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bc4c76444-98hnf_calico-apiserver(f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bc4c76444-98hnf_calico-apiserver(f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bc4c76444-98hnf" podUID="f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0" Jan 17 12:20:23.537079 kubelet[2553]: E0117 12:20:23.535761 2553 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.537079 kubelet[2553]: E0117 12:20:23.537062 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c77d568d8-sdh2z" Jan 17 12:20:23.537175 kubelet[2553]: E0117 12:20:23.537084 2553 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c77d568d8-sdh2z" Jan 17 12:20:23.537175 kubelet[2553]: E0117 12:20:23.537118 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c77d568d8-sdh2z_calico-system(29d30ae5-d85e-4d42-ab11-9579ef57e019)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c77d568d8-sdh2z_calico-system(29d30ae5-d85e-4d42-ab11-9579ef57e019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c77d568d8-sdh2z" podUID="29d30ae5-d85e-4d42-ab11-9579ef57e019" Jan 17 12:20:23.537419 containerd[1436]: time="2025-01-17T12:20:23.537309844Z" level=error msg="Failed to destroy network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.538287 containerd[1436]: time="2025-01-17T12:20:23.538225884Z" level=error msg="encountered an error cleaning up failed sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.538960 containerd[1436]: time="2025-01-17T12:20:23.538857365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc4c76444-htl4l,Uid:f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.539166 kubelet[2553]: E0117 12:20:23.539061 2553 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.539491 kubelet[2553]: E0117 12:20:23.539190 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bc4c76444-htl4l" Jan 17 12:20:23.539491 kubelet[2553]: E0117 12:20:23.539214 2553 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bc4c76444-htl4l" Jan 17 12:20:23.539491 kubelet[2553]: E0117 12:20:23.539319 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bc4c76444-htl4l_calico-apiserver(f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bc4c76444-htl4l_calico-apiserver(f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bc4c76444-htl4l" podUID="f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9" Jan 17 12:20:23.539665 containerd[1436]: time="2025-01-17T12:20:23.539308885Z" level=error msg="Failed to destroy network for sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.540094 containerd[1436]: time="2025-01-17T12:20:23.540061365Z" level=error msg="encountered an error cleaning up failed sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.540976 containerd[1436]: time="2025-01-17T12:20:23.540926165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4x77z,Uid:8508a90b-1cd0-4874-986a-bb0a86ee7bc2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.541182 kubelet[2553]: E0117 12:20:23.541124 2553 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:23.541228 kubelet[2553]: E0117 12:20:23.541187 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4x77z" Jan 17 12:20:23.541228 
kubelet[2553]: E0117 12:20:23.541206 2553 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4x77z" Jan 17 12:20:23.541278 kubelet[2553]: E0117 12:20:23.541252 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4x77z_kube-system(8508a90b-1cd0-4874-986a-bb0a86ee7bc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4x77z_kube-system(8508a90b-1cd0-4874-986a-bb0a86ee7bc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4x77z" podUID="8508a90b-1cd0-4874-986a-bb0a86ee7bc2" Jan 17 12:20:24.067556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47-shm.mount: Deactivated successfully. Jan 17 12:20:24.067654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61-shm.mount: Deactivated successfully. Jan 17 12:20:24.335923 kubelet[2553]: I0117 12:20:24.335802 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:24.338572 containerd[1436]: time="2025-01-17T12:20:24.338037790Z" level=info msg="StopPodSandbox for \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\"" Jan 17 12:20:24.338914 kubelet[2553]: I0117 12:20:24.338387 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:24.338965 containerd[1436]: time="2025-01-17T12:20:24.338907670Z" level=info msg="Ensure that sandbox 648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47 in task-service has been cleanup successfully" Jan 17 12:20:24.340231 containerd[1436]: time="2025-01-17T12:20:24.339024630Z" level=info msg="StopPodSandbox for \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\"" Jan 17 12:20:24.340231 containerd[1436]: time="2025-01-17T12:20:24.339869550Z" level=info msg="Ensure that sandbox a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3 in task-service has been cleanup successfully" Jan 17 12:20:24.341484 kubelet[2553]: I0117 12:20:24.341443 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:24.342091 containerd[1436]: time="2025-01-17T12:20:24.342055351Z" level=info msg="StopPodSandbox for \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\"" Jan 17 12:20:24.342986 containerd[1436]: time="2025-01-17T12:20:24.342935871Z" level=info msg="Ensure that sandbox 21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1 in task-service has been cleanup successfully" Jan 17 12:20:24.345800 
kubelet[2553]: I0117 12:20:24.345761 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:24.347995 containerd[1436]: time="2025-01-17T12:20:24.347965832Z" level=info msg="StopPodSandbox for \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\"" Jan 17 12:20:24.348271 containerd[1436]: time="2025-01-17T12:20:24.348248272Z" level=info msg="Ensure that sandbox ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1 in task-service has been cleanup successfully" Jan 17 12:20:24.349430 kubelet[2553]: I0117 12:20:24.349386 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:24.350210 containerd[1436]: time="2025-01-17T12:20:24.350000192Z" level=info msg="StopPodSandbox for \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\"" Jan 17 12:20:24.350571 containerd[1436]: time="2025-01-17T12:20:24.350533072Z" level=info msg="Ensure that sandbox bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc in task-service has been cleanup successfully" Jan 17 12:20:24.352497 kubelet[2553]: I0117 12:20:24.352463 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:24.354072 containerd[1436]: time="2025-01-17T12:20:24.353693273Z" level=info msg="StopPodSandbox for \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\"" Jan 17 12:20:24.354072 containerd[1436]: time="2025-01-17T12:20:24.353864593Z" level=info msg="Ensure that sandbox 881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61 in task-service has been cleanup successfully" Jan 17 12:20:24.382646 containerd[1436]: time="2025-01-17T12:20:24.382586998Z" level=error msg="StopPodSandbox for \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\" failed" error="failed to destroy network for sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:24.382886 kubelet[2553]: E0117 12:20:24.382836 2553 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:24.382945 kubelet[2553]: E0117 12:20:24.382906 2553 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47"} Jan 17 12:20:24.382980 kubelet[2553]: E0117 12:20:24.382964 2553 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4dfb0df8-6e80-4aa6-8102-624ee7561a47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:24.383047 kubelet[2553]: E0117 12:20:24.382984 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4dfb0df8-6e80-4aa6-8102-624ee7561a47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-g2d5f" podUID="4dfb0df8-6e80-4aa6-8102-624ee7561a47" Jan 17 12:20:24.384146 containerd[1436]: time="2025-01-17T12:20:24.384098958Z" level=error msg="StopPodSandbox for \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\" failed" error="failed to destroy network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:24.384550 kubelet[2553]: E0117 12:20:24.384513 2553 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:24.384625 kubelet[2553]: E0117 12:20:24.384559 2553 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3"} Jan 17 12:20:24.384625 kubelet[2553]: E0117 12:20:24.384594 2553 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:24.384625 kubelet[2553]: E0117 12:20:24.384614 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bc4c76444-htl4l" podUID="f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9" Jan 17 12:20:24.394298 containerd[1436]: time="2025-01-17T12:20:24.394235160Z" level=error msg="StopPodSandbox for \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\" failed" error="failed to destroy network for sandbox 
\"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:24.394543 kubelet[2553]: E0117 12:20:24.394501 2553 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:24.394610 kubelet[2553]: E0117 12:20:24.394551 2553 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61"} Jan 17 12:20:24.394610 kubelet[2553]: E0117 12:20:24.394585 2553 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8508a90b-1cd0-4874-986a-bb0a86ee7bc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:24.394687 kubelet[2553]: E0117 12:20:24.394627 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8508a90b-1cd0-4874-986a-bb0a86ee7bc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4x77z" podUID="8508a90b-1cd0-4874-986a-bb0a86ee7bc2" Jan 17 12:20:24.396920 containerd[1436]: time="2025-01-17T12:20:24.396870760Z" level=error msg="StopPodSandbox for \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\" failed" error="failed to destroy network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:24.397216 kubelet[2553]: E0117 12:20:24.397175 2553 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:24.397500 kubelet[2553]: E0117 12:20:24.397472 2553 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1"} Jan 17 12:20:24.397532 kubelet[2553]: E0117 12:20:24.397517 2553 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:24.397590 kubelet[2553]: E0117 12:20:24.397543 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bc4c76444-98hnf" podUID="f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0" Jan 17 12:20:24.398722 containerd[1436]: time="2025-01-17T12:20:24.398679321Z" level=error msg="StopPodSandbox for \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\" failed" error="failed to destroy network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:24.400096 kubelet[2553]: E0117 12:20:24.400063 2553 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:24.400298 kubelet[2553]: E0117 12:20:24.400215 2553 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1"} Jan 17 12:20:24.400298 kubelet[2553]: E0117 12:20:24.400252 2553 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29d30ae5-d85e-4d42-ab11-9579ef57e019\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:24.400298 kubelet[2553]: E0117 12:20:24.400273 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29d30ae5-d85e-4d42-ab11-9579ef57e019\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-5c77d568d8-sdh2z" podUID="29d30ae5-d85e-4d42-ab11-9579ef57e019" Jan 17 12:20:24.401999 containerd[1436]: time="2025-01-17T12:20:24.401959401Z" level=error msg="StopPodSandbox for \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\" failed" error="failed to destroy network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:24.402175 kubelet[2553]: E0117 12:20:24.402137 2553 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:24.402217 kubelet[2553]: E0117 12:20:24.402176 2553 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc"} Jan 17 12:20:24.402217 kubelet[2553]: E0117 12:20:24.402204 2553 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"498cd002-4959-4e1e-94d0-79dfca8e8ebe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:24.402281 kubelet[2553]: E0117 12:20:24.402222 2553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"498cd002-4959-4e1e-94d0-79dfca8e8ebe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hjdnv" podUID="498cd002-4959-4e1e-94d0-79dfca8e8ebe" Jan 17 12:20:25.162199 kubelet[2553]: I0117 12:20:25.162151 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:20:25.162941 kubelet[2553]: E0117 12:20:25.162914 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:25.355342 kubelet[2553]: E0117 12:20:25.355292 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:26.948198 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:33628.service - OpenSSH per-connection server daemon (10.0.0.1:33628). 
Jan 17 12:20:26.987026 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 33628 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:26.988626 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:26.993447 systemd-logind[1420]: New session 9 of user core. Jan 17 12:20:27.001039 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:20:27.135709 sshd[3709]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:27.140273 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:20:27.141114 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:33628.service: Deactivated successfully. Jan 17 12:20:27.143979 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:20:27.146141 systemd-logind[1420]: Removed session 9. Jan 17 12:20:27.404338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231501794.mount: Deactivated successfully. Jan 17 12:20:27.757829 containerd[1436]: time="2025-01-17T12:20:27.757702814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:27.758535 containerd[1436]: time="2025-01-17T12:20:27.758498894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 17 12:20:27.759954 containerd[1436]: time="2025-01-17T12:20:27.759903254Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:27.761614 containerd[1436]: time="2025-01-17T12:20:27.761582655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:27.762297 containerd[1436]: time="2025-01-17T12:20:27.762213015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.428121689s" Jan 17 12:20:27.762297 containerd[1436]: time="2025-01-17T12:20:27.762248455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 17 12:20:27.770012 containerd[1436]: time="2025-01-17T12:20:27.769967416Z" level=info msg="CreateContainer within sandbox \"6a5fa848fb6e433dcd9f5b3716b9b73f283ba608d1e91b37a5e22ede58bef674\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:20:27.784666 containerd[1436]: time="2025-01-17T12:20:27.784606778Z" level=info msg="CreateContainer within sandbox \"6a5fa848fb6e433dcd9f5b3716b9b73f283ba608d1e91b37a5e22ede58bef674\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c336347a1d7e23d30361747ace432d739bbce99a222dd01b783029ea6237d457\"" Jan 17 12:20:27.785228 containerd[1436]: time="2025-01-17T12:20:27.785201978Z" level=info msg="StartContainer for \"c336347a1d7e23d30361747ace432d739bbce99a222dd01b783029ea6237d457\"" Jan 17 12:20:27.844055 systemd[1]: Started cri-containerd-c336347a1d7e23d30361747ace432d739bbce99a222dd01b783029ea6237d457.scope - libcontainer container 
c336347a1d7e23d30361747ace432d739bbce99a222dd01b783029ea6237d457. Jan 17 12:20:27.915156 containerd[1436]: time="2025-01-17T12:20:27.915108877Z" level=info msg="StartContainer for \"c336347a1d7e23d30361747ace432d739bbce99a222dd01b783029ea6237d457\" returns successfully" Jan 17 12:20:28.031885 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:20:28.032014 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:20:28.367200 kubelet[2553]: E0117 12:20:28.367166 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:29.369645 kubelet[2553]: E0117 12:20:29.369616 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:29.441884 kernel: bpftool[3964]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:20:29.601750 systemd-networkd[1375]: vxlan.calico: Link UP Jan 17 12:20:29.601760 systemd-networkd[1375]: vxlan.calico: Gained carrier Jan 17 12:20:30.371157 kubelet[2553]: E0117 12:20:30.371125 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:30.883989 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Jan 17 12:20:32.149959 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:33640.service - OpenSSH per-connection server daemon (10.0.0.1:33640). Jan 17 12:20:32.193667 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 33640 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:32.195303 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:32.199056 systemd-logind[1420]: New session 10 of user core. Jan 17 12:20:32.218027 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:20:32.345010 sshd[4064]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:32.355577 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:33640.service: Deactivated successfully. Jan 17 12:20:32.358479 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:20:32.360476 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:20:32.371158 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:33646.service - OpenSSH per-connection server daemon (10.0.0.1:33646). Jan 17 12:20:32.372811 systemd-logind[1420]: Removed session 10. Jan 17 12:20:32.402721 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 33646 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:32.404318 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:32.408691 systemd-logind[1420]: New session 11 of user core. Jan 17 12:20:32.417034 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:20:32.596594 sshd[4080]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:32.606801 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:33646.service: Deactivated successfully. Jan 17 12:20:32.610517 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:20:32.613297 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. 
Jan 17 12:20:32.619404 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:53422.service - OpenSSH per-connection server daemon (10.0.0.1:53422). Jan 17 12:20:32.622591 systemd-logind[1420]: Removed session 11. Jan 17 12:20:32.653784 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 53422 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:32.655371 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:32.659343 systemd-logind[1420]: New session 12 of user core. Jan 17 12:20:32.670067 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:20:32.788522 sshd[4093]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:32.792087 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:53422.service: Deactivated successfully. Jan 17 12:20:32.794156 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:20:32.795146 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:20:32.796154 systemd-logind[1420]: Removed session 12. Jan 17 12:20:37.235026 containerd[1436]: time="2025-01-17T12:20:37.234980262Z" level=info msg="StopPodSandbox for \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\"" Jan 17 12:20:37.244214 containerd[1436]: time="2025-01-17T12:20:37.244102222Z" level=info msg="StopPodSandbox for \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\"" Jan 17 12:20:37.324928 kubelet[2553]: I0117 12:20:37.324867 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gx4lj" podStartSLOduration=10.871192951 podStartE2EDuration="23.324837228s" podCreationTimestamp="2025-01-17 12:20:14 +0000 UTC" firstStartedPulling="2025-01-17 12:20:15.309495178 +0000 UTC m=+24.155556386" lastFinishedPulling="2025-01-17 12:20:27.763139375 +0000 UTC m=+36.609200663" observedRunningTime="2025-01-17 12:20:28.394165062 +0000 UTC m=+37.240226310" watchObservedRunningTime="2025-01-17 12:20:37.324837228 +0000 UTC m=+46.170898476" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.325 [INFO][4157] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.326 [INFO][4157] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" iface="eth0" netns="/var/run/netns/cni-94f469b9-2d6c-85de-ab10-34bd5c459d40" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.327 [INFO][4157] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" iface="eth0" netns="/var/run/netns/cni-94f469b9-2d6c-85de-ab10-34bd5c459d40" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.332 [INFO][4157] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" iface="eth0" netns="/var/run/netns/cni-94f469b9-2d6c-85de-ab10-34bd5c459d40" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.332 [INFO][4157] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.332 [INFO][4157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.426 [INFO][4167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" HandleID="k8s-pod-network.ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.426 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.426 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.436 [WARNING][4167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" HandleID="k8s-pod-network.ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.436 [INFO][4167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" HandleID="k8s-pod-network.ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.438 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:37.444148 containerd[1436]: 2025-01-17 12:20:37.442 [INFO][4157] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:37.445058 containerd[1436]: time="2025-01-17T12:20:37.444954438Z" level=info msg="TearDown network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\" successfully" Jan 17 12:20:37.445058 containerd[1436]: time="2025-01-17T12:20:37.444985558Z" level=info msg="StopPodSandbox for \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\" returns successfully" Jan 17 12:20:37.445818 containerd[1436]: time="2025-01-17T12:20:37.445789118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c77d568d8-sdh2z,Uid:29d30ae5-d85e-4d42-ab11-9579ef57e019,Namespace:calico-system,Attempt:1,}" Jan 17 12:20:37.446263 systemd[1]: run-netns-cni\x2d94f469b9\x2d2d6c\x2d85de\x2dab10\x2d34bd5c459d40.mount: Deactivated successfully. Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.326 [INFO][4143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.326 [INFO][4143] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" iface="eth0" netns="/var/run/netns/cni-75c997a6-5dc9-08da-d1a4-513b54de60bc" Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.327 [INFO][4143] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" iface="eth0" netns="/var/run/netns/cni-75c997a6-5dc9-08da-d1a4-513b54de60bc" Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.332 [INFO][4143] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" iface="eth0" netns="/var/run/netns/cni-75c997a6-5dc9-08da-d1a4-513b54de60bc" Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.332 [INFO][4143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.332 [INFO][4143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.426 [INFO][4166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" HandleID="k8s-pod-network.bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.426 [INFO][4166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.438 [INFO][4166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.448 [WARNING][4166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" HandleID="k8s-pod-network.bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.449 [INFO][4166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" HandleID="k8s-pod-network.bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.450 [INFO][4166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:37.454296 containerd[1436]: 2025-01-17 12:20:37.452 [INFO][4143] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:37.456150 containerd[1436]: time="2025-01-17T12:20:37.454379038Z" level=info msg="TearDown network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\" successfully" Jan 17 12:20:37.456150 containerd[1436]: time="2025-01-17T12:20:37.454402758Z" level=info msg="StopPodSandbox for \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\" returns successfully" Jan 17 12:20:37.456150 containerd[1436]: time="2025-01-17T12:20:37.455448958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hjdnv,Uid:498cd002-4959-4e1e-94d0-79dfca8e8ebe,Namespace:calico-system,Attempt:1,}" Jan 17 12:20:37.456177 systemd[1]: run-netns-cni\x2d75c997a6\x2d5dc9\x2d08da\x2dd1a4\x2d513b54de60bc.mount: Deactivated successfully. Jan 17 12:20:37.576930 systemd-networkd[1375]: caliddfa380f802: Link UP Jan 17 12:20:37.577113 systemd-networkd[1375]: caliddfa380f802: Gained carrier Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.504 [INFO][4181] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0 calico-kube-controllers-5c77d568d8- calico-system 29d30ae5-d85e-4d42-ab11-9579ef57e019 965 0 2025-01-17 12:20:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c77d568d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c77d568d8-sdh2z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliddfa380f802 [] []}} ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Namespace="calico-system" Pod="calico-kube-controllers-5c77d568d8-sdh2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.504 [INFO][4181] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Namespace="calico-system" Pod="calico-kube-controllers-5c77d568d8-sdh2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.532 [INFO][4208] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" HandleID="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.547 [INFO][4208] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" HandleID="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f39e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c77d568d8-sdh2z", "timestamp":"2025-01-17 12:20:37.532660004 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.547 [INFO][4208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.547 [INFO][4208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.547 [INFO][4208] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.549 [INFO][4208] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" host="localhost" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.554 [INFO][4208] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.558 [INFO][4208] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.560 [INFO][4208] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.561 [INFO][4208] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.561 [INFO][4208] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" host="localhost" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.563 [INFO][4208] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27 Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.566 [INFO][4208] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" host="localhost" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.570 [INFO][4208] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" host="localhost" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.570 [INFO][4208] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" host="localhost" Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.570 [INFO][4208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:20:37.588481 containerd[1436]: 2025-01-17 12:20:37.570 [INFO][4208] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" HandleID="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.589329 containerd[1436]: 2025-01-17 12:20:37.573 [INFO][4181] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Namespace="calico-system" Pod="calico-kube-controllers-5c77d568d8-sdh2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0", GenerateName:"calico-kube-controllers-5c77d568d8-", Namespace:"calico-system", SelfLink:"", UID:"29d30ae5-d85e-4d42-ab11-9579ef57e019", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c77d568d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c77d568d8-sdh2z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliddfa380f802", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:37.589329 containerd[1436]: 2025-01-17 12:20:37.573 [INFO][4181] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Namespace="calico-system" Pod="calico-kube-controllers-5c77d568d8-sdh2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.589329 containerd[1436]: 2025-01-17 12:20:37.574 [INFO][4181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddfa380f802 ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Namespace="calico-system" Pod="calico-kube-controllers-5c77d568d8-sdh2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.589329 containerd[1436]: 2025-01-17 12:20:37.576 [INFO][4181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Namespace="calico-system" Pod="calico-kube-controllers-5c77d568d8-sdh2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.589329 containerd[1436]: 2025-01-17 12:20:37.576 [INFO][4181] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Namespace="calico-system" Pod="calico-kube-controllers-5c77d568d8-sdh2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0", GenerateName:"calico-kube-controllers-5c77d568d8-", Namespace:"calico-system", SelfLink:"", UID:"29d30ae5-d85e-4d42-ab11-9579ef57e019", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c77d568d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27", Pod:"calico-kube-controllers-5c77d568d8-sdh2z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliddfa380f802", MAC:"22:a9:6d:c3:c7:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:37.589329 containerd[1436]: 2025-01-17 12:20:37.586 [INFO][4181] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Namespace="calico-system" Pod="calico-kube-controllers-5c77d568d8-sdh2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:37.606838 systemd-networkd[1375]: cali13c5d7e8272: Link UP Jan 17 12:20:37.607155 systemd-networkd[1375]: cali13c5d7e8272: Gained carrier Jan 17 12:20:37.614895 containerd[1436]: time="2025-01-17T12:20:37.614798330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:37.614895 containerd[1436]: time="2025-01-17T12:20:37.614876970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:37.615124 containerd[1436]: time="2025-01-17T12:20:37.614891490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:37.615124 containerd[1436]: time="2025-01-17T12:20:37.614977090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.506 [INFO][4192] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hjdnv-eth0 csi-node-driver- calico-system 498cd002-4959-4e1e-94d0-79dfca8e8ebe 966 0 2025-01-17 12:20:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hjdnv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali13c5d7e8272 [] []}} ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Namespace="calico-system" Pod="csi-node-driver-hjdnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--hjdnv-" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.506 [INFO][4192] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Namespace="calico-system" Pod="csi-node-driver-hjdnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.532 [INFO][4210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" HandleID="k8s-pod-network.680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.549 [INFO][4210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" HandleID="k8s-pod-network.680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000373ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hjdnv", "timestamp":"2025-01-17 12:20:37.532950124 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.550 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.570 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.570 [INFO][4210] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.572 [INFO][4210] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" host="localhost" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.578 [INFO][4210] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.582 [INFO][4210] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.586 [INFO][4210] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.590 [INFO][4210] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.590 [INFO][4210] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" host="localhost" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.592 [INFO][4210] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3 Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.596 [INFO][4210] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" host="localhost" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.602 [INFO][4210] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" host="localhost" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.602 [INFO][4210] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" host="localhost" Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.602 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:20:37.619868 containerd[1436]: 2025-01-17 12:20:37.602 [INFO][4210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" HandleID="k8s-pod-network.680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.620352 containerd[1436]: 2025-01-17 12:20:37.604 [INFO][4192] cni-plugin/k8s.go 386: Populated endpoint ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Namespace="calico-system" Pod="csi-node-driver-hjdnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--hjdnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hjdnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"498cd002-4959-4e1e-94d0-79dfca8e8ebe", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hjdnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13c5d7e8272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:37.620352 containerd[1436]: 2025-01-17 12:20:37.604 [INFO][4192] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Namespace="calico-system" Pod="csi-node-driver-hjdnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.620352 containerd[1436]: 2025-01-17 12:20:37.604 [INFO][4192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13c5d7e8272 ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Namespace="calico-system" Pod="csi-node-driver-hjdnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.620352 containerd[1436]: 2025-01-17 12:20:37.606 [INFO][4192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Namespace="calico-system" Pod="csi-node-driver-hjdnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.620352 containerd[1436]: 2025-01-17 12:20:37.607 [INFO][4192] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Namespace="calico-system" Pod="csi-node-driver-hjdnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--hjdnv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hjdnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"498cd002-4959-4e1e-94d0-79dfca8e8ebe", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3", Pod:"csi-node-driver-hjdnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13c5d7e8272", MAC:"de:43:58:39:49:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:37.620352 containerd[1436]: 2025-01-17 12:20:37.616 [INFO][4192] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3" Namespace="calico-system" Pod="csi-node-driver-hjdnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:37.633025 systemd[1]: Started cri-containerd-2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27.scope - libcontainer container 2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27. Jan 17 12:20:37.637788 containerd[1436]: time="2025-01-17T12:20:37.637641812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:37.637788 containerd[1436]: time="2025-01-17T12:20:37.637721092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:37.637788 containerd[1436]: time="2025-01-17T12:20:37.637733172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:37.638029 containerd[1436]: time="2025-01-17T12:20:37.637818132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:37.645478 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:20:37.655001 systemd[1]: Started cri-containerd-680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3.scope - libcontainer container 680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3. 
Jan 17 12:20:37.663887 containerd[1436]: time="2025-01-17T12:20:37.663834094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c77d568d8-sdh2z,Uid:29d30ae5-d85e-4d42-ab11-9579ef57e019,Namespace:calico-system,Attempt:1,} returns sandbox id \"2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27\"" Jan 17 12:20:37.665548 containerd[1436]: time="2025-01-17T12:20:37.665516934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:20:37.666193 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:20:37.674642 containerd[1436]: time="2025-01-17T12:20:37.674612415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hjdnv,Uid:498cd002-4959-4e1e-94d0-79dfca8e8ebe,Namespace:calico-system,Attempt:1,} returns sandbox id \"680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3\"" Jan 17 12:20:37.802294 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:53426.service - OpenSSH per-connection server daemon (10.0.0.1:53426). Jan 17 12:20:37.844066 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 53426 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:37.846571 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:37.850691 systemd-logind[1420]: New session 13 of user core. Jan 17 12:20:37.859993 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:20:38.003687 sshd[4338]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:38.007563 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:53426.service: Deactivated successfully. Jan 17 12:20:38.009179 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:20:38.010468 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:20:38.011667 systemd-logind[1420]: Removed session 13. Jan 17 12:20:38.234379 containerd[1436]: time="2025-01-17T12:20:38.234331136Z" level=info msg="StopPodSandbox for \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\"" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.277 [INFO][4368] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.277 [INFO][4368] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" iface="eth0" netns="/var/run/netns/cni-bf9920c4-51b9-036e-05e0-8a3475430c8a" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.277 [INFO][4368] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" iface="eth0" netns="/var/run/netns/cni-bf9920c4-51b9-036e-05e0-8a3475430c8a" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.278 [INFO][4368] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" iface="eth0" netns="/var/run/netns/cni-bf9920c4-51b9-036e-05e0-8a3475430c8a" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.278 [INFO][4368] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.278 [INFO][4368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.298 [INFO][4376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" HandleID="k8s-pod-network.648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.298 [INFO][4376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.298 [INFO][4376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.308 [WARNING][4376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" HandleID="k8s-pod-network.648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.308 [INFO][4376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" HandleID="k8s-pod-network.648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.309 [INFO][4376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:38.313344 containerd[1436]: 2025-01-17 12:20:38.311 [INFO][4368] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:38.313953 containerd[1436]: time="2025-01-17T12:20:38.313479382Z" level=info msg="TearDown network for sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\" successfully" Jan 17 12:20:38.313953 containerd[1436]: time="2025-01-17T12:20:38.313507862Z" level=info msg="StopPodSandbox for \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\" returns successfully" Jan 17 12:20:38.313999 kubelet[2553]: E0117 12:20:38.313822 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:38.314209 containerd[1436]: time="2025-01-17T12:20:38.314179222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g2d5f,Uid:4dfb0df8-6e80-4aa6-8102-624ee7561a47,Namespace:kube-system,Attempt:1,}" Jan 17 12:20:38.418656 systemd-networkd[1375]: cali3cf8cf6ae20: Link UP Jan 17 12:20:38.419598 systemd-networkd[1375]: cali3cf8cf6ae20: Gained carrier Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.354 [INFO][4385] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0 coredns-7db6d8ff4d- kube-system 4dfb0df8-6e80-4aa6-8102-624ee7561a47 981 0 2025-01-17 12:20:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-g2d5f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3cf8cf6ae20 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g2d5f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g2d5f-" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.354 [INFO][4385] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g2d5f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.379 [INFO][4398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" HandleID="k8s-pod-network.122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.391 [INFO][4398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" HandleID="k8s-pod-network.122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000278770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-g2d5f", "timestamp":"2025-01-17 12:20:38.379721267 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 
12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.391 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.391 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.391 [INFO][4398] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.393 [INFO][4398] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" host="localhost" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.396 [INFO][4398] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.400 [INFO][4398] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.401 [INFO][4398] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.403 [INFO][4398] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.403 [INFO][4398] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" host="localhost" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.405 [INFO][4398] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.409 [INFO][4398] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" host="localhost" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.413 [INFO][4398] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" host="localhost" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.413 [INFO][4398] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" host="localhost" Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.413 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:20:38.429434 containerd[1436]: 2025-01-17 12:20:38.413 [INFO][4398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" HandleID="k8s-pod-network.122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.430223 containerd[1436]: 2025-01-17 12:20:38.416 [INFO][4385] cni-plugin/k8s.go 386: Populated endpoint ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g2d5f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4dfb0df8-6e80-4aa6-8102-624ee7561a47", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-g2d5f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cf8cf6ae20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:38.430223 containerd[1436]: 2025-01-17 12:20:38.416 [INFO][4385] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g2d5f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.430223 containerd[1436]: 2025-01-17 12:20:38.416 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cf8cf6ae20 ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g2d5f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.430223 containerd[1436]: 2025-01-17 12:20:38.420 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g2d5f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.430223 containerd[1436]: 2025-01-17 12:20:38.420 
[INFO][4385] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g2d5f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4dfb0df8-6e80-4aa6-8102-624ee7561a47", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f", Pod:"coredns-7db6d8ff4d-g2d5f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cf8cf6ae20", MAC:"42:6c:a0:5a:c8:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:38.430223 containerd[1436]: 2025-01-17 12:20:38.427 [INFO][4385] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g2d5f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:38.447275 containerd[1436]: time="2025-01-17T12:20:38.447004871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:38.447275 containerd[1436]: time="2025-01-17T12:20:38.447056151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:38.447275 containerd[1436]: time="2025-01-17T12:20:38.447071151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:38.447275 containerd[1436]: time="2025-01-17T12:20:38.447140231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:38.450054 systemd[1]: run-netns-cni\x2dbf9920c4\x2d51b9\x2d036e\x2d05e0\x2d8a3475430c8a.mount: Deactivated successfully. 
Jan 17 12:20:38.467023 systemd[1]: Started cri-containerd-122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f.scope - libcontainer container 122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f. Jan 17 12:20:38.477099 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:20:38.494685 containerd[1436]: time="2025-01-17T12:20:38.494509475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g2d5f,Uid:4dfb0df8-6e80-4aa6-8102-624ee7561a47,Namespace:kube-system,Attempt:1,} returns sandbox id \"122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f\"" Jan 17 12:20:38.495626 kubelet[2553]: E0117 12:20:38.495386 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:38.499543 containerd[1436]: time="2025-01-17T12:20:38.499338915Z" level=info msg="CreateContainer within sandbox \"122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:20:38.511608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount804848535.mount: Deactivated successfully. Jan 17 12:20:38.512167 containerd[1436]: time="2025-01-17T12:20:38.511954036Z" level=info msg="CreateContainer within sandbox \"122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6094cc910f93cef3fb6d4c7e65bae681608ed432e7e90718d6b6bb72dc2944f\"" Jan 17 12:20:38.513263 containerd[1436]: time="2025-01-17T12:20:38.513034596Z" level=info msg="StartContainer for \"a6094cc910f93cef3fb6d4c7e65bae681608ed432e7e90718d6b6bb72dc2944f\"" Jan 17 12:20:38.535991 systemd[1]: Started cri-containerd-a6094cc910f93cef3fb6d4c7e65bae681608ed432e7e90718d6b6bb72dc2944f.scope - libcontainer container a6094cc910f93cef3fb6d4c7e65bae681608ed432e7e90718d6b6bb72dc2944f. Jan 17 12:20:38.555741 containerd[1436]: time="2025-01-17T12:20:38.555687919Z" level=info msg="StartContainer for \"a6094cc910f93cef3fb6d4c7e65bae681608ed432e7e90718d6b6bb72dc2944f\" returns successfully" Jan 17 12:20:39.140389 systemd-networkd[1375]: caliddfa380f802: Gained IPv6LL Jan 17 12:20:39.236289 containerd[1436]: time="2025-01-17T12:20:39.236252046Z" level=info msg="StopPodSandbox for \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\"" Jan 17 12:20:39.238084 containerd[1436]: time="2025-01-17T12:20:39.237034486Z" level=info msg="StopPodSandbox for \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\"" Jan 17 12:20:39.238084 containerd[1436]: time="2025-01-17T12:20:39.237095566Z" level=info msg="StopPodSandbox for \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\"" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.312 [INFO][4560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.312 [INFO][4560] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" iface="eth0" netns="/var/run/netns/cni-8377228e-499b-7752-3df7-809ca81c83f2" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.312 [INFO][4560] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" iface="eth0" netns="/var/run/netns/cni-8377228e-499b-7752-3df7-809ca81c83f2" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.313 [INFO][4560] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" iface="eth0" netns="/var/run/netns/cni-8377228e-499b-7752-3df7-809ca81c83f2" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.313 [INFO][4560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.313 [INFO][4560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.337 [INFO][4583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" HandleID="k8s-pod-network.a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.337 [INFO][4583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.337 [INFO][4583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.351 [WARNING][4583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" HandleID="k8s-pod-network.a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.351 [INFO][4583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" HandleID="k8s-pod-network.a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.352 [INFO][4583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:39.357472 containerd[1436]: 2025-01-17 12:20:39.355 [INFO][4560] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:39.358378 containerd[1436]: time="2025-01-17T12:20:39.357700934Z" level=info msg="TearDown network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\" successfully" Jan 17 12:20:39.358378 containerd[1436]: time="2025-01-17T12:20:39.357739934Z" level=info msg="StopPodSandbox for \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\" returns successfully" Jan 17 12:20:39.358437 containerd[1436]: time="2025-01-17T12:20:39.358394415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc4c76444-htl4l,Uid:f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.307 [INFO][4549] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.308 [INFO][4549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" iface="eth0" netns="/var/run/netns/cni-c92bb285-3d82-e44f-d17b-076d6a580d4d" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.308 [INFO][4549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" iface="eth0" netns="/var/run/netns/cni-c92bb285-3d82-e44f-d17b-076d6a580d4d" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.309 [INFO][4549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" iface="eth0" netns="/var/run/netns/cni-c92bb285-3d82-e44f-d17b-076d6a580d4d" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.309 [INFO][4549] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.309 [INFO][4549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.344 [INFO][4575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" HandleID="k8s-pod-network.21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.345 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.353 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.365 [WARNING][4575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" HandleID="k8s-pod-network.21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.365 [INFO][4575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" HandleID="k8s-pod-network.21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.368 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:39.374138 containerd[1436]: 2025-01-17 12:20:39.370 [INFO][4549] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:39.374695 containerd[1436]: time="2025-01-17T12:20:39.374306096Z" level=info msg="TearDown network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\" successfully" Jan 17 12:20:39.374695 containerd[1436]: time="2025-01-17T12:20:39.374330536Z" level=info msg="StopPodSandbox for \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\" returns successfully" Jan 17 12:20:39.375345 containerd[1436]: time="2025-01-17T12:20:39.375121816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc4c76444-98hnf,Uid:f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.308 [INFO][4550] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.308 [INFO][4550] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" iface="eth0" netns="/var/run/netns/cni-fa1f8342-0859-e447-889f-a0953efd66c3" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.309 [INFO][4550] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" iface="eth0" netns="/var/run/netns/cni-fa1f8342-0859-e447-889f-a0953efd66c3" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.309 [INFO][4550] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" iface="eth0" netns="/var/run/netns/cni-fa1f8342-0859-e447-889f-a0953efd66c3" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.309 [INFO][4550] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.309 [INFO][4550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.350 [INFO][4573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" HandleID="k8s-pod-network.881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.351 [INFO][4573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.368 [INFO][4573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.379 [WARNING][4573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" HandleID="k8s-pod-network.881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.379 [INFO][4573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" HandleID="k8s-pod-network.881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.381 [INFO][4573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:39.386751 containerd[1436]: 2025-01-17 12:20:39.384 [INFO][4550] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:39.387357 containerd[1436]: time="2025-01-17T12:20:39.387329376Z" level=info msg="TearDown network for sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\" successfully" Jan 17 12:20:39.387520 containerd[1436]: time="2025-01-17T12:20:39.387399616Z" level=info msg="StopPodSandbox for \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\" returns successfully" Jan 17 12:20:39.387830 kubelet[2553]: E0117 12:20:39.387807 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:39.389379 containerd[1436]: time="2025-01-17T12:20:39.389042497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4x77z,Uid:8508a90b-1cd0-4874-986a-bb0a86ee7bc2,Namespace:kube-system,Attempt:1,}" Jan 17 12:20:39.393704 kubelet[2553]: E0117 12:20:39.393487 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:39.416309 kubelet[2553]: I0117 12:20:39.416245 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-g2d5f" podStartSLOduration=34.416156978 podStartE2EDuration="34.416156978s" podCreationTimestamp="2025-01-17 12:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:39.415841698 +0000 UTC m=+48.261902946" watchObservedRunningTime="2025-01-17 12:20:39.416156978 +0000 UTC m=+48.262218226" Jan 17 12:20:39.456978 systemd[1]: run-netns-cni\x2dc92bb285\x2d3d82\x2de44f\x2dd17b\x2d076d6a580d4d.mount: Deactivated successfully. Jan 17 12:20:39.457079 systemd[1]: run-netns-cni\x2d8377228e\x2d499b\x2d7752\x2d3df7\x2d809ca81c83f2.mount: Deactivated successfully. Jan 17 12:20:39.457154 systemd[1]: run-netns-cni\x2dfa1f8342\x2d0859\x2de447\x2d889f\x2da0953efd66c3.mount: Deactivated successfully. 
Jan 17 12:20:39.461083 systemd-networkd[1375]: cali13c5d7e8272: Gained IPv6LL Jan 17 12:20:39.613829 systemd-networkd[1375]: cali6816d2715a8: Link UP Jan 17 12:20:39.615206 systemd-networkd[1375]: cali6816d2715a8: Gained carrier Jan 17 12:20:39.619865 containerd[1436]: time="2025-01-17T12:20:39.617270312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:39.623595 containerd[1436]: time="2025-01-17T12:20:39.623456072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 17 12:20:39.626165 containerd[1436]: time="2025-01-17T12:20:39.625581712Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.514 [INFO][4597] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0 calico-apiserver-5bc4c76444- calico-apiserver f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0 995 0 2025-01-17 12:20:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bc4c76444 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bc4c76444-98hnf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6816d2715a8 [] []}} ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-98hnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.514 [INFO][4597] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-98hnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.553 [INFO][4648] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" HandleID="k8s-pod-network.2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.575 [INFO][4648] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" HandleID="k8s-pod-network.2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000333030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bc4c76444-98hnf", "timestamp":"2025-01-17 12:20:39.553817268 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 
12:20:39.575 [INFO][4648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.575 [INFO][4648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.575 [INFO][4648] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.577 [INFO][4648] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" host="localhost" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.582 [INFO][4648] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.587 [INFO][4648] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.592 [INFO][4648] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.595 [INFO][4648] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.595 [INFO][4648] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" host="localhost" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.597 [INFO][4648] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424 Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.601 [INFO][4648] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" host="localhost" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.607 [INFO][4648] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" host="localhost" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.607 [INFO][4648] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" host="localhost" Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.607 [INFO][4648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:20:39.629943 containerd[1436]: 2025-01-17 12:20:39.607 [INFO][4648] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" HandleID="k8s-pod-network.2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.630769 containerd[1436]: 2025-01-17 12:20:39.609 [INFO][4597] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-98hnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0", GenerateName:"calico-apiserver-5bc4c76444-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc4c76444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bc4c76444-98hnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6816d2715a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:39.630769 containerd[1436]: 2025-01-17 12:20:39.610 [INFO][4597] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-98hnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.630769 containerd[1436]: 2025-01-17 12:20:39.610 [INFO][4597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6816d2715a8 ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-98hnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.630769 containerd[1436]: 2025-01-17 12:20:39.616 [INFO][4597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-98hnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.630769 containerd[1436]: 2025-01-17 12:20:39.616 [INFO][4597] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-98hnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0", GenerateName:"calico-apiserver-5bc4c76444-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc4c76444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424", Pod:"calico-apiserver-5bc4c76444-98hnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6816d2715a8", MAC:"a6:23:1d:d3:15:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:39.630769 containerd[1436]: 2025-01-17 12:20:39.625 [INFO][4597] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-98hnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:39.631244 containerd[1436]: time="2025-01-17T12:20:39.631210673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:39.631863 containerd[1436]: time="2025-01-17T12:20:39.631822313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.966265579s" Jan 17 12:20:39.631929 containerd[1436]: time="2025-01-17T12:20:39.631866353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 17 12:20:39.634471 containerd[1436]: time="2025-01-17T12:20:39.634439313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:20:39.646512 containerd[1436]: time="2025-01-17T12:20:39.646353314Z" level=info msg="CreateContainer within sandbox \"2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:20:39.656998 systemd-networkd[1375]: cali40cdfe58837: Link UP Jan 17 12:20:39.657482 systemd-networkd[1375]: cali40cdfe58837: Gained carrier Jan 17 12:20:39.670633 containerd[1436]: time="2025-01-17T12:20:39.670530075Z" level=info msg="CreateContainer within sandbox \"2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\"" Jan 17 12:20:39.672029 containerd[1436]: time="2025-01-17T12:20:39.671975195Z" level=info msg="StartContainer for \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\"" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.520 [INFO][4611] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0 calico-apiserver-5bc4c76444- calico-apiserver f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9 996 0 2025-01-17 12:20:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bc4c76444 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bc4c76444-htl4l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali40cdfe58837 [] []}} ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-htl4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.520 [INFO][4611] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-htl4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.567 [INFO][4653] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" HandleID="k8s-pod-network.facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.579 [INFO][4653] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" HandleID="k8s-pod-network.facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002db600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bc4c76444-htl4l", "timestamp":"2025-01-17 12:20:39.567155148 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.579 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.607 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.607 [INFO][4653] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.610 [INFO][4653] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" host="localhost" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.616 [INFO][4653] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.623 [INFO][4653] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.626 [INFO][4653] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.632 [INFO][4653] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.633 [INFO][4653] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" host="localhost" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.635 [INFO][4653] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85 Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.641 [INFO][4653] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" host="localhost" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.649 [INFO][4653] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" host="localhost" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.649 [INFO][4653] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" host="localhost" Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.649 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:20:39.673389 containerd[1436]: 2025-01-17 12:20:39.649 [INFO][4653] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" HandleID="k8s-pod-network.facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.674620 containerd[1436]: 2025-01-17 12:20:39.653 [INFO][4611] cni-plugin/k8s.go 386: Populated endpoint ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-htl4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0", GenerateName:"calico-apiserver-5bc4c76444-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc4c76444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bc4c76444-htl4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40cdfe58837", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:39.674620 containerd[1436]: 2025-01-17 12:20:39.653 [INFO][4611] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-htl4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.674620 containerd[1436]: 2025-01-17 12:20:39.653 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40cdfe58837 ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-htl4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.674620 containerd[1436]: 2025-01-17 12:20:39.656 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-htl4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.674620 containerd[1436]: 2025-01-17 12:20:39.658 [INFO][4611] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-htl4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0", GenerateName:"calico-apiserver-5bc4c76444-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc4c76444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85", Pod:"calico-apiserver-5bc4c76444-htl4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40cdfe58837", MAC:"76:eb:da:23:f7:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:39.674620 containerd[1436]: 2025-01-17 12:20:39.669 [INFO][4611] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85" Namespace="calico-apiserver" Pod="calico-apiserver-5bc4c76444-htl4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:39.676494 containerd[1436]: time="2025-01-17T12:20:39.676406676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:39.676494 containerd[1436]: time="2025-01-17T12:20:39.676462796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:39.677143 containerd[1436]: time="2025-01-17T12:20:39.676478356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:39.680902 containerd[1436]: time="2025-01-17T12:20:39.680257316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:39.703569 systemd-networkd[1375]: cali69358c8815c: Link UP Jan 17 12:20:39.704546 systemd-networkd[1375]: cali69358c8815c: Gained carrier Jan 17 12:20:39.707042 containerd[1436]: time="2025-01-17T12:20:39.706747398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:39.707198 containerd[1436]: time="2025-01-17T12:20:39.706999438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:39.707198 containerd[1436]: time="2025-01-17T12:20:39.707036718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:39.707400 containerd[1436]: time="2025-01-17T12:20:39.707198198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.532 [INFO][4622] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0 coredns-7db6d8ff4d- kube-system 8508a90b-1cd0-4874-986a-bb0a86ee7bc2 994 0 2025-01-17 12:20:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-4x77z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69358c8815c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4x77z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4x77z-" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.532 [INFO][4622] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4x77z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.577 [INFO][4660] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" HandleID="k8s-pod-network.e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.591 [INFO][4660] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" HandleID="k8s-pod-network.e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ab030), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-4x77z", "timestamp":"2025-01-17 12:20:39.577596149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.591 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.649 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.649 [INFO][4660] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.655 [INFO][4660] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" host="localhost" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.667 [INFO][4660] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.676 [INFO][4660] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.679 [INFO][4660] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.682 [INFO][4660] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.682 [INFO][4660] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" host="localhost" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.684 [INFO][4660] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61 Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.688 [INFO][4660] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" host="localhost" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.696 [INFO][4660] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" host="localhost" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.697 [INFO][4660] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" host="localhost" Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.697 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:20:39.719968 containerd[1436]: 2025-01-17 12:20:39.697 [INFO][4660] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" HandleID="k8s-pod-network.e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.720775 containerd[1436]: 2025-01-17 12:20:39.701 [INFO][4622] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4x77z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8508a90b-1cd0-4874-986a-bb0a86ee7bc2", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-4x77z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69358c8815c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:39.720775 containerd[1436]: 2025-01-17 12:20:39.701 [INFO][4622] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4x77z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.720775 containerd[1436]: 2025-01-17 12:20:39.701 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69358c8815c ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4x77z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.720775 containerd[1436]: 2025-01-17 12:20:39.705 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4x77z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.720775 containerd[1436]: 2025-01-17 12:20:39.705 
[INFO][4622] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4x77z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8508a90b-1cd0-4874-986a-bb0a86ee7bc2", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61", Pod:"coredns-7db6d8ff4d-4x77z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69358c8815c", MAC:"b2:e5:37:e1:66:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:39.720775 containerd[1436]: 2025-01-17 12:20:39.715 [INFO][4622] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4x77z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:39.733049 systemd[1]: Started cri-containerd-2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424.scope - libcontainer container 2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424. Jan 17 12:20:39.734652 systemd[1]: Started cri-containerd-facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85.scope - libcontainer container facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85. Jan 17 12:20:39.738804 systemd[1]: Started cri-containerd-a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b.scope - libcontainer container a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b. Jan 17 12:20:39.753747 containerd[1436]: time="2025-01-17T12:20:39.751940241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:39.753747 containerd[1436]: time="2025-01-17T12:20:39.752468441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:39.753747 containerd[1436]: time="2025-01-17T12:20:39.752509161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:39.753747 containerd[1436]: time="2025-01-17T12:20:39.752612641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:39.754078 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:20:39.762971 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:20:39.774042 systemd[1]: Started cri-containerd-e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61.scope - libcontainer container e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61. Jan 17 12:20:39.778646 containerd[1436]: time="2025-01-17T12:20:39.778431883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc4c76444-98hnf,Uid:f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424\"" Jan 17 12:20:39.787878 containerd[1436]: time="2025-01-17T12:20:39.787579923Z" level=info msg="StartContainer for \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\" returns successfully" Jan 17 12:20:39.800534 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:20:39.802805 containerd[1436]: time="2025-01-17T12:20:39.802701484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bc4c76444-htl4l,Uid:f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85\"" Jan 17 12:20:39.830373 containerd[1436]: time="2025-01-17T12:20:39.830323046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4x77z,Uid:8508a90b-1cd0-4874-986a-bb0a86ee7bc2,Namespace:kube-system,Attempt:1,} returns sandbox id \"e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61\"" Jan 17 12:20:39.831648 kubelet[2553]: E0117 12:20:39.831587 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:39.833485 containerd[1436]: time="2025-01-17T12:20:39.833450886Z" level=info msg="CreateContainer within sandbox \"e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:20:39.849352 containerd[1436]: time="2025-01-17T12:20:39.849252287Z" level=info msg="CreateContainer within sandbox \"e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4eeb2506f3222a27b661a6e38998525eab1dcc4bfd67b4629360fe3c77738f50\"" Jan 17 12:20:39.850197 containerd[1436]: time="2025-01-17T12:20:39.849870727Z" level=info msg="StartContainer for \"4eeb2506f3222a27b661a6e38998525eab1dcc4bfd67b4629360fe3c77738f50\"" Jan 17 12:20:39.872025 systemd[1]: Started cri-containerd-4eeb2506f3222a27b661a6e38998525eab1dcc4bfd67b4629360fe3c77738f50.scope - libcontainer container 
4eeb2506f3222a27b661a6e38998525eab1dcc4bfd67b4629360fe3c77738f50. Jan 17 12:20:39.900100 containerd[1436]: time="2025-01-17T12:20:39.899973251Z" level=info msg="StartContainer for \"4eeb2506f3222a27b661a6e38998525eab1dcc4bfd67b4629360fe3c77738f50\" returns successfully" Jan 17 12:20:40.400795 kubelet[2553]: E0117 12:20:40.400764 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:40.401438 kubelet[2553]: E0117 12:20:40.401415 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:40.408718 kubelet[2553]: I0117 12:20:40.408547 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c77d568d8-sdh2z" podStartSLOduration=23.440044344 podStartE2EDuration="25.408531083s" podCreationTimestamp="2025-01-17 12:20:15 +0000 UTC" firstStartedPulling="2025-01-17 12:20:37.665241454 +0000 UTC m=+46.511302702" lastFinishedPulling="2025-01-17 12:20:39.633728193 +0000 UTC m=+48.479789441" observedRunningTime="2025-01-17 12:20:40.408206403 +0000 UTC m=+49.254267651" watchObservedRunningTime="2025-01-17 12:20:40.408531083 +0000 UTC m=+49.254592331" Jan 17 12:20:40.420148 systemd-networkd[1375]: cali3cf8cf6ae20: Gained IPv6LL Jan 17 12:20:40.422926 kubelet[2553]: I0117 12:20:40.422156 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4x77z" podStartSLOduration=35.422139284 podStartE2EDuration="35.422139284s" podCreationTimestamp="2025-01-17 12:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:40.421662684 +0000 UTC m=+49.267723932" watchObservedRunningTime="2025-01-17 12:20:40.422139284 +0000 UTC m=+49.268200492" Jan 17 12:20:40.707536 containerd[1436]: time="2025-01-17T12:20:40.707401461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:40.708646 containerd[1436]: time="2025-01-17T12:20:40.708442741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 17 12:20:40.709478 containerd[1436]: time="2025-01-17T12:20:40.709409462Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:40.711726 containerd[1436]: time="2025-01-17T12:20:40.711680742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:40.712664 containerd[1436]: time="2025-01-17T12:20:40.712498582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.078023709s" Jan 17 12:20:40.712664 containerd[1436]: time="2025-01-17T12:20:40.712529862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image 
reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 17 12:20:40.713523 containerd[1436]: time="2025-01-17T12:20:40.713497782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:20:40.714487 containerd[1436]: time="2025-01-17T12:20:40.714457182Z" level=info msg="CreateContainer within sandbox \"680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:20:40.728540 containerd[1436]: time="2025-01-17T12:20:40.728462983Z" level=info msg="CreateContainer within sandbox \"680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9df9c4a3abd05adad8533f2698c0d9086aecee2b0fea8776262b62cf38071811\"" Jan 17 12:20:40.729382 containerd[1436]: time="2025-01-17T12:20:40.728873263Z" level=info msg="StartContainer for \"9df9c4a3abd05adad8533f2698c0d9086aecee2b0fea8776262b62cf38071811\"" Jan 17 12:20:40.755030 systemd[1]: Started cri-containerd-9df9c4a3abd05adad8533f2698c0d9086aecee2b0fea8776262b62cf38071811.scope - libcontainer container 9df9c4a3abd05adad8533f2698c0d9086aecee2b0fea8776262b62cf38071811. Jan 17 12:20:40.779508 containerd[1436]: time="2025-01-17T12:20:40.779459186Z" level=info msg="StartContainer for \"9df9c4a3abd05adad8533f2698c0d9086aecee2b0fea8776262b62cf38071811\" returns successfully" Jan 17 12:20:40.804058 systemd-networkd[1375]: cali6816d2715a8: Gained IPv6LL Jan 17 12:20:40.804872 systemd-networkd[1375]: cali69358c8815c: Gained IPv6LL Jan 17 12:20:41.059946 systemd-networkd[1375]: cali40cdfe58837: Gained IPv6LL Jan 17 12:20:41.409707 kubelet[2553]: E0117 12:20:41.409668 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:41.410035 kubelet[2553]: E0117 12:20:41.409759 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:42.348452 containerd[1436]: time="2025-01-17T12:20:42.348234397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:42.349242 containerd[1436]: time="2025-01-17T12:20:42.349079957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 17 12:20:42.350007 containerd[1436]: time="2025-01-17T12:20:42.349974397Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:42.352627 containerd[1436]: time="2025-01-17T12:20:42.352590358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:42.353248 containerd[1436]: time="2025-01-17T12:20:42.353214558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 
1.639680416s" Jan 17 12:20:42.353320 containerd[1436]: time="2025-01-17T12:20:42.353251478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:20:42.354368 containerd[1436]: time="2025-01-17T12:20:42.354230118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:20:42.355485 containerd[1436]: time="2025-01-17T12:20:42.355457158Z" level=info msg="CreateContainer within sandbox \"2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:20:42.365508 containerd[1436]: time="2025-01-17T12:20:42.365465598Z" level=info msg="CreateContainer within sandbox \"2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"07ce00863f5a9c261b40f9cfc01f2797456d378c7ed20032a9677fa57f31d03f\"" Jan 17 12:20:42.365922 containerd[1436]: time="2025-01-17T12:20:42.365899878Z" level=info msg="StartContainer for \"07ce00863f5a9c261b40f9cfc01f2797456d378c7ed20032a9677fa57f31d03f\"" Jan 17 12:20:42.366426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860688510.mount: Deactivated successfully. Jan 17 12:20:42.394000 systemd[1]: Started cri-containerd-07ce00863f5a9c261b40f9cfc01f2797456d378c7ed20032a9677fa57f31d03f.scope - libcontainer container 07ce00863f5a9c261b40f9cfc01f2797456d378c7ed20032a9677fa57f31d03f. Jan 17 12:20:42.414899 kubelet[2553]: E0117 12:20:42.414630 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:42.420765 containerd[1436]: time="2025-01-17T12:20:42.420727801Z" level=info msg="StartContainer for \"07ce00863f5a9c261b40f9cfc01f2797456d378c7ed20032a9677fa57f31d03f\" returns successfully" Jan 17 12:20:42.570151 containerd[1436]: time="2025-01-17T12:20:42.569690330Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:42.570992 containerd[1436]: time="2025-01-17T12:20:42.570959050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:20:42.573408 containerd[1436]: time="2025-01-17T12:20:42.573368810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 219.090372ms" Jan 17 12:20:42.573519 containerd[1436]: time="2025-01-17T12:20:42.573502930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:20:42.574963 containerd[1436]: time="2025-01-17T12:20:42.574746010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:20:42.575614 containerd[1436]: time="2025-01-17T12:20:42.575586370Z" level=info msg="CreateContainer within sandbox \"facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 
12:20:42.588108 containerd[1436]: time="2025-01-17T12:20:42.587993771Z" level=info msg="CreateContainer within sandbox \"facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c7d5698d7b38e726364dc8716d438e120fb61df1f89f5316ab286677df983e01\"" Jan 17 12:20:42.589489 containerd[1436]: time="2025-01-17T12:20:42.588368691Z" level=info msg="StartContainer for \"c7d5698d7b38e726364dc8716d438e120fb61df1f89f5316ab286677df983e01\"" Jan 17 12:20:42.623085 systemd[1]: Started cri-containerd-c7d5698d7b38e726364dc8716d438e120fb61df1f89f5316ab286677df983e01.scope - libcontainer container c7d5698d7b38e726364dc8716d438e120fb61df1f89f5316ab286677df983e01. Jan 17 12:20:42.658014 containerd[1436]: time="2025-01-17T12:20:42.657974134Z" level=info msg="StartContainer for \"c7d5698d7b38e726364dc8716d438e120fb61df1f89f5316ab286677df983e01\" returns successfully" Jan 17 12:20:43.014769 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:53640.service - OpenSSH per-connection server daemon (10.0.0.1:53640). Jan 17 12:20:43.086119 sshd[5068]: Accepted publickey for core from 10.0.0.1 port 53640 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:43.089676 sshd[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:43.101554 systemd-logind[1420]: New session 14 of user core. Jan 17 12:20:43.113985 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:20:43.341821 sshd[5068]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:43.345805 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:20:43.345995 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:53640.service: Deactivated successfully. Jan 17 12:20:43.347534 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:20:43.349555 systemd-logind[1420]: Removed session 14. 
Jan 17 12:20:43.428530 kubelet[2553]: I0117 12:20:43.428453 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bc4c76444-htl4l" podStartSLOduration=25.658360489 podStartE2EDuration="28.428437975s" podCreationTimestamp="2025-01-17 12:20:15 +0000 UTC" firstStartedPulling="2025-01-17 12:20:39.804227924 +0000 UTC m=+48.650289172" lastFinishedPulling="2025-01-17 12:20:42.57430541 +0000 UTC m=+51.420366658" observedRunningTime="2025-01-17 12:20:43.427737615 +0000 UTC m=+52.273798903" watchObservedRunningTime="2025-01-17 12:20:43.428437975 +0000 UTC m=+52.274499223" Jan 17 12:20:43.440685 kubelet[2553]: I0117 12:20:43.440606 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bc4c76444-98hnf" podStartSLOduration=25.867149381 podStartE2EDuration="28.440585896s" podCreationTimestamp="2025-01-17 12:20:15 +0000 UTC" firstStartedPulling="2025-01-17 12:20:39.780676043 +0000 UTC m=+48.626737251" lastFinishedPulling="2025-01-17 12:20:42.354112518 +0000 UTC m=+51.200173766" observedRunningTime="2025-01-17 12:20:43.440258336 +0000 UTC m=+52.286319584" watchObservedRunningTime="2025-01-17 12:20:43.440585896 +0000 UTC m=+52.286647144" Jan 17 12:20:43.717341 containerd[1436]: time="2025-01-17T12:20:43.717290510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:43.718562 containerd[1436]: time="2025-01-17T12:20:43.718523470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 17 12:20:43.720157 containerd[1436]: time="2025-01-17T12:20:43.720122190Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:43.722303 containerd[1436]: time="2025-01-17T12:20:43.722268550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:43.723611 containerd[1436]: time="2025-01-17T12:20:43.723394230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.14861366s" Jan 17 12:20:43.723668 containerd[1436]: time="2025-01-17T12:20:43.723615750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 17 12:20:43.727089 containerd[1436]: time="2025-01-17T12:20:43.727041471Z" level=info msg="CreateContainer within sandbox \"680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:20:43.738611 containerd[1436]: time="2025-01-17T12:20:43.738558591Z" level=info msg="CreateContainer within sandbox \"680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns 
container id \"fffeb302515b904c77da8ab9e9b22a25e49dffa4a4dfb93580fe6096e877dfbd\"" Jan 17 12:20:43.739534 containerd[1436]: time="2025-01-17T12:20:43.739417231Z" level=info msg="StartContainer for \"fffeb302515b904c77da8ab9e9b22a25e49dffa4a4dfb93580fe6096e877dfbd\"" Jan 17 12:20:43.773213 systemd[1]: Started cri-containerd-fffeb302515b904c77da8ab9e9b22a25e49dffa4a4dfb93580fe6096e877dfbd.scope - libcontainer container fffeb302515b904c77da8ab9e9b22a25e49dffa4a4dfb93580fe6096e877dfbd. Jan 17 12:20:43.802113 containerd[1436]: time="2025-01-17T12:20:43.802069234Z" level=info msg="StartContainer for \"fffeb302515b904c77da8ab9e9b22a25e49dffa4a4dfb93580fe6096e877dfbd\" returns successfully" Jan 17 12:20:44.314585 kubelet[2553]: I0117 12:20:44.314518 2553 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:20:44.320202 kubelet[2553]: I0117 12:20:44.320176 2553 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:20:44.423785 kubelet[2553]: I0117 12:20:44.423762 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:20:44.424059 kubelet[2553]: I0117 12:20:44.423916 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:20:44.439593 kubelet[2553]: I0117 12:20:44.439521 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hjdnv" podStartSLOduration=23.390907651 podStartE2EDuration="29.439505546s" podCreationTimestamp="2025-01-17 12:20:15 +0000 UTC" firstStartedPulling="2025-01-17 12:20:37.675745255 +0000 UTC m=+46.521806463" lastFinishedPulling="2025-01-17 12:20:43.72434311 +0000 UTC m=+52.570404358" observedRunningTime="2025-01-17 12:20:44.438167586 +0000 UTC m=+53.284228834" watchObservedRunningTime="2025-01-17 12:20:44.439505546 +0000 UTC m=+53.285566794" Jan 17 12:20:46.899610 kubelet[2553]: E0117 12:20:46.899574 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:48.360449 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:53646.service - OpenSSH per-connection server daemon (10.0.0.1:53646). Jan 17 12:20:48.413221 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 53646 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:48.414675 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:48.418547 systemd-logind[1420]: New session 15 of user core. Jan 17 12:20:48.423992 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:20:48.598166 sshd[5158]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:48.601661 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:53646.service: Deactivated successfully. Jan 17 12:20:48.603447 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:20:48.604005 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:20:48.604798 systemd-logind[1420]: Removed session 15. 
Jan 17 12:20:51.223241 containerd[1436]: time="2025-01-17T12:20:51.222901332Z" level=info msg="StopPodSandbox for \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\"" Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.285 [WARNING][5197] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hjdnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"498cd002-4959-4e1e-94d0-79dfca8e8ebe", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3", Pod:"csi-node-driver-hjdnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13c5d7e8272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.286 [INFO][5197] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.286 [INFO][5197] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" iface="eth0" netns="" Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.286 [INFO][5197] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.286 [INFO][5197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.306 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" HandleID="k8s-pod-network.bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.306 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.307 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.315 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" HandleID="k8s-pod-network.bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.315 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" HandleID="k8s-pod-network.bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.317 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.320500 containerd[1436]: 2025-01-17 12:20:51.318 [INFO][5197] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:51.321057 containerd[1436]: time="2025-01-17T12:20:51.320534175Z" level=info msg="TearDown network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\" successfully" Jan 17 12:20:51.321057 containerd[1436]: time="2025-01-17T12:20:51.320558215Z" level=info msg="StopPodSandbox for \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\" returns successfully" Jan 17 12:20:51.321393 containerd[1436]: time="2025-01-17T12:20:51.321297215Z" level=info msg="RemovePodSandbox for \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\"" Jan 17 12:20:51.331710 containerd[1436]: time="2025-01-17T12:20:51.331659135Z" level=info msg="Forcibly stopping sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\"" Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.365 [WARNING][5228] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hjdnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"498cd002-4959-4e1e-94d0-79dfca8e8ebe", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"680b961849b3b79094fcea1a33aa1cc48da16d0402d871020ac33c6f2b4558e3", Pod:"csi-node-driver-hjdnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13c5d7e8272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.365 [INFO][5228] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.365 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" iface="eth0" netns="" Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.365 [INFO][5228] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.365 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.389 [INFO][5235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" HandleID="k8s-pod-network.bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.389 [INFO][5235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.389 [INFO][5235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.400 [WARNING][5235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" HandleID="k8s-pod-network.bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.400 [INFO][5235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" HandleID="k8s-pod-network.bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Workload="localhost-k8s-csi--node--driver--hjdnv-eth0" Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.401 [INFO][5235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.404918 containerd[1436]: 2025-01-17 12:20:51.403 [INFO][5228] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc" Jan 17 12:20:51.405981 containerd[1436]: time="2025-01-17T12:20:51.404897297Z" level=info msg="TearDown network for sandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\" successfully" Jan 17 12:20:51.435532 containerd[1436]: time="2025-01-17T12:20:51.435463258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:51.435670 containerd[1436]: time="2025-01-17T12:20:51.435570058Z" level=info msg="RemovePodSandbox \"bf602e29e3f68533d2a1301d89a7976f376d8f42d029becc99be0bf10b0a01cc\" returns successfully" Jan 17 12:20:51.436417 containerd[1436]: time="2025-01-17T12:20:51.436291578Z" level=info msg="StopPodSandbox for \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\"" Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.474 [WARNING][5260] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8508a90b-1cd0-4874-986a-bb0a86ee7bc2", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61", Pod:"coredns-7db6d8ff4d-4x77z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69358c8815c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.474 [INFO][5260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.474 [INFO][5260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" iface="eth0" netns="" Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.474 [INFO][5260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.474 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.499 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" HandleID="k8s-pod-network.881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.499 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.499 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.507 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" HandleID="k8s-pod-network.881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.507 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" HandleID="k8s-pod-network.881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.509 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.511962 containerd[1436]: 2025-01-17 12:20:51.510 [INFO][5260] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:51.511962 containerd[1436]: time="2025-01-17T12:20:51.511829221Z" level=info msg="TearDown network for sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\" successfully" Jan 17 12:20:51.511962 containerd[1436]: time="2025-01-17T12:20:51.511892981Z" level=info msg="StopPodSandbox for \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\" returns successfully" Jan 17 12:20:51.513247 containerd[1436]: time="2025-01-17T12:20:51.513088901Z" level=info msg="RemovePodSandbox for \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\"" Jan 17 12:20:51.513794 containerd[1436]: time="2025-01-17T12:20:51.513344901Z" level=info msg="Forcibly stopping sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\"" Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.548 [WARNING][5290] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8508a90b-1cd0-4874-986a-bb0a86ee7bc2", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e75b15bdf71fdc99cb50b433b7b2e5d8a715e4332ad517f3b115ec41174bfb61", Pod:"coredns-7db6d8ff4d-4x77z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69358c8815c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.549 [INFO][5290] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.549 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" iface="eth0" netns="" Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.549 [INFO][5290] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.549 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.569 [INFO][5297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" HandleID="k8s-pod-network.881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.570 [INFO][5297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.570 [INFO][5297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.578 [WARNING][5297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" HandleID="k8s-pod-network.881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.578 [INFO][5297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" HandleID="k8s-pod-network.881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Workload="localhost-k8s-coredns--7db6d8ff4d--4x77z-eth0" Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.580 [INFO][5297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.582954 containerd[1436]: 2025-01-17 12:20:51.581 [INFO][5290] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61" Jan 17 12:20:51.583366 containerd[1436]: time="2025-01-17T12:20:51.583004503Z" level=info msg="TearDown network for sandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\" successfully" Jan 17 12:20:51.586032 containerd[1436]: time="2025-01-17T12:20:51.585969663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:51.586118 containerd[1436]: time="2025-01-17T12:20:51.586057263Z" level=info msg="RemovePodSandbox \"881f1556f7cbe0b2b05b6cd295267f174f7fd333c8bbe3743ab6b36523805e61\" returns successfully" Jan 17 12:20:51.586588 containerd[1436]: time="2025-01-17T12:20:51.586497343Z" level=info msg="StopPodSandbox for \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\"" Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.623 [WARNING][5319] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0", GenerateName:"calico-kube-controllers-5c77d568d8-", Namespace:"calico-system", SelfLink:"", UID:"29d30ae5-d85e-4d42-ab11-9579ef57e019", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c77d568d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27", Pod:"calico-kube-controllers-5c77d568d8-sdh2z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliddfa380f802", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.623 [INFO][5319] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.623 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" iface="eth0" netns="" Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.623 [INFO][5319] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.623 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.642 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" HandleID="k8s-pod-network.ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.643 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.643 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.651 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" HandleID="k8s-pod-network.ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.651 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" HandleID="k8s-pod-network.ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.652 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.655735 containerd[1436]: 2025-01-17 12:20:51.654 [INFO][5319] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:51.655735 containerd[1436]: time="2025-01-17T12:20:51.655544465Z" level=info msg="TearDown network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\" successfully" Jan 17 12:20:51.655735 containerd[1436]: time="2025-01-17T12:20:51.655567545Z" level=info msg="StopPodSandbox for \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\" returns successfully" Jan 17 12:20:51.656300 containerd[1436]: time="2025-01-17T12:20:51.656124105Z" level=info msg="RemovePodSandbox for \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\"" Jan 17 12:20:51.656300 containerd[1436]: time="2025-01-17T12:20:51.656150185Z" level=info msg="Forcibly stopping sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\"" Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.690 [WARNING][5349] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0", GenerateName:"calico-kube-controllers-5c77d568d8-", Namespace:"calico-system", SelfLink:"", UID:"29d30ae5-d85e-4d42-ab11-9579ef57e019", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c77d568d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27", Pod:"calico-kube-controllers-5c77d568d8-sdh2z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliddfa380f802", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.690 [INFO][5349] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.690 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" iface="eth0" netns="" Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.690 [INFO][5349] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.690 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.709 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" HandleID="k8s-pod-network.ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.709 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.709 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.717 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" HandleID="k8s-pod-network.ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.717 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" HandleID="k8s-pod-network.ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.719 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.721980 containerd[1436]: 2025-01-17 12:20:51.720 [INFO][5349] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1" Jan 17 12:20:51.722475 containerd[1436]: time="2025-01-17T12:20:51.722010027Z" level=info msg="TearDown network for sandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\" successfully" Jan 17 12:20:51.724934 containerd[1436]: time="2025-01-17T12:20:51.724822587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:51.725033 containerd[1436]: time="2025-01-17T12:20:51.724992467Z" level=info msg="RemovePodSandbox \"ca3472202ee721c29a3fe28379dce18cdd2da28c4832bd1c2b0857b56a8fa4b1\" returns successfully" Jan 17 12:20:51.725590 containerd[1436]: time="2025-01-17T12:20:51.725564227Z" level=info msg="StopPodSandbox for \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\"" Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.761 [WARNING][5379] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0", GenerateName:"calico-apiserver-5bc4c76444-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc4c76444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85", Pod:"calico-apiserver-5bc4c76444-htl4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40cdfe58837", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.761 [INFO][5379] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.761 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" iface="eth0" netns="" Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.761 [INFO][5379] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.761 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.781 [INFO][5387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" HandleID="k8s-pod-network.a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.781 [INFO][5387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.781 [INFO][5387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.789 [WARNING][5387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" HandleID="k8s-pod-network.a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.789 [INFO][5387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" HandleID="k8s-pod-network.a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.791 [INFO][5387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.793956 containerd[1436]: 2025-01-17 12:20:51.792 [INFO][5379] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:51.793956 containerd[1436]: time="2025-01-17T12:20:51.793787989Z" level=info msg="TearDown network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\" successfully" Jan 17 12:20:51.793956 containerd[1436]: time="2025-01-17T12:20:51.793811869Z" level=info msg="StopPodSandbox for \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\" returns successfully" Jan 17 12:20:51.794557 containerd[1436]: time="2025-01-17T12:20:51.794321909Z" level=info msg="RemovePodSandbox for \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\"" Jan 17 12:20:51.794557 containerd[1436]: time="2025-01-17T12:20:51.794366749Z" level=info msg="Forcibly stopping sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\"" Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.828 [WARNING][5411] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0", GenerateName:"calico-apiserver-5bc4c76444-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5f42a5a-0912-41d4-9a34-4a6f23b3d2e9", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc4c76444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"facec8eaa1c610b70ae8929e70c7f88d1b19a7aaea7b09596b70812884145b85", Pod:"calico-apiserver-5bc4c76444-htl4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40cdfe58837", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.828 [INFO][5411] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.828 [INFO][5411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" iface="eth0" netns="" Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.828 [INFO][5411] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.828 [INFO][5411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.845 [INFO][5419] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" HandleID="k8s-pod-network.a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.846 [INFO][5419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.846 [INFO][5419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.854 [WARNING][5419] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" HandleID="k8s-pod-network.a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.854 [INFO][5419] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" HandleID="k8s-pod-network.a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Workload="localhost-k8s-calico--apiserver--5bc4c76444--htl4l-eth0" Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.855 [INFO][5419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.858768 containerd[1436]: 2025-01-17 12:20:51.857 [INFO][5411] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3" Jan 17 12:20:51.859227 containerd[1436]: time="2025-01-17T12:20:51.858806031Z" level=info msg="TearDown network for sandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\" successfully" Jan 17 12:20:51.861405 containerd[1436]: time="2025-01-17T12:20:51.861358111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:51.861495 containerd[1436]: time="2025-01-17T12:20:51.861413271Z" level=info msg="RemovePodSandbox \"a678259acaf499ebbb617f6fbfef55d35fd3d58f02dac7af815d409510622fe3\" returns successfully" Jan 17 12:20:51.861948 containerd[1436]: time="2025-01-17T12:20:51.861913511Z" level=info msg="StopPodSandbox for \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\"" Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.894 [WARNING][5442] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0", GenerateName:"calico-apiserver-5bc4c76444-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc4c76444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424", Pod:"calico-apiserver-5bc4c76444-98hnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6816d2715a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.894 [INFO][5442] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.894 [INFO][5442] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" iface="eth0" netns="" Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.894 [INFO][5442] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.894 [INFO][5442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.911 [INFO][5449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" HandleID="k8s-pod-network.21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.912 [INFO][5449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.912 [INFO][5449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.919 [WARNING][5449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" HandleID="k8s-pod-network.21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.919 [INFO][5449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" HandleID="k8s-pod-network.21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.921 [INFO][5449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.923731 containerd[1436]: 2025-01-17 12:20:51.922 [INFO][5442] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:51.924189 containerd[1436]: time="2025-01-17T12:20:51.923757273Z" level=info msg="TearDown network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\" successfully" Jan 17 12:20:51.924189 containerd[1436]: time="2025-01-17T12:20:51.923780433Z" level=info msg="StopPodSandbox for \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\" returns successfully" Jan 17 12:20:51.924331 containerd[1436]: time="2025-01-17T12:20:51.924288833Z" level=info msg="RemovePodSandbox for \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\"" Jan 17 12:20:51.924368 containerd[1436]: time="2025-01-17T12:20:51.924325553Z" level=info msg="Forcibly stopping sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\"" Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.958 [WARNING][5472] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0", GenerateName:"calico-apiserver-5bc4c76444-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6b4b3f1-b354-4daf-b595-1f9fa5ab20a0", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bc4c76444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2074ad56da4b902925f19a13669a21cccbad8d39a81903bf83a0ac4978792424", Pod:"calico-apiserver-5bc4c76444-98hnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6816d2715a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.958 [INFO][5472] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.958 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" iface="eth0" netns="" Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.958 [INFO][5472] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.958 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.977 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" HandleID="k8s-pod-network.21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.977 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.977 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.985 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" HandleID="k8s-pod-network.21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.985 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" HandleID="k8s-pod-network.21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Workload="localhost-k8s-calico--apiserver--5bc4c76444--98hnf-eth0" Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.986 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:51.989188 containerd[1436]: 2025-01-17 12:20:51.987 [INFO][5472] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1" Jan 17 12:20:51.989579 containerd[1436]: time="2025-01-17T12:20:51.989193915Z" level=info msg="TearDown network for sandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\" successfully" Jan 17 12:20:52.000177 containerd[1436]: time="2025-01-17T12:20:52.000135876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:52.000282 containerd[1436]: time="2025-01-17T12:20:52.000201996Z" level=info msg="RemovePodSandbox \"21f70401dca4e360360e80ea1c23fd07fd7d0525a4e27937b540fff293f075a1\" returns successfully" Jan 17 12:20:52.000823 containerd[1436]: time="2025-01-17T12:20:52.000736556Z" level=info msg="StopPodSandbox for \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\"" Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.033 [WARNING][5501] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4dfb0df8-6e80-4aa6-8102-624ee7561a47", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f", Pod:"coredns-7db6d8ff4d-g2d5f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cf8cf6ae20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.033 [INFO][5501] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.033 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" iface="eth0" netns="" Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.033 [INFO][5501] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.033 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.051 [INFO][5508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" HandleID="k8s-pod-network.648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.051 [INFO][5508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.051 [INFO][5508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.060 [WARNING][5508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" HandleID="k8s-pod-network.648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.061 [INFO][5508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" HandleID="k8s-pod-network.648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.062 [INFO][5508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:52.065792 containerd[1436]: 2025-01-17 12:20:52.064 [INFO][5501] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:52.065792 containerd[1436]: time="2025-01-17T12:20:52.065512718Z" level=info msg="TearDown network for sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\" successfully" Jan 17 12:20:52.065792 containerd[1436]: time="2025-01-17T12:20:52.065536598Z" level=info msg="StopPodSandbox for \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\" returns successfully" Jan 17 12:20:52.066801 containerd[1436]: time="2025-01-17T12:20:52.066622358Z" level=info msg="RemovePodSandbox for \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\"" Jan 17 12:20:52.066801 containerd[1436]: time="2025-01-17T12:20:52.066653198Z" level=info msg="Forcibly stopping sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\"" Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.102 [WARNING][5530] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4dfb0df8-6e80-4aa6-8102-624ee7561a47", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"122192b233e370147aca5ea6e10440ef9bdc1958837d6b2274cf190ef970864f", Pod:"coredns-7db6d8ff4d-g2d5f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cf8cf6ae20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.102 [INFO][5530] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.102 [INFO][5530] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" iface="eth0" netns="" Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.102 [INFO][5530] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.102 [INFO][5530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.121 [INFO][5538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" HandleID="k8s-pod-network.648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.121 [INFO][5538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.122 [INFO][5538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.129 [WARNING][5538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" HandleID="k8s-pod-network.648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.130 [INFO][5538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" HandleID="k8s-pod-network.648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Workload="localhost-k8s-coredns--7db6d8ff4d--g2d5f-eth0" Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.131 [INFO][5538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:52.134022 containerd[1436]: 2025-01-17 12:20:52.132 [INFO][5530] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47" Jan 17 12:20:52.134427 containerd[1436]: time="2025-01-17T12:20:52.134054920Z" level=info msg="TearDown network for sandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\" successfully" Jan 17 12:20:52.136788 containerd[1436]: time="2025-01-17T12:20:52.136741400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:52.136876 containerd[1436]: time="2025-01-17T12:20:52.136796920Z" level=info msg="RemovePodSandbox \"648e3b5d6f1c1c876702192d47b3bcff3e44073626f1938e085aba9826d00c47\" returns successfully" Jan 17 12:20:53.132415 systemd[1]: run-containerd-runc-k8s.io-a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b-runc.AwUe5r.mount: Deactivated successfully. Jan 17 12:20:53.614346 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:56446.service - OpenSSH per-connection server daemon (10.0.0.1:56446). Jan 17 12:20:53.677501 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 56446 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:53.680822 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:53.684675 systemd-logind[1420]: New session 16 of user core. Jan 17 12:20:53.700989 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:20:53.864707 sshd[5587]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:53.874413 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:56446.service: Deactivated successfully. Jan 17 12:20:53.876093 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:20:53.877903 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:20:53.888133 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:56462.service - OpenSSH per-connection server daemon (10.0.0.1:56462). Jan 17 12:20:53.889072 systemd-logind[1420]: Removed session 16. Jan 17 12:20:53.918177 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 56462 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:53.919472 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:53.923335 systemd-logind[1420]: New session 17 of user core. 
Jan 17 12:20:53.937022 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:20:54.167923 sshd[5602]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:54.177367 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:56462.service: Deactivated successfully. Jan 17 12:20:54.179244 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:20:54.180526 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:20:54.188130 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:56470.service - OpenSSH per-connection server daemon (10.0.0.1:56470). Jan 17 12:20:54.189051 systemd-logind[1420]: Removed session 17. Jan 17 12:20:54.220654 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 56470 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:54.221820 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:54.225786 systemd-logind[1420]: New session 18 of user core. Jan 17 12:20:54.239006 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:20:55.676622 sshd[5614]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:55.685427 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:56470.service: Deactivated successfully. Jan 17 12:20:55.688300 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:20:55.690881 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:20:55.699319 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:56486.service - OpenSSH per-connection server daemon (10.0.0.1:56486). Jan 17 12:20:55.701034 systemd-logind[1420]: Removed session 18. Jan 17 12:20:55.746880 sshd[5636]: Accepted publickey for core from 10.0.0.1 port 56486 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:55.748314 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:55.752114 systemd-logind[1420]: New session 19 of user core. Jan 17 12:20:55.763997 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:20:56.029917 sshd[5636]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:56.036289 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:56486.service: Deactivated successfully. Jan 17 12:20:56.039093 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:20:56.040723 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:20:56.047322 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:56494.service - OpenSSH per-connection server daemon (10.0.0.1:56494). Jan 17 12:20:56.049056 systemd-logind[1420]: Removed session 19. Jan 17 12:20:56.080501 sshd[5649]: Accepted publickey for core from 10.0.0.1 port 56494 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:20:56.081775 sshd[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:56.086467 systemd-logind[1420]: New session 20 of user core. Jan 17 12:20:56.094988 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:20:56.265088 sshd[5649]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:56.268400 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:56494.service: Deactivated successfully. Jan 17 12:20:56.270078 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:20:56.270826 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:20:56.272134 systemd-logind[1420]: Removed session 20. 
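The block above is a burst of short SSH sessions from 10.0.0.1, each accepted with the same RSA key, given a logind session number (16 through 20), and closed within seconds. When auditing a journal like this it helps to tally the opened and closed pam_unix events; the sketch below does that with the standard library and assumes the log is piped on stdin in exactly this wrapped format.

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
)

// The patterns match the pam_unix lines above; they are an assumption about
// this particular log format, not a general journald parser.
var (
    openedRe = regexp.MustCompile(`pam_unix\(sshd:session\): session opened for user (\w+)`)
    closedRe = regexp.MustCompile(`pam_unix\(sshd:session\): session closed for user (\w+)`)
)

func main() {
    nOpened, nClosed := 0, 0
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 1024*1024), 1024*1024) // the wrapped journal lines here are very long
    for sc.Scan() {
        line := sc.Text()
        nOpened += len(openedRe.FindAllString(line, -1))
        nClosed += len(closedRe.FindAllString(line, -1))
    }
    fmt.Printf("sessions opened: %d, closed: %d, still open: %d\n", nOpened, nClosed, nOpened-nClosed)
}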
Jan 17 12:21:01.285093 systemd[1]: Started sshd@20-10.0.0.124:22-10.0.0.1:56510.service - OpenSSH per-connection server daemon (10.0.0.1:56510). Jan 17 12:21:01.314931 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 56510 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:21:01.316147 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:01.320140 systemd-logind[1420]: New session 21 of user core. Jan 17 12:21:01.328059 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:21:01.448365 sshd[5669]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:01.451711 systemd[1]: sshd@20-10.0.0.124:22-10.0.0.1:56510.service: Deactivated successfully. Jan 17 12:21:01.453515 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:21:01.454115 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:21:01.455083 systemd-logind[1420]: Removed session 21. Jan 17 12:21:04.236746 kubelet[2553]: I0117 12:21:04.236698 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:21:05.296995 kubelet[2553]: I0117 12:21:05.296944 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:21:06.458408 systemd[1]: Started sshd@21-10.0.0.124:22-10.0.0.1:35980.service - OpenSSH per-connection server daemon (10.0.0.1:35980). Jan 17 12:21:06.492191 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 35980 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:21:06.493422 sshd[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:06.497381 systemd-logind[1420]: New session 22 of user core. Jan 17 12:21:06.507982 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:21:06.614435 sshd[5690]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:06.617461 systemd[1]: sshd@21-10.0.0.124:22-10.0.0.1:35980.service: Deactivated successfully. Jan 17 12:21:06.620254 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:21:06.620925 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:21:06.621887 systemd-logind[1420]: Removed session 22. Jan 17 12:21:10.764926 containerd[1436]: time="2025-01-17T12:21:10.764831885Z" level=info msg="StopContainer for \"b558568cafed8545825fde9c14ee5c18f09e21d7e84fbf5794a122643fe956b9\" with timeout 300 (s)" Jan 17 12:21:10.766201 containerd[1436]: time="2025-01-17T12:21:10.765401568Z" level=info msg="Stop container \"b558568cafed8545825fde9c14ee5c18f09e21d7e84fbf5794a122643fe956b9\" with signal terminated" Jan 17 12:21:10.842456 containerd[1436]: time="2025-01-17T12:21:10.842413681Z" level=info msg="StopContainer for \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\" with timeout 30 (s)" Jan 17 12:21:10.842799 containerd[1436]: time="2025-01-17T12:21:10.842776403Z" level=info msg="Stop container \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\" with signal terminated" Jan 17 12:21:10.856592 systemd[1]: cri-containerd-a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b.scope: Deactivated successfully. Jan 17 12:21:10.883094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b-rootfs.mount: Deactivated successfully. 
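The entries above show containerd stopping containers with a grace period ("with timeout 300 (s)", "with signal terminated"): SIGTERM first, escalation if the process outlives the timeout. The sketch below demonstrates that generic pattern on an ordinary child process; it illustrates the idea and is not containerd's implementation.

package main

import (
    "fmt"
    "os/exec"
    "syscall"
    "time"
)

// stopWithTimeout sends SIGTERM, waits up to the grace period, then kills the
// process, roughly the sequence behind the StopContainer entries above.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
    if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
        return err
    }
    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()
    select {
    case err := <-done:
        return err // exited within the grace period
    case <-time.After(grace):
        _ = cmd.Process.Kill() // escalate once the timeout expires
        return <-done
    }
}

func main() {
    cmd := exec.Command("sleep", "600")
    if err := cmd.Start(); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(stopWithTimeout(cmd, 2*time.Second))
}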
Jan 17 12:21:10.887537 containerd[1436]: time="2025-01-17T12:21:10.881448200Z" level=info msg="shim disconnected" id=a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b namespace=k8s.io Jan 17 12:21:10.887654 containerd[1436]: time="2025-01-17T12:21:10.887540425Z" level=warning msg="cleaning up after shim disconnected" id=a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b namespace=k8s.io Jan 17 12:21:10.887654 containerd[1436]: time="2025-01-17T12:21:10.887560585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:21:10.914731 containerd[1436]: time="2025-01-17T12:21:10.914535055Z" level=info msg="StopContainer for \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\" returns successfully" Jan 17 12:21:10.915296 containerd[1436]: time="2025-01-17T12:21:10.915269738Z" level=info msg="StopPodSandbox for \"2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27\"" Jan 17 12:21:10.915343 containerd[1436]: time="2025-01-17T12:21:10.915306058Z" level=info msg="Container to stop \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:21:10.918354 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27-shm.mount: Deactivated successfully. Jan 17 12:21:10.924075 systemd[1]: cri-containerd-2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27.scope: Deactivated successfully. Jan 17 12:21:10.952650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27-rootfs.mount: Deactivated successfully. Jan 17 12:21:10.953417 containerd[1436]: time="2025-01-17T12:21:10.953349693Z" level=info msg="shim disconnected" id=2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27 namespace=k8s.io Jan 17 12:21:10.953417 containerd[1436]: time="2025-01-17T12:21:10.953414934Z" level=warning msg="cleaning up after shim disconnected" id=2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27 namespace=k8s.io Jan 17 12:21:10.953510 containerd[1436]: time="2025-01-17T12:21:10.953423494Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:21:11.035116 systemd-networkd[1375]: caliddfa380f802: Link DOWN Jan 17 12:21:11.035122 systemd-networkd[1375]: caliddfa380f802: Lost carrier Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.033 [INFO][5795] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.033 [INFO][5795] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" iface="eth0" netns="/var/run/netns/cni-b8ca3f91-9ea1-000a-f66e-1869ab21308d" Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.033 [INFO][5795] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" iface="eth0" netns="/var/run/netns/cni-b8ca3f91-9ea1-000a-f66e-1869ab21308d" Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.055 [INFO][5795] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" after=21.690286ms iface="eth0" netns="/var/run/netns/cni-b8ca3f91-9ea1-000a-f66e-1869ab21308d" Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.055 [INFO][5795] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.055 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.087 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" HandleID="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.087 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.087 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.138 [INFO][5804] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" HandleID="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.139 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" HandleID="k8s-pod-network.2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Workload="localhost-k8s-calico--kube--controllers--5c77d568d8--sdh2z-eth0" Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.140 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:21:11.144171 containerd[1436]: 2025-01-17 12:21:11.142 [INFO][5795] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27" Jan 17 12:21:11.145325 containerd[1436]: time="2025-01-17T12:21:11.144416016Z" level=info msg="TearDown network for sandbox \"2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27\" successfully" Jan 17 12:21:11.145325 containerd[1436]: time="2025-01-17T12:21:11.144443177Z" level=info msg="StopPodSandbox for \"2d7239e94bd23bcdd7e9ae7f4379c9611f9fe47c0cb4ae0263e4765522b58b27\" returns successfully" Jan 17 12:21:11.146535 systemd[1]: run-netns-cni\x2db8ca3f91\x2d9ea1\x2d000a\x2df66e\x2d1869ab21308d.mount: Deactivated successfully. 
Jan 17 12:21:11.247829 kubelet[2553]: I0117 12:21:11.247692 2553 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlj92\" (UniqueName: \"kubernetes.io/projected/29d30ae5-d85e-4d42-ab11-9579ef57e019-kube-api-access-zlj92\") pod \"29d30ae5-d85e-4d42-ab11-9579ef57e019\" (UID: \"29d30ae5-d85e-4d42-ab11-9579ef57e019\") " Jan 17 12:21:11.247829 kubelet[2553]: I0117 12:21:11.247771 2553 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29d30ae5-d85e-4d42-ab11-9579ef57e019-tigera-ca-bundle\") pod \"29d30ae5-d85e-4d42-ab11-9579ef57e019\" (UID: \"29d30ae5-d85e-4d42-ab11-9579ef57e019\") " Jan 17 12:21:11.254681 systemd[1]: var-lib-kubelet-pods-29d30ae5\x2dd85e\x2d4d42\x2dab11\x2d9579ef57e019-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzlj92.mount: Deactivated successfully. Jan 17 12:21:11.256813 kubelet[2553]: I0117 12:21:11.256778 2553 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d30ae5-d85e-4d42-ab11-9579ef57e019-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "29d30ae5-d85e-4d42-ab11-9579ef57e019" (UID: "29d30ae5-d85e-4d42-ab11-9579ef57e019"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:21:11.256922 kubelet[2553]: I0117 12:21:11.256842 2553 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d30ae5-d85e-4d42-ab11-9579ef57e019-kube-api-access-zlj92" (OuterVolumeSpecName: "kube-api-access-zlj92") pod "29d30ae5-d85e-4d42-ab11-9579ef57e019" (UID: "29d30ae5-d85e-4d42-ab11-9579ef57e019"). InnerVolumeSpecName "kube-api-access-zlj92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:21:11.348889 kubelet[2553]: I0117 12:21:11.348745 2553 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zlj92\" (UniqueName: \"kubernetes.io/projected/29d30ae5-d85e-4d42-ab11-9579ef57e019-kube-api-access-zlj92\") on node \"localhost\" DevicePath \"\"" Jan 17 12:21:11.348889 kubelet[2553]: I0117 12:21:11.348778 2553 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29d30ae5-d85e-4d42-ab11-9579ef57e019-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 17 12:21:11.489869 kubelet[2553]: I0117 12:21:11.487501 2553 scope.go:117] "RemoveContainer" containerID="a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b" Jan 17 12:21:11.492000 containerd[1436]: time="2025-01-17T12:21:11.491823435Z" level=info msg="RemoveContainer for \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\"" Jan 17 12:21:11.493678 systemd[1]: Removed slice kubepods-besteffort-pod29d30ae5_d85e_4d42_ab11_9579ef57e019.slice - libcontainer container kubepods-besteffort-pod29d30ae5_d85e_4d42_ab11_9579ef57e019.slice. 
Jan 17 12:21:11.497682 containerd[1436]: time="2025-01-17T12:21:11.497643418Z" level=info msg="RemoveContainer for \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\" returns successfully" Jan 17 12:21:11.498334 kubelet[2553]: I0117 12:21:11.497951 2553 scope.go:117] "RemoveContainer" containerID="a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b" Jan 17 12:21:11.506237 containerd[1436]: time="2025-01-17T12:21:11.506176052Z" level=error msg="ContainerStatus for \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\": not found" Jan 17 12:21:11.506601 kubelet[2553]: E0117 12:21:11.506497 2553 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\": not found" containerID="a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b" Jan 17 12:21:11.506601 kubelet[2553]: I0117 12:21:11.506570 2553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b"} err="failed to get container status \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a60bdc94b9c634a3cc4b94895da0dbd8631094ecccb53f3afded448ad0a3712b\": not found" Jan 17 12:21:11.527203 kubelet[2553]: I0117 12:21:11.527156 2553 topology_manager.go:215] "Topology Admit Handler" podUID="555679b9-091d-4e97-ba7f-22385a5f0537" podNamespace="calico-system" podName="calico-kube-controllers-59d9bbdf4b-9kfjl" Jan 17 12:21:11.527884 kubelet[2553]: E0117 12:21:11.527783 2553 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29d30ae5-d85e-4d42-ab11-9579ef57e019" containerName="calico-kube-controllers" Jan 17 12:21:11.527884 kubelet[2553]: I0117 12:21:11.527837 2553 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d30ae5-d85e-4d42-ab11-9579ef57e019" containerName="calico-kube-controllers" Jan 17 12:21:11.539251 systemd[1]: Created slice kubepods-besteffort-pod555679b9_091d_4e97_ba7f_22385a5f0537.slice - libcontainer container kubepods-besteffort-pod555679b9_091d_4e97_ba7f_22385a5f0537.slice. Jan 17 12:21:11.627674 systemd[1]: Started sshd@22-10.0.0.124:22-10.0.0.1:35992.service - OpenSSH per-connection server daemon (10.0.0.1:35992). 
Jan 17 12:21:11.651027 kubelet[2553]: I0117 12:21:11.650974 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t655\" (UniqueName: \"kubernetes.io/projected/555679b9-091d-4e97-ba7f-22385a5f0537-kube-api-access-8t655\") pod \"calico-kube-controllers-59d9bbdf4b-9kfjl\" (UID: \"555679b9-091d-4e97-ba7f-22385a5f0537\") " pod="calico-system/calico-kube-controllers-59d9bbdf4b-9kfjl" Jan 17 12:21:11.651159 kubelet[2553]: I0117 12:21:11.651052 2553 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/555679b9-091d-4e97-ba7f-22385a5f0537-tigera-ca-bundle\") pod \"calico-kube-controllers-59d9bbdf4b-9kfjl\" (UID: \"555679b9-091d-4e97-ba7f-22385a5f0537\") " pod="calico-system/calico-kube-controllers-59d9bbdf4b-9kfjl" Jan 17 12:21:11.682626 sshd[5820]: Accepted publickey for core from 10.0.0.1 port 35992 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:21:11.684189 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:11.687602 systemd-logind[1420]: New session 23 of user core. Jan 17 12:21:11.701022 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:21:11.825334 sshd[5820]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:11.829152 systemd[1]: sshd@22-10.0.0.124:22-10.0.0.1:35992.service: Deactivated successfully. Jan 17 12:21:11.831910 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:21:11.832483 systemd-logind[1420]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:21:11.833314 systemd-logind[1420]: Removed session 23. Jan 17 12:21:11.842751 containerd[1436]: time="2025-01-17T12:21:11.842714747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d9bbdf4b-9kfjl,Uid:555679b9-091d-4e97-ba7f-22385a5f0537,Namespace:calico-system,Attempt:0,}" Jan 17 12:21:11.886240 systemd[1]: var-lib-kubelet-pods-29d30ae5\x2dd85e\x2d4d42\x2dab11\x2d9579ef57e019-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. 
Jan 17 12:21:11.964106 systemd-networkd[1375]: cali84a4481bf97: Link UP Jan 17 12:21:11.964309 systemd-networkd[1375]: cali84a4481bf97: Gained carrier Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.885 [INFO][5843] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0 calico-kube-controllers-59d9bbdf4b- calico-system 555679b9-091d-4e97-ba7f-22385a5f0537 1331 0 2025-01-17 12:21:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59d9bbdf4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59d9bbdf4b-9kfjl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali84a4481bf97 [] []}} ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Namespace="calico-system" Pod="calico-kube-controllers-59d9bbdf4b-9kfjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.885 [INFO][5843] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Namespace="calico-system" Pod="calico-kube-controllers-59d9bbdf4b-9kfjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.913 [INFO][5856] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" HandleID="k8s-pod-network.08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Workload="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.925 [INFO][5856] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" HandleID="k8s-pod-network.08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Workload="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59d9bbdf4b-9kfjl", "timestamp":"2025-01-17 12:21:11.913284467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.925 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.925 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.925 [INFO][5856] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.926 [INFO][5856] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" host="localhost" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.932 [INFO][5856] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.937 [INFO][5856] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.939 [INFO][5856] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.941 [INFO][5856] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.941 [INFO][5856] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" host="localhost" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.942 [INFO][5856] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332 Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.948 [INFO][5856] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" host="localhost" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.959 [INFO][5856] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" host="localhost" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.959 [INFO][5856] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" host="localhost" Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.959 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:21:11.973293 containerd[1436]: 2025-01-17 12:21:11.959 [INFO][5856] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" HandleID="k8s-pod-network.08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Workload="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" Jan 17 12:21:11.973795 containerd[1436]: 2025-01-17 12:21:11.961 [INFO][5843] cni-plugin/k8s.go 386: Populated endpoint ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Namespace="calico-system" Pod="calico-kube-controllers-59d9bbdf4b-9kfjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0", GenerateName:"calico-kube-controllers-59d9bbdf4b-", Namespace:"calico-system", SelfLink:"", UID:"555679b9-091d-4e97-ba7f-22385a5f0537", ResourceVersion:"1331", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59d9bbdf4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59d9bbdf4b-9kfjl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84a4481bf97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:21:11.973795 containerd[1436]: 2025-01-17 12:21:11.961 [INFO][5843] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Namespace="calico-system" Pod="calico-kube-controllers-59d9bbdf4b-9kfjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" Jan 17 12:21:11.973795 containerd[1436]: 2025-01-17 12:21:11.961 [INFO][5843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84a4481bf97 ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Namespace="calico-system" Pod="calico-kube-controllers-59d9bbdf4b-9kfjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" Jan 17 12:21:11.973795 containerd[1436]: 2025-01-17 12:21:11.964 [INFO][5843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Namespace="calico-system" Pod="calico-kube-controllers-59d9bbdf4b-9kfjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" Jan 17 12:21:11.973795 containerd[1436]: 2025-01-17 12:21:11.964 [INFO][5843] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Namespace="calico-system" Pod="calico-kube-controllers-59d9bbdf4b-9kfjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0", GenerateName:"calico-kube-controllers-59d9bbdf4b-", Namespace:"calico-system", SelfLink:"", UID:"555679b9-091d-4e97-ba7f-22385a5f0537", ResourceVersion:"1331", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59d9bbdf4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332", Pod:"calico-kube-controllers-59d9bbdf4b-9kfjl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84a4481bf97", MAC:"4a:66:bf:47:41:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:21:11.973795 containerd[1436]: 2025-01-17 12:21:11.971 [INFO][5843] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332" Namespace="calico-system" Pod="calico-kube-controllers-59d9bbdf4b-9kfjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d9bbdf4b--9kfjl-eth0" Jan 17 12:21:11.990188 containerd[1436]: time="2025-01-17T12:21:11.990103132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:11.990296 containerd[1436]: time="2025-01-17T12:21:11.990156092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:11.990296 containerd[1436]: time="2025-01-17T12:21:11.990207652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:11.990340 containerd[1436]: time="2025-01-17T12:21:11.990309852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:12.016041 systemd[1]: Started cri-containerd-08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332.scope - libcontainer container 08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332. 
Jan 17 12:21:12.026742 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:21:12.043238 containerd[1436]: time="2025-01-17T12:21:12.043204378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d9bbdf4b-9kfjl,Uid:555679b9-091d-4e97-ba7f-22385a5f0537,Namespace:calico-system,Attempt:0,} returns sandbox id \"08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332\"" Jan 17 12:21:12.052853 containerd[1436]: time="2025-01-17T12:21:12.052809055Z" level=info msg="CreateContainer within sandbox \"08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:21:12.061449 containerd[1436]: time="2025-01-17T12:21:12.061364848Z" level=info msg="CreateContainer within sandbox \"08911888e2ce57aaea14586a5c135e132f8861071928ff3f757dbda8ac525332\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"298a5a033057f332936eaae54d8547260d7c61eda2b1aca02f790eb003344a74\"" Jan 17 12:21:12.063316 containerd[1436]: time="2025-01-17T12:21:12.062873494Z" level=info msg="StartContainer for \"298a5a033057f332936eaae54d8547260d7c61eda2b1aca02f790eb003344a74\"" Jan 17 12:21:12.092015 systemd[1]: Started cri-containerd-298a5a033057f332936eaae54d8547260d7c61eda2b1aca02f790eb003344a74.scope - libcontainer container 298a5a033057f332936eaae54d8547260d7c61eda2b1aca02f790eb003344a74. Jan 17 12:21:12.123546 containerd[1436]: time="2025-01-17T12:21:12.123433128Z" level=info msg="StartContainer for \"298a5a033057f332936eaae54d8547260d7c61eda2b1aca02f790eb003344a74\" returns successfully" Jan 17 12:21:12.498668 kubelet[2553]: I0117 12:21:12.498610 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59d9bbdf4b-9kfjl" podStartSLOduration=1.498592497 podStartE2EDuration="1.498592497s" podCreationTimestamp="2025-01-17 12:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:21:12.495752606 +0000 UTC m=+81.341813814" watchObservedRunningTime="2025-01-17 12:21:12.498592497 +0000 UTC m=+81.344653745" Jan 17 12:21:13.236518 kubelet[2553]: I0117 12:21:13.236471 2553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d30ae5-d85e-4d42-ab11-9579ef57e019" path="/var/lib/kubelet/pods/29d30ae5-d85e-4d42-ab11-9579ef57e019/volumes"