Jan 29 12:15:48.920007 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 12:15:48.920030 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 29 12:15:48.920040 kernel: KASLR enabled
Jan 29 12:15:48.920046 kernel: efi: EFI v2.7 by EDK II
Jan 29 12:15:48.920052 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 29 12:15:48.920058 kernel: random: crng init done
Jan 29 12:15:48.920065 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:15:48.920072 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 29 12:15:48.920078 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 12:15:48.920086 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:15:48.920092 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:15:48.920099 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:15:48.920105 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:15:48.920111 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:15:48.920119 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:15:48.920127 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:15:48.920134 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:15:48.920140 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:15:48.920147 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 12:15:48.920153 kernel: NUMA: Failed to initialise from firmware
Jan 29 12:15:48.920160 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 12:15:48.920166 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 12:15:48.920172 kernel: Zone ranges:
Jan 29 12:15:48.920179 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 12:15:48.920185 kernel: DMA32 empty
Jan 29 12:15:48.920193 kernel: Normal empty
Jan 29 12:15:48.920199 kernel: Movable zone start for each node
Jan 29 12:15:48.920206 kernel: Early memory node ranges
Jan 29 12:15:48.920212 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 12:15:48.920219 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 12:15:48.920225 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 12:15:48.920232 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 12:15:48.920238 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 12:15:48.920245 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 12:15:48.920251 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 12:15:48.920258 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 12:15:48.920264 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 12:15:48.920272 kernel: psci: probing for conduit method from ACPI.
Jan 29 12:15:48.920278 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 12:15:48.920285 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 12:15:48.920294 kernel: psci: Trusted OS migration not required
Jan 29 12:15:48.920301 kernel: psci: SMC Calling Convention v1.1
Jan 29 12:15:48.920308 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 12:15:48.920317 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 12:15:48.920324 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 12:15:48.920331 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 12:15:48.920337 kernel: Detected PIPT I-cache on CPU0
Jan 29 12:15:48.920344 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 12:15:48.920351 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 12:15:48.920358 kernel: CPU features: detected: Spectre-v4
Jan 29 12:15:48.920365 kernel: CPU features: detected: Spectre-BHB
Jan 29 12:15:48.920372 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 12:15:48.920379 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 12:15:48.920387 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 12:15:48.920394 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 12:15:48.920400 kernel: alternatives: applying boot alternatives
Jan 29 12:15:48.920408 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 12:15:48.920416 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:15:48.920423 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 12:15:48.920430 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 12:15:48.920437 kernel: Fallback order for Node 0: 0
Jan 29 12:15:48.920443 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 12:15:48.920450 kernel: Policy zone: DMA
Jan 29 12:15:48.920457 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:15:48.920465 kernel: software IO TLB: area num 4.
Jan 29 12:15:48.920473 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 12:15:48.920480 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 29 12:15:48.920487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 12:15:48.920494 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:15:48.920501 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:15:48.920509 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 12:15:48.920516 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:15:48.920523 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:15:48.920530 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:15:48.920537 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 12:15:48.920544 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 12:15:48.920552 kernel: GICv3: 256 SPIs implemented
Jan 29 12:15:48.920559 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 12:15:48.920566 kernel: Root IRQ handler: gic_handle_irq
Jan 29 12:15:48.920583 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 12:15:48.920590 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 12:15:48.920597 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 12:15:48.920604 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 12:15:48.920611 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 12:15:48.920618 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 12:15:48.920625 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 12:15:48.920632 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:15:48.920642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:15:48.920649 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 12:15:48.920656 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 12:15:48.920664 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 12:15:48.920671 kernel: arm-pv: using stolen time PV
Jan 29 12:15:48.920678 kernel: Console: colour dummy device 80x25
Jan 29 12:15:48.920685 kernel: ACPI: Core revision 20230628
Jan 29 12:15:48.920703 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 12:15:48.920710 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:15:48.920717 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:15:48.920727 kernel: landlock: Up and running.
Jan 29 12:15:48.920734 kernel: SELinux: Initializing.
Jan 29 12:15:48.920741 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:15:48.920748 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:15:48.920755 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 12:15:48.920762 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 12:15:48.920769 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:15:48.920777 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:15:48.920784 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 12:15:48.920792 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 12:15:48.920800 kernel: Remapping and enabling EFI services.
Jan 29 12:15:48.920806 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:15:48.920814 kernel: Detected PIPT I-cache on CPU1
Jan 29 12:15:48.920821 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 12:15:48.920828 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 12:15:48.920835 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:15:48.920842 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 12:15:48.920850 kernel: Detected PIPT I-cache on CPU2
Jan 29 12:15:48.920857 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 12:15:48.920865 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 12:15:48.920873 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:15:48.920885 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 12:15:48.920893 kernel: Detected PIPT I-cache on CPU3
Jan 29 12:15:48.920901 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 12:15:48.920908 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 12:15:48.920916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:15:48.920923 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 12:15:48.920930 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 12:15:48.920939 kernel: SMP: Total of 4 processors activated.
Jan 29 12:15:48.920947 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 12:15:48.920954 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 12:15:48.920962 kernel: CPU features: detected: Common not Private translations
Jan 29 12:15:48.920969 kernel: CPU features: detected: CRC32 instructions
Jan 29 12:15:48.920977 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 12:15:48.920984 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 12:15:48.920991 kernel: CPU features: detected: LSE atomic instructions
Jan 29 12:15:48.921000 kernel: CPU features: detected: Privileged Access Never
Jan 29 12:15:48.921008 kernel: CPU features: detected: RAS Extension Support
Jan 29 12:15:48.921015 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 12:15:48.921023 kernel: CPU: All CPU(s) started at EL1
Jan 29 12:15:48.921030 kernel: alternatives: applying system-wide alternatives
Jan 29 12:15:48.921037 kernel: devtmpfs: initialized
Jan 29 12:15:48.921045 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:15:48.921053 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 12:15:48.921060 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:15:48.921069 kernel: SMBIOS 3.0.0 present.
Jan 29 12:15:48.921076 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 29 12:15:48.921084 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:15:48.921091 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 12:15:48.921099 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 12:15:48.921106 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 12:15:48.921114 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:15:48.921121 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jan 29 12:15:48.921129 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:15:48.921138 kernel: cpuidle: using governor menu
Jan 29 12:15:48.921145 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 12:15:48.921153 kernel: ASID allocator initialised with 32768 entries
Jan 29 12:15:48.921160 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:15:48.921167 kernel: Serial: AMBA PL011 UART driver
Jan 29 12:15:48.921175 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 12:15:48.921182 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 12:15:48.921189 kernel: Modules: 509040 pages in range for PLT usage
Jan 29 12:15:48.921197 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 12:15:48.921206 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 12:15:48.921214 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 12:15:48.921222 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 12:15:48.921229 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:15:48.921237 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:15:48.921244 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 12:15:48.921252 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 12:15:48.921259 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:15:48.921267 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:15:48.921275 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:15:48.921283 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:15:48.921290 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 12:15:48.921298 kernel: ACPI: Interpreter enabled
Jan 29 12:15:48.921305 kernel: ACPI: Using GIC for interrupt routing
Jan 29 12:15:48.921313 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 12:15:48.921320 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 12:15:48.921328 kernel: printk: console [ttyAMA0] enabled
Jan 29 12:15:48.921336 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 12:15:48.921477 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:15:48.921551 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 12:15:48.921628 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 12:15:48.921717 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 12:15:48.921787 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 12:15:48.921797 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 12:15:48.921805 kernel: PCI host bridge to bus 0000:00
Jan 29 12:15:48.921884 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 12:15:48.921947 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 12:15:48.922007 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 12:15:48.922066 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 12:15:48.922147 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 12:15:48.922224 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 12:15:48.922296 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 12:15:48.922363 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 12:15:48.922430 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 12:15:48.922499 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 12:15:48.922567 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 12:15:48.922649 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 12:15:48.922723 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 12:15:48.922788 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 12:15:48.922848 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 12:15:48.922858 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 12:15:48.922866 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 12:15:48.922874 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 12:15:48.922882 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 12:15:48.922889 kernel: iommu: Default domain type: Translated
Jan 29 12:15:48.922897 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 12:15:48.922906 kernel: efivars: Registered efivars operations
Jan 29 12:15:48.922913 kernel: vgaarb: loaded
Jan 29 12:15:48.922921 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 12:15:48.922928 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:15:48.922936 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:15:48.922943 kernel: pnp: PnP ACPI init
Jan 29 12:15:48.923022 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 12:15:48.923034 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 12:15:48.923041 kernel: NET: Registered PF_INET protocol family
Jan 29 12:15:48.923051 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 12:15:48.923059 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 12:15:48.923066 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:15:48.923074 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 12:15:48.923082 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 12:15:48.923090 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 12:15:48.923097 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:15:48.923105 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:15:48.923114 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:15:48.923121 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:15:48.923129 kernel: kvm [1]: HYP mode not available
Jan 29 12:15:48.923137 kernel: Initialise system trusted keyrings
Jan 29 12:15:48.923144 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 12:15:48.923152 kernel: Key type asymmetric registered
Jan 29 12:15:48.923159 kernel: Asymmetric key parser 'x509' registered
Jan 29 12:15:48.923166 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 12:15:48.923174 kernel: io scheduler mq-deadline registered
Jan 29 12:15:48.923181 kernel: io scheduler kyber registered
Jan 29 12:15:48.923191 kernel: io scheduler bfq registered
Jan 29 12:15:48.923198 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 12:15:48.923206 kernel: ACPI: button: Power Button [PWRB]
Jan 29 12:15:48.923214 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 12:15:48.923282 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 12:15:48.923293 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 12:15:48.923300 kernel: thunder_xcv, ver 1.0
Jan 29 12:15:48.923308 kernel: thunder_bgx, ver 1.0
Jan 29 12:15:48.923315 kernel: nicpf, ver 1.0
Jan 29 12:15:48.923324 kernel: nicvf, ver 1.0
Jan 29 12:15:48.923398 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 12:15:48.923462 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T12:15:48 UTC (1738152948)
Jan 29 12:15:48.923473 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 12:15:48.923480 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 12:15:48.923488 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 12:15:48.923495 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 12:15:48.923503 kernel: NET: Registered PF_INET6 protocol family
Jan 29 12:15:48.923512 kernel: Segment Routing with IPv6
Jan 29 12:15:48.923520 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 12:15:48.923527 kernel: NET: Registered PF_PACKET protocol family
Jan 29 12:15:48.923534 kernel: Key type dns_resolver registered
Jan 29 12:15:48.923542 kernel: registered taskstats version 1
Jan 29 12:15:48.923549 kernel: Loading compiled-in X.509 certificates
Jan 29 12:15:48.923557 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 29 12:15:48.923564 kernel: Key type .fscrypt registered
Jan 29 12:15:48.923578 kernel: Key type fscrypt-provisioning registered
Jan 29 12:15:48.923587 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:15:48.923595 kernel: ima: Allocated hash algorithm: sha1
Jan 29 12:15:48.923602 kernel: ima: No architecture policies found
Jan 29 12:15:48.923610 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 12:15:48.923617 kernel: clk: Disabling unused clocks
Jan 29 12:15:48.923625 kernel: Freeing unused kernel memory: 39360K
Jan 29 12:15:48.923632 kernel: Run /init as init process
Jan 29 12:15:48.923639 kernel: with arguments:
Jan 29 12:15:48.923646 kernel: /init
Jan 29 12:15:48.923655 kernel: with environment:
Jan 29 12:15:48.923663 kernel: HOME=/
Jan 29 12:15:48.923670 kernel: TERM=linux
Jan 29 12:15:48.923677 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 12:15:48.923686 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:15:48.923705 systemd[1]: Detected virtualization kvm.
Jan 29 12:15:48.923713 systemd[1]: Detected architecture arm64.
Jan 29 12:15:48.923723 systemd[1]: Running in initrd.
Jan 29 12:15:48.923731 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:15:48.923739 systemd[1]: Hostname set to .
Jan 29 12:15:48.923747 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:15:48.923755 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:15:48.923763 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:15:48.923771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:15:48.923780 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:15:48.923790 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:15:48.923798 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:15:48.923806 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:15:48.923816 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:15:48.923824 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:15:48.923832 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:15:48.923841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:15:48.923850 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:15:48.923858 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:15:48.923866 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:15:48.923874 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:15:48.923882 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:15:48.923890 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:15:48.923898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:15:48.923907 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:15:48.923915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:15:48.923924 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:15:48.923932 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:15:48.923940 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:15:48.923948 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:15:48.923957 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:15:48.923965 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:15:48.923973 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:15:48.923981 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:15:48.923990 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:15:48.923998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:15:48.924006 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:15:48.924014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:15:48.924022 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:15:48.924030 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:15:48.924040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:15:48.924048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:15:48.924074 systemd-journald[238]: Collecting audit messages is disabled.
Jan 29 12:15:48.924094 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:15:48.924103 systemd-journald[238]: Journal started
Jan 29 12:15:48.924123 systemd-journald[238]: Runtime Journal (/run/log/journal/ec60f07fd7054e0a9af6ed6ca10d023f) is 5.9M, max 47.3M, 41.4M free.
Jan 29 12:15:48.932055 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:15:48.932078 kernel: Bridge firewalling registered
Jan 29 12:15:48.914334 systemd-modules-load[239]: Inserted module 'overlay'
Jan 29 12:15:48.933815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:15:48.928164 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 29 12:15:48.936750 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:15:48.936919 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:15:48.942868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:15:48.944866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:15:48.947453 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:15:48.951344 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:15:48.955834 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:15:48.956818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:15:48.957793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:15:48.961425 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:15:48.971772 dracut-cmdline[272]: dracut-dracut-053
Jan 29 12:15:48.975015 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 12:15:48.992342 systemd-resolved[277]: Positive Trust Anchors:
Jan 29 12:15:48.992361 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:15:48.992393 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:15:48.999212 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jan 29 12:15:49.000234 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:15:49.001855 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:15:49.048718 kernel: SCSI subsystem initialized
Jan 29 12:15:49.051713 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:15:49.059730 kernel: iscsi: registered transport (tcp)
Jan 29 12:15:49.072719 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:15:49.072747 kernel: QLogic iSCSI HBA Driver
Jan 29 12:15:49.119619 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:15:49.131876 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:15:49.148601 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:15:49.148655 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:15:49.149809 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:15:49.199737 kernel: raid6: neonx8 gen() 15114 MB/s
Jan 29 12:15:49.216718 kernel: raid6: neonx4 gen() 15166 MB/s
Jan 29 12:15:49.233710 kernel: raid6: neonx2 gen() 12974 MB/s
Jan 29 12:15:49.250717 kernel: raid6: neonx1 gen() 9730 MB/s
Jan 29 12:15:49.267711 kernel: raid6: int64x8 gen() 6962 MB/s
Jan 29 12:15:49.284717 kernel: raid6: int64x4 gen() 7350 MB/s
Jan 29 12:15:49.301707 kernel: raid6: int64x2 gen() 6121 MB/s
Jan 29 12:15:49.318711 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 29 12:15:49.318731 kernel: raid6: using algorithm neonx4 gen() 15166 MB/s
Jan 29 12:15:49.335714 kernel: raid6: .... xor() 12322 MB/s, rmw enabled
Jan 29 12:15:49.335728 kernel: raid6: using neon recovery algorithm
Jan 29 12:15:49.340811 kernel: xor: measuring software checksum speed
Jan 29 12:15:49.340836 kernel: 8regs : 19754 MB/sec
Jan 29 12:15:49.341893 kernel: 32regs : 19200 MB/sec
Jan 29 12:15:49.341912 kernel: arm64_neon : 26945 MB/sec
Jan 29 12:15:49.341921 kernel: xor: using function: arm64_neon (26945 MB/sec)
Jan 29 12:15:49.394730 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:15:49.405881 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:15:49.413867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:15:49.427089 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 29 12:15:49.430223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:15:49.444900 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:15:49.456304 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Jan 29 12:15:49.483957 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:15:49.491853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:15:49.547042 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:15:49.555947 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:15:49.568916 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:15:49.570526 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:15:49.574628 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:15:49.575719 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:15:49.581716 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 12:15:49.587927 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 12:15:49.588030 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:15:49.588042 kernel: GPT:9289727 != 19775487
Jan 29 12:15:49.588051 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:15:49.588060 kernel: GPT:9289727 != 19775487
Jan 29 12:15:49.588076 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:15:49.588085 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:15:49.590858 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:15:49.597417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:15:49.597526 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:15:49.600718 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:15:49.601605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:15:49.601756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:15:49.603710 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:15:49.609711 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (512)
Jan 29 12:15:49.611550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:15:49.614316 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (508)
Jan 29 12:15:49.615753 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:15:49.623906 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 12:15:49.625829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:15:49.637617 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 12:15:49.641924 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:15:49.645482 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 12:15:49.646395 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 12:15:49.654895 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:15:49.656395 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:15:49.663924 disk-uuid[549]: Primary Header is updated.
Jan 29 12:15:49.663924 disk-uuid[549]: Secondary Entries is updated.
Jan 29 12:15:49.663924 disk-uuid[549]: Secondary Header is updated.
Jan 29 12:15:49.667723 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:15:49.674076 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:15:50.680710 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:15:50.683180 disk-uuid[552]: The operation has completed successfully.
Jan 29 12:15:50.708273 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:15:50.708365 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:15:50.718862 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:15:50.721482 sh[571]: Success
Jan 29 12:15:50.733583 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 12:15:50.761037 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:15:50.783037 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:15:50.784921 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:15:50.794937 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 29 12:15:50.794976 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:15:50.794987 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:15:50.796264 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:15:50.796277 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:15:50.800172 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:15:50.801278 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:15:50.812840 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:15:50.814435 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:15:50.821105 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:15:50.821147 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:15:50.821158 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:15:50.824505 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:15:50.829991 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:15:50.831719 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:15:50.837011 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:15:50.842850 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:15:50.903236 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:15:50.913303 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:15:50.942804 systemd-networkd[763]: lo: Link UP
Jan 29 12:15:50.942812 systemd-networkd[763]: lo: Gained carrier
Jan 29 12:15:50.943443 systemd-networkd[763]: Enumeration completed
Jan 29 12:15:50.943761 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:15:50.947476 ignition[662]: Ignition 2.19.0
Jan 29 12:15:50.944604 systemd[1]: Reached target network.target - Network.
Jan 29 12:15:50.947482 ignition[662]: Stage: fetch-offline
Jan 29 12:15:50.946332 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:15:50.947513 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:15:50.946335 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:15:50.947521 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:15:50.946981 systemd-networkd[763]: eth0: Link UP
Jan 29 12:15:50.947677 ignition[662]: parsed url from cmdline: ""
Jan 29 12:15:50.946984 systemd-networkd[763]: eth0: Gained carrier
Jan 29 12:15:50.947680 ignition[662]: no config URL provided
Jan 29 12:15:50.946990 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:15:50.947685 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:15:50.947704 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:15:50.947724 ignition[662]: op(1): [started] loading QEMU firmware config module
Jan 29 12:15:50.959747 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 12:15:50.947729 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 12:15:50.959255 ignition[662]: op(1): [finished] loading QEMU firmware config module
Jan 29 12:15:50.959277 ignition[662]: QEMU firmware config was not found. Ignoring...
Jan 29 12:15:50.999746 ignition[662]: parsing config with SHA512: b9e60530a83f8cd0cc6ccd9894d669f7c8ff38e538328161a24e7e5682fd6cc50ee160fce01a766c474a23920537178efdaed5293001ea19e971da3e9ce57221
Jan 29 12:15:51.004458 unknown[662]: fetched base config from "system"
Jan 29 12:15:51.004467 unknown[662]: fetched user config from "qemu"
Jan 29 12:15:51.005257 ignition[662]: fetch-offline: fetch-offline passed
Jan 29 12:15:51.005431 ignition[662]: Ignition finished successfully
Jan 29 12:15:51.009727 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:15:51.010754 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 12:15:51.020880 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:15:51.031133 ignition[771]: Ignition 2.19.0
Jan 29 12:15:51.031143 ignition[771]: Stage: kargs
Jan 29 12:15:51.031293 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:15:51.031303 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:15:51.032273 ignition[771]: kargs: kargs passed
Jan 29 12:15:51.032317 ignition[771]: Ignition finished successfully
Jan 29 12:15:51.035638 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:15:51.043915 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:15:51.052572 ignition[780]: Ignition 2.19.0
Jan 29 12:15:51.052582 ignition[780]: Stage: disks
Jan 29 12:15:51.052779 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:15:51.052788 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:15:51.056426 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:15:51.053667 ignition[780]: disks: disks passed
Jan 29 12:15:51.057309 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:15:51.053721 ignition[780]: Ignition finished successfully
Jan 29 12:15:51.058128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:15:51.059527 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:15:51.060555 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:15:51.061977 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:15:51.067888 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:15:51.076602 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 12:15:51.080122 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:15:51.082108 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:15:51.127470 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:15:51.128620 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 29 12:15:51.128483 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:15:51.136781 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:15:51.138159 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:15:51.139339 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 12:15:51.139373 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:15:51.144317 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Jan 29 12:15:51.139392 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:15:51.147752 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:15:51.147821 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:15:51.147862 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:15:51.143325 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:15:51.147569 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:15:51.150724 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:15:51.151233 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:15:51.185571 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:15:51.189726 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:15:51.192648 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:15:51.196451 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:15:51.261974 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:15:51.273813 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:15:51.275129 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:15:51.279738 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:15:51.293063 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:15:51.296286 ignition[912]: INFO : Ignition 2.19.0
Jan 29 12:15:51.297717 ignition[912]: INFO : Stage: mount
Jan 29 12:15:51.297717 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:15:51.297717 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:15:51.299645 ignition[912]: INFO : mount: mount passed
Jan 29 12:15:51.299645 ignition[912]: INFO : Ignition finished successfully
Jan 29 12:15:51.299315 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:15:51.306826 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:15:51.794054 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:15:51.803852 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:15:51.809744 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Jan 29 12:15:51.809775 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:15:51.809786 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:15:51.810845 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:15:51.812707 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:15:51.813880 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:15:51.829922 ignition[944]: INFO : Ignition 2.19.0
Jan 29 12:15:51.829922 ignition[944]: INFO : Stage: files
Jan 29 12:15:51.831124 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:15:51.831124 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:15:51.831124 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:15:51.833644 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:15:51.833644 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:15:51.833644 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:15:51.836684 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:15:51.836684 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:15:51.836684 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 12:15:51.836684 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 12:15:51.836684 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 12:15:51.836684 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 12:15:51.834093 unknown[944]: wrote ssh authorized keys file for user: core
Jan 29 12:15:51.909022 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 12:15:52.890045 systemd-networkd[763]: eth0: Gained IPv6LL
Jan 29 12:15:53.257002 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 12:15:53.257002 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:15:53.259893 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 12:15:53.516167 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 12:15:53.705492 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:15:53.705492 ignition[944]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jan 29 12:15:53.708194 ignition[944]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 12:15:53.728798 ignition[944]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 12:15:53.732051 ignition[944]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 12:15:53.733131 ignition[944]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 12:15:53.733131 ignition[944]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 12:15:53.733131 ignition[944]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 12:15:53.733131 ignition[944]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:15:53.733131 ignition[944]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:15:53.733131 ignition[944]: INFO : files: files passed
Jan 29 12:15:53.733131 ignition[944]: INFO : Ignition finished successfully
Jan 29 12:15:53.734725 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:15:53.741883 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:15:53.745035 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:15:53.747832 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:15:53.747926 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:15:53.751109 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 12:15:53.753568 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:15:53.753568 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:15:53.756289 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:15:53.756224 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:15:53.757498 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:15:53.766802 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:15:53.784820 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:15:53.785549 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:15:53.787168 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:15:53.787973 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:15:53.789260 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:15:53.789890 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:15:53.805733 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:15:53.813825 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:15:53.821958 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:15:53.822853 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:15:53.824324 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:15:53.825569 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:15:53.825672 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:15:53.827476 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:15:53.828916 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:15:53.830172 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:15:53.831475 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:15:53.832854 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:15:53.834253 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:15:53.835569 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:15:53.837078 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:15:53.838439 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:15:53.839667 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:15:53.840777 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:15:53.840878 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:15:53.842597 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:15:53.843966 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:15:53.845350 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:15:53.848773 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:15:53.849655 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:15:53.849772 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:15:53.851851 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:15:53.851957 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:15:53.853382 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:15:53.854488 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:15:53.857749 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:15:53.858824 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:15:53.860427 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:15:53.861557 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:15:53.861645 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:15:53.862797 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:15:53.862873 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:15:53.864066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:15:53.864166 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:15:53.865421 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:15:53.865519 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:15:53.877843 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:15:53.879112 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:15:53.879740 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:15:53.879848 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:15:53.881159 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:15:53.881242 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:15:53.885941 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:15:53.886037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 29 12:15:53.890197 ignition[999]: INFO : Ignition 2.19.0 Jan 29 12:15:53.890197 ignition[999]: INFO : Stage: umount Jan 29 12:15:53.891975 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:15:53.891975 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:15:53.891975 ignition[999]: INFO : umount: umount passed Jan 29 12:15:53.891975 ignition[999]: INFO : Ignition finished successfully Jan 29 12:15:53.893482 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:15:53.893957 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:15:53.894041 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:15:53.899372 systemd[1]: Stopped target network.target - Network. Jan 29 12:15:53.900272 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:15:53.900340 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:15:53.901728 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:15:53.901770 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:15:53.903007 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:15:53.903045 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:15:53.904327 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:15:53.904364 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:15:53.906602 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:15:53.908075 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:15:53.916926 systemd-networkd[763]: eth0: DHCPv6 lease lost Jan 29 12:15:53.917935 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:15:53.918048 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:15:53.920151 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:15:53.920379 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:15:53.922319 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:15:53.922375 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:15:53.933827 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:15:53.934517 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:15:53.934573 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:15:53.936185 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:15:53.936224 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:15:53.937556 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:15:53.937596 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:15:53.939102 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:15:53.939139 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:15:53.940672 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:15:53.950253 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:15:53.951594 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:15:53.954368 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 29 12:15:53.954509 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:15:53.956386 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:15:53.956492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:15:53.957874 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:15:53.957906 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:15:53.959505 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:15:53.959562 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:15:53.961889 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:15:53.961931 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:15:53.964223 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:15:53.964262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:15:53.980825 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:15:53.981564 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:15:53.981613 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:15:53.983325 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:15:53.983362 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:15:53.984865 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:15:53.984902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:15:53.986516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:15:53.986559 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:15:53.988635 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:15:53.988725 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:15:53.990199 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:15:53.990260 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:15:53.992152 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:15:53.993132 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:15:53.993186 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:15:53.995274 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:15:54.003564 systemd[1]: Switching root. Jan 29 12:15:54.030193 systemd-journald[238]: Journal stopped Jan 29 12:15:54.717894 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
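
Stopping the journal and the SIGTERM from PID 1 are the normal tail of the initrd: systemd is about to pivot from the initramfs onto the real root it assembled at /sysroot. Conceptually (not literal commands from this log), the hand-off is:

    # conceptual sketch of "Switching root"; the init path shown is the usual
    # systemd location on Flatcar and is optional
    systemctl switch-root /sysroot /usr/lib/systemd/systemd
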
Jan 29 12:15:54.717957 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:15:54.717972 kernel: SELinux: policy capability open_perms=1 Jan 29 12:15:54.717982 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:15:54.717994 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:15:54.718003 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:15:54.718013 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:15:54.718022 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:15:54.718031 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:15:54.718041 kernel: audit: type=1403 audit(1738152954.207:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:15:54.718056 systemd[1]: Successfully loaded SELinux policy in 35.199ms. Jan 29 12:15:54.718074 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.774ms. Jan 29 12:15:54.718085 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:15:54.718096 systemd[1]: Detected virtualization kvm. Jan 29 12:15:54.718107 systemd[1]: Detected architecture arm64. Jan 29 12:15:54.718117 systemd[1]: Detected first boot. Jan 29 12:15:54.718127 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:15:54.718137 zram_generator::config[1065]: No configuration found. Jan 29 12:15:54.718153 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:15:54.718165 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:15:54.718175 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:15:54.718186 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:15:54.718197 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:15:54.718207 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:15:54.718218 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:15:54.718229 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:15:54.718240 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:15:54.718250 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:15:54.718262 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:15:54.718273 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:15:54.718283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:15:54.718294 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:15:54.718304 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:15:54.718314 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:15:54.718325 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
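
The SELinux lines list the capabilities of the policy the freshly started PID 1 loaded (in about 35 ms). On a running system the same flags can be read back from selinuxfs:

    # each policy capability from the log is exposed as a 0/1 flag
    cat /sys/fs/selinux/policy_capabilities/network_peer_controls
    cat /sys/fs/selinux/policy_capabilities/open_perms
    cat /sys/fs/selinux/enforce   # 0 = permissive, 1 = enforcing
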
Jan 29 12:15:54.718335 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 12:15:54.718346 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:15:54.718358 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:15:54.718368 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:15:54.718379 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:15:54.718389 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:15:54.718400 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:15:54.718410 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:15:54.718420 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:15:54.718431 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:15:54.718444 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:15:54.718456 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:15:54.718467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:15:54.718477 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:15:54.718487 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:15:54.718498 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:15:54.718508 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:15:54.718525 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:15:54.718536 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:15:54.718549 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:15:54.718559 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:15:54.718569 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:15:54.718580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:15:54.718590 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:15:54.718601 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:15:54.718611 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:15:54.718621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:15:54.718631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:15:54.718644 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:15:54.718654 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:15:54.718665 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:15:54.718676 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 12:15:54.718688 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
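
The IP-firewall warning appears because systemd-journald.service ships with IP sandboxing (upstream sets IPAddressDeny=any) while this build reports -BPF_FRAMEWORK, so the cgroup/BPF filter cannot be installed. A sketch of the directive involved; the drop-in path is only an example:

    mkdir -p /etc/systemd/system/systemd-journald.service.d
    cat <<'EOF' > /etc/systemd/system/systemd-journald.service.d/ip.conf
    [Service]
    # with cgroup BPF support this becomes a per-unit IP firewall;
    # without it, systemd logs the warning seen above and continues
    IPAddressDeny=any
    EOF
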
Jan 29 12:15:54.718721 kernel: fuse: init (API version 7.39) Jan 29 12:15:54.718733 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:15:54.718743 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:15:54.718757 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:15:54.718768 kernel: ACPI: bus type drm_connector registered Jan 29 12:15:54.718777 kernel: loop: module loaded Jan 29 12:15:54.718787 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:15:54.718797 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:15:54.718809 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:15:54.718839 systemd-journald[1151]: Collecting audit messages is disabled. Jan 29 12:15:54.718860 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:15:54.718874 systemd-journald[1151]: Journal started Jan 29 12:15:54.718895 systemd-journald[1151]: Runtime Journal (/run/log/journal/ec60f07fd7054e0a9af6ed6ca10d023f) is 5.9M, max 47.3M, 41.4M free. Jan 29 12:15:54.721715 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:15:54.722279 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:15:54.723082 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:15:54.723969 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:15:54.724853 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:15:54.725825 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:15:54.726922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:15:54.728022 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:15:54.728178 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:15:54.729243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:15:54.729395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:15:54.730482 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:15:54.730644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:15:54.731659 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:15:54.731814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:15:54.732874 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:15:54.733016 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:15:54.734205 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:15:54.734410 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:15:54.735602 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:15:54.736776 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:15:54.737905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:15:54.748301 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:15:54.757799 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
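
The "Runtime Journal ... is 5.9M, max 47.3M, 41.4M free" line is journald auto-sizing its volatile journal in /run. Those limits can be pinned instead; a sketch with example values (not taken from the log):

    mkdir -p /etc/systemd/journald.conf.d
    cat <<'EOF' > /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=48M    # cap for /run/log/journal
    SystemMaxUse=196M    # cap for /var/log/journal once persisted
    EOF
    systemctl restart systemd-journald
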
Jan 29 12:15:54.759474 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:15:54.760314 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:15:54.762370 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:15:54.764893 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:15:54.765847 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:15:54.769820 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:15:54.770615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:15:54.771645 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:15:54.773854 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:15:54.779141 systemd-journald[1151]: Time spent on flushing to /var/log/journal/ec60f07fd7054e0a9af6ed6ca10d023f is 12.176ms for 846 entries. Jan 29 12:15:54.779141 systemd-journald[1151]: System Journal (/var/log/journal/ec60f07fd7054e0a9af6ed6ca10d023f) is 8.0M, max 195.6M, 187.6M free. Jan 29 12:15:54.797159 systemd-journald[1151]: Received client request to flush runtime journal. Jan 29 12:15:54.776118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:15:54.777159 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:15:54.778104 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:15:54.783172 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:15:54.785191 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:15:54.794903 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:15:54.799297 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:15:54.803252 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:15:54.805033 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jan 29 12:15:54.805054 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jan 29 12:15:54.806799 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 12:15:54.811303 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:15:54.818845 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:15:54.836321 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:15:54.841896 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:15:54.852919 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jan 29 12:15:54.852935 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jan 29 12:15:54.856330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:15:55.190975 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
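
The recurring "ACLs are not supported, ignoring" messages come from systemd-tmpfiles: this systemd build has ACL support compiled out (-ACL in the feature string earlier), so a/a+ lines in tmpfiles.d are skipped. For illustration, the upstream journal-directory rule has roughly this shape (reproduced from memory, so treat as approximate):

    cat <<'EOF' > /etc/tmpfiles.d/journal-acl.conf
    # 'a+' appends a POSIX ACL; ignored when systemd is built without ACL
    a+ /var/log/journal - - - - d:group:adm:r-x
    EOF
    systemd-tmpfiles --create
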
Jan 29 12:15:55.203970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:15:55.223343 systemd-udevd[1223]: Using default interface naming scheme 'v255'. Jan 29 12:15:55.237880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:15:55.243830 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:15:55.256835 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:15:55.266756 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 29 12:15:55.277715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1242) Jan 29 12:15:55.313659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:15:55.314978 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:15:55.359553 systemd-networkd[1229]: lo: Link UP Jan 29 12:15:55.359728 systemd-networkd[1229]: lo: Gained carrier Jan 29 12:15:55.360379 systemd-networkd[1229]: Enumeration completed Jan 29 12:15:55.360934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:15:55.361250 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:15:55.361305 systemd-networkd[1229]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:15:55.361866 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:15:55.362083 systemd-networkd[1229]: eth0: Link UP Jan 29 12:15:55.362133 systemd-networkd[1229]: eth0: Gained carrier Jan 29 12:15:55.362203 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:15:55.364291 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:15:55.369815 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:15:55.372765 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:15:55.377789 systemd-networkd[1229]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 12:15:55.387453 lvm[1262]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:15:55.397309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:15:55.416004 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:15:55.417128 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:15:55.429875 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:15:55.432921 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:15:55.465016 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:15:55.466112 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:15:55.467036 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:15:55.467068 systemd[1]: Reached target local-fs.target - Local File Systems. 
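
eth0 came up by matching /usr/lib/systemd/network/zz-default.network, Flatcar's catch-all DHCP policy, and took 10.0.0.145/16 from the QEMU user network. A hedged reconstruction of that file's core (the shipped version carries more options):

    cat <<'EOF' > /etc/systemd/network/zz-default.network
    [Match]
    Name=*          # lowest-priority catch-all, hence the zz- prefix
    [Network]
    DHCP=yes
    EOF
    networkctl status eth0   # shows the lease recorded in the log
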
Jan 29 12:15:55.467806 systemd[1]: Reached target machines.target - Containers. Jan 29 12:15:55.469462 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:15:55.481846 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:15:55.483801 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:15:55.484627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:15:55.485519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:15:55.487604 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:15:55.491177 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:15:55.492860 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:15:55.501094 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:15:55.504462 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:15:55.505964 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:15:55.507793 kernel: loop0: detected capacity change from 0 to 114432 Jan 29 12:15:55.521728 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:15:55.553733 kernel: loop1: detected capacity change from 0 to 194096 Jan 29 12:15:55.598726 kernel: loop2: detected capacity change from 0 to 114328 Jan 29 12:15:55.631718 kernel: loop3: detected capacity change from 0 to 114432 Jan 29 12:15:55.639717 kernel: loop4: detected capacity change from 0 to 194096 Jan 29 12:15:55.651712 kernel: loop5: detected capacity change from 0 to 114328 Jan 29 12:15:55.657126 (sd-merge)[1289]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 12:15:55.658380 (sd-merge)[1289]: Merged extensions into '/usr'. Jan 29 12:15:55.663957 systemd[1]: Reloading requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:15:55.663970 systemd[1]: Reloading... Jan 29 12:15:55.699725 zram_generator::config[1317]: No configuration found. Jan 29 12:15:55.740243 ldconfig[1273]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:15:55.795132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:15:55.837194 systemd[1]: Reloading finished in 172 ms. Jan 29 12:15:55.850221 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:15:55.851354 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:15:55.870885 systemd[1]: Starting ensure-sysext.service... Jan 29 12:15:55.872515 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:15:55.877312 systemd[1]: Reloading requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:15:55.877326 systemd[1]: Reloading... Jan 29 12:15:55.886862 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
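
The (sd-merge) lines above are systemd-sysext overlaying the three extension images onto /usr; the kubernetes image is the .raw that Ignition linked into /etc/extensions during the files stage. Standard commands for inspecting and re-running the merge:

    ls -l /etc/extensions/     # kubernetes.raw -> /opt/extensions/kubernetes/...
    systemd-sysext status      # which hierarchies have extensions merged
    systemd-sysext refresh     # unmerge and re-merge after changing images
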
Jan 29 12:15:55.887107 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:15:55.887749 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:15:55.887963 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jan 29 12:15:55.888011 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jan 29 12:15:55.890389 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:15:55.890399 systemd-tmpfiles[1360]: Skipping /boot Jan 29 12:15:55.897030 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:15:55.897045 systemd-tmpfiles[1360]: Skipping /boot Jan 29 12:15:55.917715 zram_generator::config[1392]: No configuration found. Jan 29 12:15:56.000872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:15:56.043296 systemd[1]: Reloading finished in 165 ms. Jan 29 12:15:56.057219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:15:56.079283 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:15:56.081361 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:15:56.083252 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:15:56.087856 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:15:56.089608 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:15:56.094884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:15:56.095827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:15:56.100952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:15:56.105923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:15:56.107031 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:15:56.107667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:15:56.107827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:15:56.111021 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:15:56.111159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:15:56.118994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:15:56.119188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:15:56.121015 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:15:56.124159 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:15:56.129615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:15:56.135943 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:15:56.137919 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
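
The docker.socket warning (printed once per reload above) means the unit still says ListenStream=/var/run/docker.sock; systemd transparently rewrites it to /run/docker.sock. The modern form, shown as a drop-in sketch purely to illustrate the rewrite:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/modern-path.conf
    [Socket]
    ListenStream=                 # clear the inherited /var/run path
    ListenStream=/run/docker.sock
    EOF
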
Jan 29 12:15:56.141670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:15:56.144543 augenrules[1467]: No rules Jan 29 12:15:56.144562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:15:56.146208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:15:56.147960 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:15:56.151786 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:15:56.153913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:15:56.154058 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:15:56.156013 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:15:56.157337 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:15:56.157470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:15:56.158849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:15:56.158978 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:15:56.160371 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:15:56.160573 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:15:56.164308 systemd[1]: Finished ensure-sysext.service. Jan 29 12:15:56.167750 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:15:56.171399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:15:56.171498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:15:56.176845 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:15:56.177717 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:15:56.181171 systemd-resolved[1435]: Positive Trust Anchors: Jan 29 12:15:56.181193 systemd-resolved[1435]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:15:56.181225 systemd-resolved[1435]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:15:56.186895 systemd-resolved[1435]: Defaulting to hostname 'linux'. Jan 29 12:15:56.194006 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:15:56.194870 systemd[1]: Reached target network.target - Network. Jan 29 12:15:56.195493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:15:56.218170 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
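
The "Positive Trust Anchors" DS record is the DNSSEC root key (KSK-2017) built into systemd-resolved, and the negative anchors are roughly the locally served private zones it will not try to validate. Both can be overridden with trust-anchor files; a sketch that restates the same root anchor:

    mkdir -p /etc/dnssec-trust-anchors.d
    cat <<'EOF' > /etc/dnssec-trust-anchors.d/root.positive
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
    EOF
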
Jan 29 12:15:56.622927 systemd-resolved[1435]: Clock change detected. Flushing caches. Jan 29 12:15:56.622972 systemd-timesyncd[1493]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 12:15:56.623014 systemd-timesyncd[1493]: Initial clock synchronization to Wed 2025-01-29 12:15:56.622888 UTC. Jan 29 12:15:56.623542 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:15:56.624407 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:15:56.625389 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:15:56.626303 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:15:56.627199 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:15:56.627231 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:15:56.627882 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:15:56.628718 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:15:56.629629 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:15:56.630597 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:15:56.631914 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:15:56.633940 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:15:56.635724 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:15:56.644647 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:15:56.645467 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:15:56.646197 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:15:56.646993 systemd[1]: System is tainted: cgroupsv1 Jan 29 12:15:56.647040 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:15:56.647058 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:15:56.648053 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:15:56.649748 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:15:56.651562 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:15:56.657196 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:15:56.658175 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:15:56.659233 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:15:56.663883 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:15:56.665802 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:15:56.666308 jq[1499]: false Jan 29 12:15:56.670013 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:15:56.675061 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:15:56.680285 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 29 12:15:56.681409 extend-filesystems[1501]: Found loop3 Jan 29 12:15:56.681409 extend-filesystems[1501]: Found loop4 Jan 29 12:15:56.681409 extend-filesystems[1501]: Found loop5 Jan 29 12:15:56.681409 extend-filesystems[1501]: Found vda Jan 29 12:15:56.681409 extend-filesystems[1501]: Found vda1 Jan 29 12:15:56.687693 extend-filesystems[1501]: Found vda2 Jan 29 12:15:56.687693 extend-filesystems[1501]: Found vda3 Jan 29 12:15:56.687693 extend-filesystems[1501]: Found usr Jan 29 12:15:56.687693 extend-filesystems[1501]: Found vda4 Jan 29 12:15:56.687693 extend-filesystems[1501]: Found vda6 Jan 29 12:15:56.687693 extend-filesystems[1501]: Found vda7 Jan 29 12:15:56.687693 extend-filesystems[1501]: Found vda9 Jan 29 12:15:56.687693 extend-filesystems[1501]: Checking size of /dev/vda9 Jan 29 12:15:56.682861 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:15:56.692464 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:15:56.701176 jq[1519]: true Jan 29 12:15:56.701358 extend-filesystems[1501]: Resized partition /dev/vda9 Jan 29 12:15:56.702162 extend-filesystems[1528]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:15:56.705142 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:15:56.705352 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:15:56.705576 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:15:56.705768 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:15:56.707320 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:15:56.707511 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:15:56.708837 dbus-daemon[1498]: [system] SELinux support is enabled Jan 29 12:15:56.711788 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 12:15:56.711915 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:15:56.723017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1231) Jan 29 12:15:56.729504 jq[1531]: true Jan 29 12:15:56.736766 (ntainerd)[1532]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:15:56.742827 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 12:15:56.743387 update_engine[1515]: I20250129 12:15:56.742519 1515 main.cc:92] Flatcar Update Engine starting Jan 29 12:15:56.749561 tar[1529]: linux-arm64/helm Jan 29 12:15:56.753675 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 12:15:56.754492 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:15:56.754525 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:15:56.756059 systemd-logind[1510]: New seat seat0. Jan 29 12:15:56.759035 extend-filesystems[1528]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:15:56.759035 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:15:56.759035 extend-filesystems[1528]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
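
extend-filesystems is Flatcar growing the root filesystem to fill the virtual disk: the kernel reports the ext4 resize (553472 -> 1864699 4k blocks) and resize2fs confirms the on-line grow of the mounted root. The manual equivalent, with the device name taken from the log:

    resize2fs /dev/vda9   # online-grow the mounted ext4 root
    df -h /               # verify the new capacity
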
Jan 29 12:15:56.769900 extend-filesystems[1501]: Resized filesystem in /dev/vda9 Jan 29 12:15:56.761116 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:15:56.771175 update_engine[1515]: I20250129 12:15:56.759790 1515 update_check_scheduler.cc:74] Next update check in 10m50s Jan 29 12:15:56.761136 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:15:56.766891 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:15:56.770784 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:15:56.771006 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:15:56.772844 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:15:56.776465 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:15:56.786878 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:15:56.797330 bash[1562]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:15:56.801709 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:15:56.804479 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 12:15:56.833838 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:15:56.936526 containerd[1532]: time="2025-01-29T12:15:56.936388395Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:15:56.968449 containerd[1532]: time="2025-01-29T12:15:56.968366235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:56.969753 containerd[1532]: time="2025-01-29T12:15:56.969709995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:56.969753 containerd[1532]: time="2025-01-29T12:15:56.969746675Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:15:56.969839 containerd[1532]: time="2025-01-29T12:15:56.969762635Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:15:56.969954 containerd[1532]: time="2025-01-29T12:15:56.969919675Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:15:56.969954 containerd[1532]: time="2025-01-29T12:15:56.969946835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970011 containerd[1532]: time="2025-01-29T12:15:56.969996955Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970038 containerd[1532]: time="2025-01-29T12:15:56.970012635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970231 containerd[1532]: time="2025-01-29T12:15:56.970210315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970267 containerd[1532]: time="2025-01-29T12:15:56.970232635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970267 containerd[1532]: time="2025-01-29T12:15:56.970245795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970267 containerd[1532]: time="2025-01-29T12:15:56.970254995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970344 containerd[1532]: time="2025-01-29T12:15:56.970328595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970532 containerd[1532]: time="2025-01-29T12:15:56.970514755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970668 containerd[1532]: time="2025-01-29T12:15:56.970649195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:56.970695 containerd[1532]: time="2025-01-29T12:15:56.970668795Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:15:56.970758 containerd[1532]: time="2025-01-29T12:15:56.970743635Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:15:56.970818 containerd[1532]: time="2025-01-29T12:15:56.970803635Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:15:56.974197 containerd[1532]: time="2025-01-29T12:15:56.974169275Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:15:56.974285 containerd[1532]: time="2025-01-29T12:15:56.974266155Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:15:56.974312 containerd[1532]: time="2025-01-29T12:15:56.974291275Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:15:56.974371 containerd[1532]: time="2025-01-29T12:15:56.974353755Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:15:56.974396 containerd[1532]: time="2025-01-29T12:15:56.974374555Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:15:56.974526 containerd[1532]: time="2025-01-29T12:15:56.974507275Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:15:56.974832 containerd[1532]: time="2025-01-29T12:15:56.974812835Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 29 12:15:56.974938 containerd[1532]: time="2025-01-29T12:15:56.974922435Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:15:56.974961 containerd[1532]: time="2025-01-29T12:15:56.974943395Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:15:56.974961 containerd[1532]: time="2025-01-29T12:15:56.974957795Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:15:56.975006 containerd[1532]: time="2025-01-29T12:15:56.974971355Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:15:56.975006 containerd[1532]: time="2025-01-29T12:15:56.974984595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:15:56.975006 containerd[1532]: time="2025-01-29T12:15:56.974996475Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:15:56.975063 containerd[1532]: time="2025-01-29T12:15:56.975009635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:15:56.975063 containerd[1532]: time="2025-01-29T12:15:56.975023355Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:15:56.975063 containerd[1532]: time="2025-01-29T12:15:56.975035675Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:15:56.975063 containerd[1532]: time="2025-01-29T12:15:56.975047595Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:15:56.975063 containerd[1532]: time="2025-01-29T12:15:56.975059355Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:15:56.975147 containerd[1532]: time="2025-01-29T12:15:56.975083675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975147 containerd[1532]: time="2025-01-29T12:15:56.975097115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975147 containerd[1532]: time="2025-01-29T12:15:56.975111435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975147 containerd[1532]: time="2025-01-29T12:15:56.975125675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975147 containerd[1532]: time="2025-01-29T12:15:56.975137515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975239 containerd[1532]: time="2025-01-29T12:15:56.975150035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975239 containerd[1532]: time="2025-01-29T12:15:56.975161075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975239 containerd[1532]: time="2025-01-29T12:15:56.975172995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 29 12:15:56.975239 containerd[1532]: time="2025-01-29T12:15:56.975184355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975239 containerd[1532]: time="2025-01-29T12:15:56.975197515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975239 containerd[1532]: time="2025-01-29T12:15:56.975212555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975239 containerd[1532]: time="2025-01-29T12:15:56.975223915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975239 containerd[1532]: time="2025-01-29T12:15:56.975235475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975369 containerd[1532]: time="2025-01-29T12:15:56.975250435Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:15:56.975369 containerd[1532]: time="2025-01-29T12:15:56.975269355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975369 containerd[1532]: time="2025-01-29T12:15:56.975280475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975369 containerd[1532]: time="2025-01-29T12:15:56.975290675Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:15:56.975436 containerd[1532]: time="2025-01-29T12:15:56.975389675Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:15:56.975436 containerd[1532]: time="2025-01-29T12:15:56.975404235Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:15:56.975436 containerd[1532]: time="2025-01-29T12:15:56.975414875Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:15:56.975436 containerd[1532]: time="2025-01-29T12:15:56.975425915Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:15:56.975436 containerd[1532]: time="2025-01-29T12:15:56.975434595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:15:56.975532 containerd[1532]: time="2025-01-29T12:15:56.975446475Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:15:56.975532 containerd[1532]: time="2025-01-29T12:15:56.975459995Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:15:56.975532 containerd[1532]: time="2025-01-29T12:15:56.975469995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 12:15:56.975890 containerd[1532]: time="2025-01-29T12:15:56.975828275Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:15:56.976000 containerd[1532]: time="2025-01-29T12:15:56.975894275Z" level=info msg="Connect containerd service" Jan 29 12:15:56.976000 containerd[1532]: time="2025-01-29T12:15:56.975988155Z" level=info msg="using legacy CRI server" Jan 29 12:15:56.976000 containerd[1532]: time="2025-01-29T12:15:56.975994755Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:15:56.976455 containerd[1532]: time="2025-01-29T12:15:56.976412755Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:15:56.978728 containerd[1532]: time="2025-01-29T12:15:56.978642195Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 
12:15:56.979387 containerd[1532]: time="2025-01-29T12:15:56.978883395Z" level=info msg="Start subscribing containerd event" Jan 29 12:15:56.979387 containerd[1532]: time="2025-01-29T12:15:56.979030435Z" level=info msg="Start recovering state" Jan 29 12:15:56.979387 containerd[1532]: time="2025-01-29T12:15:56.979093195Z" level=info msg="Start event monitor" Jan 29 12:15:56.979387 containerd[1532]: time="2025-01-29T12:15:56.979104035Z" level=info msg="Start snapshots syncer" Jan 29 12:15:56.979387 containerd[1532]: time="2025-01-29T12:15:56.979112875Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:15:56.979387 containerd[1532]: time="2025-01-29T12:15:56.979121595Z" level=info msg="Start streaming server" Jan 29 12:15:56.979591 containerd[1532]: time="2025-01-29T12:15:56.979556395Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:15:56.979638 containerd[1532]: time="2025-01-29T12:15:56.979621155Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:15:56.979948 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:15:56.980850 containerd[1532]: time="2025-01-29T12:15:56.980814875Z" level=info msg="containerd successfully booted in 0.045991s" Jan 29 12:15:57.089059 tar[1529]: linux-arm64/LICENSE Jan 29 12:15:57.089163 tar[1529]: linux-arm64/README.md Jan 29 12:15:57.099908 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:15:57.197899 systemd-networkd[1229]: eth0: Gained IPv6LL Jan 29 12:15:57.200338 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:15:57.202175 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:15:57.208979 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 12:15:57.211717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:15:57.215237 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:15:57.233548 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 12:15:57.233797 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 12:15:57.235180 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:15:57.240105 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:15:57.280798 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:15:57.299078 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:15:57.312024 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:15:57.317147 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:15:57.317371 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:15:57.320138 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:15:57.333359 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:15:57.348181 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:15:57.350111 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 12:15:57.351165 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:15:57.700043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:15:57.701225 systemd[1]: Reached target multi-user.target - Multi-User System. 
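
Note on the containerd error a few entries up: "failed to load cni during init ... no network config found in /etc/cni/net.d" is the expected first-boot state. The CRI plugin's config dump above shows NetworkPluginConfDir:/etc/cni/net.d, and that directory is empty until a CNI plugin or provisioner writes a network config. A minimal sketch of a file that would satisfy the loader follows; the network name, bridge device, and subnet are illustrative assumptions, not values from this host.

    # Hypothetical example only -- name, bridge, and subnet are assumptions.
    # containerd's CRI plugin watches /etc/cni/net.d (NetworkPluginConfDir above).
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conf
    {
      "cniVersion": "0.4.0",
      "name": "mynet",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
    EOF
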
Jan 29 12:15:57.702539 systemd[1]: Startup finished in 6.048s (kernel) + 3.127s (userspace) = 9.176s. Jan 29 12:15:57.703640 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:15:58.183477 kubelet[1634]: E0129 12:15:58.183333 1634 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:15:58.185537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:15:58.185740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:16:01.620352 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:16:01.629012 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:58066.service - OpenSSH per-connection server daemon (10.0.0.1:58066). Jan 29 12:16:01.685394 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 58066 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:01.689013 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:01.704345 systemd-logind[1510]: New session 1 of user core. Jan 29 12:16:01.705209 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:16:01.720955 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:16:01.731711 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:16:01.734274 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:16:01.740154 (systemd)[1655]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:16:01.806423 systemd[1655]: Queued start job for default target default.target. Jan 29 12:16:01.806798 systemd[1655]: Created slice app.slice - User Application Slice. Jan 29 12:16:01.806821 systemd[1655]: Reached target paths.target - Paths. Jan 29 12:16:01.806831 systemd[1655]: Reached target timers.target - Timers. Jan 29 12:16:01.816890 systemd[1655]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:16:01.822199 systemd[1655]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:16:01.822253 systemd[1655]: Reached target sockets.target - Sockets. Jan 29 12:16:01.822264 systemd[1655]: Reached target basic.target - Basic System. Jan 29 12:16:01.822300 systemd[1655]: Reached target default.target - Main User Target. Jan 29 12:16:01.822323 systemd[1655]: Startup finished in 77ms. Jan 29 12:16:01.822637 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:16:01.824203 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:16:01.883022 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:58068.service - OpenSSH per-connection server daemon (10.0.0.1:58068). Jan 29 12:16:01.916442 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 58068 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:01.917989 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:01.921851 systemd-logind[1510]: New session 2 of user core. Jan 29 12:16:01.929175 systemd[1]: Started session-2.scope - Session 2 of User core. 
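
The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state: the unit starts before kubeadm (or an equivalent provisioner) has written the kubelet configuration, so it fails and systemd keeps scheduling restarts until the file appears. A minimal sketch of the file it is looking for, assuming the standard KubeletConfiguration schema; the two non-boilerplate values are copied from messages later in this log, everything else kubeadm would normally generate.

    # Sketch of the missing file; kubeadm normally writes it during init/join.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs                    # matches CgroupDriver seen later in this log
    staticPodPath: /etc/kubernetes/manifests  # matches "Adding static pod path" later
    EOF
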
Jan 29 12:16:01.981041 sshd[1667]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:01.990090 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:58080.service - OpenSSH per-connection server daemon (10.0.0.1:58080). Jan 29 12:16:01.990552 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:58068.service: Deactivated successfully. Jan 29 12:16:01.991948 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:16:01.992478 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:16:01.993540 systemd-logind[1510]: Removed session 2. Jan 29 12:16:02.024204 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 58080 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:02.025514 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:02.029824 systemd-logind[1510]: New session 3 of user core. Jan 29 12:16:02.035992 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:16:02.084406 sshd[1672]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:02.093013 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:58094.service - OpenSSH per-connection server daemon (10.0.0.1:58094). Jan 29 12:16:02.093383 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:58080.service: Deactivated successfully. Jan 29 12:16:02.095240 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:16:02.095744 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:16:02.097243 systemd-logind[1510]: Removed session 3. Jan 29 12:16:02.127155 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 58094 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:02.128409 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:02.132837 systemd-logind[1510]: New session 4 of user core. Jan 29 12:16:02.139013 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:16:02.190928 sshd[1680]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:02.201020 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:58108.service - OpenSSH per-connection server daemon (10.0.0.1:58108). Jan 29 12:16:02.201827 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:58094.service: Deactivated successfully. Jan 29 12:16:02.203513 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:16:02.203578 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:16:02.204951 systemd-logind[1510]: Removed session 4. Jan 29 12:16:02.234699 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 58108 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:02.235874 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:02.239923 systemd-logind[1510]: New session 5 of user core. Jan 29 12:16:02.249016 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:16:02.306901 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:16:02.307165 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:16:02.326523 sudo[1695]: pam_unix(sudo:session): session closed for user root Jan 29 12:16:02.328191 sshd[1688]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:02.336983 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:58124.service - OpenSSH per-connection server daemon (10.0.0.1:58124). 
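
The rapid churn above (sessions 2 through 5 each opened with the same core key, used for a single privileged command such as setenforce, then closed) is consistent with an automated provisioner or test harness driving the host over SSH rather than an interactive user. While such sessions are live, logind's view of them can be inspected; a small sketch:

    # Inspect logind's bookkeeping for the SSH sessions seen above.
    loginctl list-sessions
    loginctl session-status 5     # session number as reported in the log
    loginctl user-status core
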
Jan 29 12:16:02.337341 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:58108.service: Deactivated successfully. Jan 29 12:16:02.339091 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:16:02.339653 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:16:02.341114 systemd-logind[1510]: Removed session 5. Jan 29 12:16:02.371143 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 58124 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:02.372358 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:02.376170 systemd-logind[1510]: New session 6 of user core. Jan 29 12:16:02.388142 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:16:02.440283 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:16:02.440563 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:16:02.444333 sudo[1705]: pam_unix(sudo:session): session closed for user root Jan 29 12:16:02.449205 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:16:02.449464 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:16:02.464987 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:16:02.466324 auditctl[1708]: No rules Jan 29 12:16:02.467178 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:16:02.467414 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:16:02.472025 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:16:02.495647 augenrules[1727]: No rules Jan 29 12:16:02.496917 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:16:02.498185 sudo[1704]: pam_unix(sudo:session): session closed for user root Jan 29 12:16:02.500017 sshd[1697]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:02.514004 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:46730.service - OpenSSH per-connection server daemon (10.0.0.1:46730). Jan 29 12:16:02.514358 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:58124.service: Deactivated successfully. Jan 29 12:16:02.516789 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:16:02.516860 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:16:02.518429 systemd-logind[1510]: Removed session 6. Jan 29 12:16:02.550047 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 46730 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:02.551410 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:02.555144 systemd-logind[1510]: New session 7 of user core. Jan 29 12:16:02.577083 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:16:02.629874 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:16:02.630151 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:16:02.934976 systemd[1]: Starting docker.service - Docker Application Container Engine... 
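
In the audit sequence above, the rule files under /etc/audit/rules.d are deleted and audit-rules is restarted; auditctl and augenrules then both report "No rules", i.e. the kernel audit ruleset is now empty. augenrules is the helper that concatenates /etc/audit/rules.d/*.rules and feeds the result to auditctl, so the emptied directory yields an empty ruleset. The resulting state can be confirmed directly:

    # Verify the kernel audit ruleset after the restart above.
    auditctl -l          # prints "No rules" when the set is empty
    augenrules --check   # reports whether rules.d differs from the loaded rules
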
Jan 29 12:16:02.935260 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:16:03.188895 dockerd[1758]: time="2025-01-29T12:16:03.188736115Z" level=info msg="Starting up" Jan 29 12:16:03.412630 dockerd[1758]: time="2025-01-29T12:16:03.412580475Z" level=info msg="Loading containers: start." Jan 29 12:16:03.504800 kernel: Initializing XFRM netlink socket Jan 29 12:16:03.565226 systemd-networkd[1229]: docker0: Link UP Jan 29 12:16:03.648913 dockerd[1758]: time="2025-01-29T12:16:03.648865155Z" level=info msg="Loading containers: done." Jan 29 12:16:03.686918 dockerd[1758]: time="2025-01-29T12:16:03.686860915Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:16:03.687100 dockerd[1758]: time="2025-01-29T12:16:03.686969915Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:16:03.687100 dockerd[1758]: time="2025-01-29T12:16:03.687090195Z" level=info msg="Daemon has completed initialization" Jan 29 12:16:03.716382 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:16:03.716807 dockerd[1758]: time="2025-01-29T12:16:03.715994275Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:16:04.446891 containerd[1532]: time="2025-01-29T12:16:04.446839035Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:16:05.289662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928764773.mount: Deactivated successfully. 
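
The dockerd warning above ("Not using native diff for overlay2") is benign for running containers; it mainly slows image builds. It fires because the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, which lets overlayfs rename directories in upper layers in a way the native diff driver cannot follow. Both facts can be checked from a shell:

    # Confirm the storage driver and the kernel option the warning refers to.
    docker info --format '{{.Driver}}'        # expect: overlay2
    # Works only if the kernel exposes its config (CONFIG_IKCONFIG_PROC):
    zcat /proc/config.gz | grep CONFIG_OVERLAY_FS_REDIRECT_DIR
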
Jan 29 12:16:06.882930 containerd[1532]: time="2025-01-29T12:16:06.882865515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:06.883438 containerd[1532]: time="2025-01-29T12:16:06.883404235Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 29 12:16:06.884669 containerd[1532]: time="2025-01-29T12:16:06.884623835Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:06.887691 containerd[1532]: time="2025-01-29T12:16:06.887650035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:06.888554 containerd[1532]: time="2025-01-29T12:16:06.888437515Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.4415556s" Jan 29 12:16:06.888554 containerd[1532]: time="2025-01-29T12:16:06.888477875Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 29 12:16:06.906506 containerd[1532]: time="2025-01-29T12:16:06.906415995Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:16:08.382503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:16:08.395027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:16:08.498207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:16:08.501696 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:16:08.564038 kubelet[1989]: E0129 12:16:08.563986 1989 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:16:08.567082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:16:08.567262 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
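
The pull above resolved registry.k8s.io/kube-apiserver:v1.30.9 to a repo digest and a local image ID; the same triple (tag, digest, ID) can be read back from containerd's k8s.io namespace. A sketch, assuming crictl is available and pointed at the containerd socket from the config dump earlier in this log:

    # List the image just pulled; both tools talk to /run/containerd/containerd.sock.
    ctr -n k8s.io images ls | grep kube-apiserver
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images \
        | grep kube-apiserver
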
Jan 29 12:16:08.584607 containerd[1532]: time="2025-01-29T12:16:08.584560675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:08.585203 containerd[1532]: time="2025-01-29T12:16:08.585173915Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 29 12:16:08.586131 containerd[1532]: time="2025-01-29T12:16:08.586083995Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:08.588827 containerd[1532]: time="2025-01-29T12:16:08.588798875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:08.590034 containerd[1532]: time="2025-01-29T12:16:08.589979515Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.68352768s" Jan 29 12:16:08.590034 containerd[1532]: time="2025-01-29T12:16:08.590013195Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 29 12:16:08.608837 containerd[1532]: time="2025-01-29T12:16:08.608763995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:16:10.214722 containerd[1532]: time="2025-01-29T12:16:10.214669915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:10.215761 containerd[1532]: time="2025-01-29T12:16:10.215521915Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 29 12:16:10.216402 containerd[1532]: time="2025-01-29T12:16:10.216367555Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:10.219398 containerd[1532]: time="2025-01-29T12:16:10.219366955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:10.220496 containerd[1532]: time="2025-01-29T12:16:10.220468195Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.61165784s" Jan 29 12:16:10.220546 containerd[1532]: time="2025-01-29T12:16:10.220503515Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 29 12:16:10.238721 
containerd[1532]: time="2025-01-29T12:16:10.238687675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:16:11.385195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113864731.mount: Deactivated successfully. Jan 29 12:16:11.731420 containerd[1532]: time="2025-01-29T12:16:11.731195355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:11.732356 containerd[1532]: time="2025-01-29T12:16:11.732125955Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 29 12:16:11.733109 containerd[1532]: time="2025-01-29T12:16:11.733044115Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:11.735457 containerd[1532]: time="2025-01-29T12:16:11.735386955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:11.736094 containerd[1532]: time="2025-01-29T12:16:11.735956075Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.49721768s" Jan 29 12:16:11.736094 containerd[1532]: time="2025-01-29T12:16:11.735991955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 12:16:11.753522 containerd[1532]: time="2025-01-29T12:16:11.753490035Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:16:12.565942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555305972.mount: Deactivated successfully. 
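
The var-lib-containerd-tmpmounts-*.mount messages interleaved with these pulls are transient mount units: containerd briefly mounts layer data under /var/lib/containerd/tmpmounts while unpacking an image, and systemd tracks the mount and then reports it deactivated, as above. Any that linger after a failed unpack can be listed:

    # Transient unpack mounts normally deactivate on their own, as in this log.
    systemctl list-units --type=mount 'var-lib-containerd-tmpmounts-*' --no-legend
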
Jan 29 12:16:13.388584 containerd[1532]: time="2025-01-29T12:16:13.388533115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:13.389005 containerd[1532]: time="2025-01-29T12:16:13.388958915Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 29 12:16:13.390005 containerd[1532]: time="2025-01-29T12:16:13.389954595Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:13.393059 containerd[1532]: time="2025-01-29T12:16:13.393004475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:13.396837 containerd[1532]: time="2025-01-29T12:16:13.396673435Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.64300796s" Jan 29 12:16:13.396837 containerd[1532]: time="2025-01-29T12:16:13.396718675Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 12:16:13.415026 containerd[1532]: time="2025-01-29T12:16:13.414989475Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:16:13.965554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863821691.mount: Deactivated successfully. 
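
Note the version skew visible here: pause:3.9 is pulled for the kubelet's preloaded image set, while the CRI config dumped earlier pins SandboxImage to registry.k8s.io/pause:3.8, which is the image actually fetched when the control-plane sandboxes are created further down. Aligning the two is a one-line change in the containerd config; a sketch, assuming the default config path:

    # /etc/containerd/config.toml (default path; an assumption for this host):
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.9"
    grep -n 'sandbox_image' /etc/containerd/config.toml
    systemctl restart containerd   # the new value takes effect on restart
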
Jan 29 12:16:13.968792 containerd[1532]: time="2025-01-29T12:16:13.968745915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:13.969153 containerd[1532]: time="2025-01-29T12:16:13.969127075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 29 12:16:13.969920 containerd[1532]: time="2025-01-29T12:16:13.969886315Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:13.972204 containerd[1532]: time="2025-01-29T12:16:13.972145115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:13.973279 containerd[1532]: time="2025-01-29T12:16:13.973163195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 557.92032ms" Jan 29 12:16:13.973279 containerd[1532]: time="2025-01-29T12:16:13.973197835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 29 12:16:13.990789 containerd[1532]: time="2025-01-29T12:16:13.990744075Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:16:14.656846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839890.mount: Deactivated successfully. Jan 29 12:16:16.474945 containerd[1532]: time="2025-01-29T12:16:16.474893835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:16.475448 containerd[1532]: time="2025-01-29T12:16:16.475407995Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 29 12:16:16.476808 containerd[1532]: time="2025-01-29T12:16:16.476190475Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:16.479894 containerd[1532]: time="2025-01-29T12:16:16.479839195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:16.481037 containerd[1532]: time="2025-01-29T12:16:16.481007835Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.49020756s" Jan 29 12:16:16.481218 containerd[1532]: time="2025-01-29T12:16:16.481107155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 29 12:16:18.631013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
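
"restart counter is at 2" above is the unit's Restart= logic at work: each automatic restart after a failed start increments the counter until a start succeeds or the start-limit settings intervene. The counter and the effective policy are visible via systemctl:

    # Inspect the kubelet unit's restart bookkeeping referenced above.
    systemctl show kubelet.service -p NRestarts -p Restart -p RestartUSec
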
Jan 29 12:16:18.639938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:16:18.818014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:16:18.820690 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:16:18.855119 kubelet[2222]: E0129 12:16:18.855076 2222 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:16:18.857697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:16:18.857896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:16:20.728524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:16:20.740007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:16:20.757901 systemd[1]: Reloading requested from client PID 2239 ('systemctl') (unit session-7.scope)... Jan 29 12:16:20.757919 systemd[1]: Reloading... Jan 29 12:16:20.817811 zram_generator::config[2281]: No configuration found. Jan 29 12:16:20.955027 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:16:21.005272 systemd[1]: Reloading finished in 247 ms. Jan 29 12:16:21.036453 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:16:21.036519 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:16:21.036748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:16:21.038752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:16:21.127902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:16:21.131576 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:16:21.171969 kubelet[2336]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:16:21.171969 kubelet[2336]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:16:21.171969 kubelet[2336]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
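
The deprecation warnings above ask for the flags to move from the unit file into the kubelet config file. For the kubelet version in this log (v1.30.x), --container-runtime-endpoint and --volume-plugin-dir have direct KubeletConfiguration fields; --pod-infra-container-image has no config-file equivalent, since (per its own warning) the sandbox image will come from the CRI. A sketch of the equivalents, with values copied from messages elsewhere in this log:

    # KubeletConfiguration equivalents of the deprecated flags (sketch only).
    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
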
Jan 29 12:16:21.172295 kubelet[2336]: I0129 12:16:21.172124 2336 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:16:22.182796 kubelet[2336]: I0129 12:16:22.182543 2336 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:16:22.182796 kubelet[2336]: I0129 12:16:22.182569 2336 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:16:22.182796 kubelet[2336]: I0129 12:16:22.182762 2336 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:16:22.264812 kubelet[2336]: I0129 12:16:22.264763 2336 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:16:22.264925 kubelet[2336]: E0129 12:16:22.264878 2336 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:22.274012 kubelet[2336]: I0129 12:16:22.273987 2336 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:16:22.275171 kubelet[2336]: I0129 12:16:22.275139 2336 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:16:22.275328 kubelet[2336]: I0129 12:16:22.275173 2336 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:16:22.275413 kubelet[2336]: I0129 12:16:22.275401 2336 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:16:22.275413 kubelet[2336]: I0129 12:16:22.275411 2336 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:16:22.275671 kubelet[2336]: I0129 12:16:22.275658 2336 state_mem.go:36] "Initialized new in-memory state store" Jan 29 
12:16:22.276914 kubelet[2336]: I0129 12:16:22.276892 2336 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:16:22.276914 kubelet[2336]: I0129 12:16:22.276914 2336 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:16:22.277026 kubelet[2336]: I0129 12:16:22.277010 2336 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:16:22.277156 kubelet[2336]: I0129 12:16:22.277139 2336 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:16:22.278411 kubelet[2336]: I0129 12:16:22.278106 2336 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:16:22.278487 kubelet[2336]: I0129 12:16:22.278476 2336 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:16:22.278593 kubelet[2336]: W0129 12:16:22.278574 2336 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:16:22.278691 kubelet[2336]: W0129 12:16:22.278643 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:22.278723 kubelet[2336]: E0129 12:16:22.278700 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:22.278723 kubelet[2336]: W0129 12:16:22.278658 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:22.278768 kubelet[2336]: E0129 12:16:22.278730 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:22.279354 kubelet[2336]: I0129 12:16:22.279321 2336 server.go:1264] "Started kubelet" Jan 29 12:16:22.279751 kubelet[2336]: I0129 12:16:22.279709 2336 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:16:22.280127 kubelet[2336]: I0129 12:16:22.280108 2336 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:16:22.280225 kubelet[2336]: I0129 12:16:22.280207 2336 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:16:22.280388 kubelet[2336]: I0129 12:16:22.280334 2336 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:16:22.281454 kubelet[2336]: I0129 12:16:22.281300 2336 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:16:22.285448 kubelet[2336]: I0129 12:16:22.284669 2336 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:16:22.285448 kubelet[2336]: I0129 12:16:22.285337 2336 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:16:22.286646 kubelet[2336]: I0129 12:16:22.286248 2336 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:16:22.286646 kubelet[2336]: E0129 
12:16:22.284187 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f28eda20b3593 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 12:16:22.279304595 +0000 UTC m=+1.144864921,LastTimestamp:2025-01-29 12:16:22.279304595 +0000 UTC m=+1.144864921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 12:16:22.286646 kubelet[2336]: E0129 12:16:22.286425 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="200ms" Jan 29 12:16:22.286646 kubelet[2336]: W0129 12:16:22.286578 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:22.286646 kubelet[2336]: E0129 12:16:22.286616 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:22.294223 kubelet[2336]: I0129 12:16:22.294177 2336 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:16:22.294223 kubelet[2336]: I0129 12:16:22.294205 2336 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:16:22.294336 kubelet[2336]: I0129 12:16:22.294273 2336 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:16:22.301935 kubelet[2336]: I0129 12:16:22.301893 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:16:22.303241 kubelet[2336]: I0129 12:16:22.303221 2336 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:16:22.303361 kubelet[2336]: I0129 12:16:22.303339 2336 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:16:22.303432 kubelet[2336]: I0129 12:16:22.303422 2336 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:16:22.303527 kubelet[2336]: E0129 12:16:22.303510 2336 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:16:22.309368 kubelet[2336]: W0129 12:16:22.308758 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:22.309450 kubelet[2336]: E0129 12:16:22.309385 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:22.311717 kubelet[2336]: I0129 12:16:22.311698 2336 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:16:22.311824 kubelet[2336]: I0129 12:16:22.311813 2336 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:16:22.311899 kubelet[2336]: I0129 12:16:22.311891 2336 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:16:22.313887 kubelet[2336]: I0129 12:16:22.313872 2336 policy_none.go:49] "None policy: Start" Jan 29 12:16:22.314791 kubelet[2336]: I0129 12:16:22.314551 2336 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:16:22.314791 kubelet[2336]: I0129 12:16:22.314575 2336 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:16:22.318453 kubelet[2336]: I0129 12:16:22.318426 2336 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:16:22.319959 kubelet[2336]: I0129 12:16:22.318623 2336 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:16:22.319959 kubelet[2336]: I0129 12:16:22.318718 2336 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:16:22.320916 kubelet[2336]: E0129 12:16:22.320896 2336 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 12:16:22.386040 kubelet[2336]: I0129 12:16:22.386011 2336 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:16:22.386690 kubelet[2336]: E0129 12:16:22.386658 2336 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jan 29 12:16:22.404051 kubelet[2336]: I0129 12:16:22.403994 2336 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 12:16:22.404961 kubelet[2336]: I0129 12:16:22.404937 2336 topology_manager.go:215] "Topology Admit Handler" podUID="3825300370e749af74d4912a3991ed2b" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 12:16:22.405952 kubelet[2336]: I0129 12:16:22.405754 2336 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" 
podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 12:16:22.487736 kubelet[2336]: I0129 12:16:22.487553 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:22.487736 kubelet[2336]: E0129 12:16:22.487564 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="400ms" Jan 29 12:16:22.487736 kubelet[2336]: I0129 12:16:22.487591 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:22.487736 kubelet[2336]: I0129 12:16:22.487619 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:22.487736 kubelet[2336]: I0129 12:16:22.487639 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 12:16:22.487940 kubelet[2336]: I0129 12:16:22.487654 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3825300370e749af74d4912a3991ed2b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3825300370e749af74d4912a3991ed2b\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:16:22.487940 kubelet[2336]: I0129 12:16:22.487669 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3825300370e749af74d4912a3991ed2b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3825300370e749af74d4912a3991ed2b\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:16:22.487940 kubelet[2336]: I0129 12:16:22.487685 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:22.487940 kubelet[2336]: I0129 12:16:22.487700 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:22.487940 kubelet[2336]: I0129 12:16:22.487715 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3825300370e749af74d4912a3991ed2b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3825300370e749af74d4912a3991ed2b\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:16:22.588438 kubelet[2336]: I0129 12:16:22.588398 2336 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:16:22.588754 kubelet[2336]: E0129 12:16:22.588730 2336 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jan 29 12:16:22.648211 kubelet[2336]: E0129 12:16:22.648120 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f28eda20b3593 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 12:16:22.279304595 +0000 UTC m=+1.144864921,LastTimestamp:2025-01-29 12:16:22.279304595 +0000 UTC m=+1.144864921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 12:16:22.709032 kubelet[2336]: E0129 12:16:22.709004 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:22.709615 kubelet[2336]: E0129 12:16:22.709500 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:22.709673 containerd[1532]: time="2025-01-29T12:16:22.709630755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 12:16:22.710522 containerd[1532]: time="2025-01-29T12:16:22.710320715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3825300370e749af74d4912a3991ed2b,Namespace:kube-system,Attempt:0,}" Jan 29 12:16:22.711807 kubelet[2336]: E0129 12:16:22.711785 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:22.712241 containerd[1532]: time="2025-01-29T12:16:22.712211915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 12:16:22.888563 kubelet[2336]: E0129 12:16:22.888525 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="800ms" Jan 29 12:16:22.989990 kubelet[2336]: I0129 12:16:22.989951 2336 kubelet_node_status.go:73] 
"Attempting to register node" node="localhost" Jan 29 12:16:22.990284 kubelet[2336]: E0129 12:16:22.990253 2336 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jan 29 12:16:23.145423 kubelet[2336]: W0129 12:16:23.145277 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:23.145423 kubelet[2336]: E0129 12:16:23.145330 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:23.180257 kubelet[2336]: W0129 12:16:23.177994 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:23.180257 kubelet[2336]: E0129 12:16:23.178045 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:23.258360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2465450639.mount: Deactivated successfully. Jan 29 12:16:23.262452 containerd[1532]: time="2025-01-29T12:16:23.262395435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:16:23.263920 containerd[1532]: time="2025-01-29T12:16:23.263880875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:16:23.264951 containerd[1532]: time="2025-01-29T12:16:23.264915915Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:16:23.265796 containerd[1532]: time="2025-01-29T12:16:23.265735995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 12:16:23.266368 containerd[1532]: time="2025-01-29T12:16:23.266301515Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:16:23.267851 containerd[1532]: time="2025-01-29T12:16:23.267823075Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:16:23.268431 containerd[1532]: time="2025-01-29T12:16:23.268383635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:16:23.270596 containerd[1532]: time="2025-01-29T12:16:23.270555635Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:16:23.272731 containerd[1532]: time="2025-01-29T12:16:23.272698515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.99208ms" Jan 29 12:16:23.273345 containerd[1532]: time="2025-01-29T12:16:23.273075755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.79336ms" Jan 29 12:16:23.276961 containerd[1532]: time="2025-01-29T12:16:23.276769475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.3792ms" Jan 29 12:16:23.444055 containerd[1532]: time="2025-01-29T12:16:23.443792235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:16:23.444055 containerd[1532]: time="2025-01-29T12:16:23.443855715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:16:23.444055 containerd[1532]: time="2025-01-29T12:16:23.443871235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:23.444697 containerd[1532]: time="2025-01-29T12:16:23.444561995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:23.446089 containerd[1532]: time="2025-01-29T12:16:23.446023195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:16:23.446168 containerd[1532]: time="2025-01-29T12:16:23.446067595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:16:23.446168 containerd[1532]: time="2025-01-29T12:16:23.446098595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:23.446209 containerd[1532]: time="2025-01-29T12:16:23.446177435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:23.446825 containerd[1532]: time="2025-01-29T12:16:23.446558435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:16:23.446825 containerd[1532]: time="2025-01-29T12:16:23.446604955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:16:23.446825 containerd[1532]: time="2025-01-29T12:16:23.446616155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:23.446825 containerd[1532]: time="2025-01-29T12:16:23.446689235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:23.491254 containerd[1532]: time="2025-01-29T12:16:23.491132555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d29862488b7aa3a2a53dc3b7512fb41adef3d40cad2add0d844a511ea8d19baa\"" Jan 29 12:16:23.492890 kubelet[2336]: E0129 12:16:23.492859 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:23.495137 containerd[1532]: time="2025-01-29T12:16:23.495032195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d18b15374cd15e30b0e34eaf76921762151e0097f37da24d813c5caeb21e328\"" Jan 29 12:16:23.496239 containerd[1532]: time="2025-01-29T12:16:23.496210515Z" level=info msg="CreateContainer within sandbox \"d29862488b7aa3a2a53dc3b7512fb41adef3d40cad2add0d844a511ea8d19baa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:16:23.496337 kubelet[2336]: E0129 12:16:23.496253 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:23.499794 containerd[1532]: time="2025-01-29T12:16:23.498680475Z" level=info msg="CreateContainer within sandbox \"3d18b15374cd15e30b0e34eaf76921762151e0097f37da24d813c5caeb21e328\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:16:23.500151 containerd[1532]: time="2025-01-29T12:16:23.500124315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3825300370e749af74d4912a3991ed2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d720483f75ca117b94d747d025157af34124a315b9ee64599d2e004e3c773e0b\"" Jan 29 12:16:23.500934 kubelet[2336]: E0129 12:16:23.500902 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:23.503345 containerd[1532]: time="2025-01-29T12:16:23.503304635Z" level=info msg="CreateContainer within sandbox \"d720483f75ca117b94d747d025157af34124a315b9ee64599d2e004e3c773e0b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:16:23.516804 containerd[1532]: time="2025-01-29T12:16:23.516746155Z" level=info msg="CreateContainer within sandbox \"3d18b15374cd15e30b0e34eaf76921762151e0097f37da24d813c5caeb21e328\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"94b4d6b433f28c1e5993c1840a9ad9fd4dea7fb2e7d7f5b8a0c9a134070ba7ff\"" Jan 29 12:16:23.517649 containerd[1532]: time="2025-01-29T12:16:23.517623435Z" level=info msg="StartContainer for \"94b4d6b433f28c1e5993c1840a9ad9fd4dea7fb2e7d7f5b8a0c9a134070ba7ff\"" Jan 29 12:16:23.518540 containerd[1532]: 
time="2025-01-29T12:16:23.518500555Z" level=info msg="CreateContainer within sandbox \"d29862488b7aa3a2a53dc3b7512fb41adef3d40cad2add0d844a511ea8d19baa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"18b5657b60383eb8df75ed7a861da25e1beaae58937c5fa8084b08dbd7240b6c\"" Jan 29 12:16:23.519833 containerd[1532]: time="2025-01-29T12:16:23.518898755Z" level=info msg="StartContainer for \"18b5657b60383eb8df75ed7a861da25e1beaae58937c5fa8084b08dbd7240b6c\"" Jan 29 12:16:23.520607 containerd[1532]: time="2025-01-29T12:16:23.520577875Z" level=info msg="CreateContainer within sandbox \"d720483f75ca117b94d747d025157af34124a315b9ee64599d2e004e3c773e0b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e0f864741bbcf0212ba274216bfbbb2570addbf69f974f74ad7ca36a67a9b4ac\"" Jan 29 12:16:23.521059 containerd[1532]: time="2025-01-29T12:16:23.521034035Z" level=info msg="StartContainer for \"e0f864741bbcf0212ba274216bfbbb2570addbf69f974f74ad7ca36a67a9b4ac\"" Jan 29 12:16:23.602793 containerd[1532]: time="2025-01-29T12:16:23.600670515Z" level=info msg="StartContainer for \"18b5657b60383eb8df75ed7a861da25e1beaae58937c5fa8084b08dbd7240b6c\" returns successfully" Jan 29 12:16:23.602793 containerd[1532]: time="2025-01-29T12:16:23.600687555Z" level=info msg="StartContainer for \"94b4d6b433f28c1e5993c1840a9ad9fd4dea7fb2e7d7f5b8a0c9a134070ba7ff\" returns successfully" Jan 29 12:16:23.602793 containerd[1532]: time="2025-01-29T12:16:23.600694035Z" level=info msg="StartContainer for \"e0f864741bbcf0212ba274216bfbbb2570addbf69f974f74ad7ca36a67a9b4ac\" returns successfully" Jan 29 12:16:23.661289 kubelet[2336]: W0129 12:16:23.661198 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:23.661289 kubelet[2336]: E0129 12:16:23.661266 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:23.676899 kubelet[2336]: W0129 12:16:23.676812 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:23.676899 kubelet[2336]: E0129 12:16:23.676871 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 12:16:23.689191 kubelet[2336]: E0129 12:16:23.689119 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="1.6s" Jan 29 12:16:23.794193 kubelet[2336]: I0129 12:16:23.794001 2336 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:16:24.316007 kubelet[2336]: E0129 12:16:24.315840 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:24.318185 kubelet[2336]: E0129 12:16:24.318088 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:24.320404 kubelet[2336]: E0129 12:16:24.320357 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:25.323096 kubelet[2336]: E0129 12:16:25.323035 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:25.374165 kubelet[2336]: E0129 12:16:25.374122 2336 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 12:16:25.470416 kubelet[2336]: I0129 12:16:25.470203 2336 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 12:16:25.479862 kubelet[2336]: E0129 12:16:25.479828 2336 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:16:25.580819 kubelet[2336]: E0129 12:16:25.580685 2336 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:16:25.681267 kubelet[2336]: E0129 12:16:25.681228 2336 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:16:25.782013 kubelet[2336]: E0129 12:16:25.781975 2336 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:16:25.882522 kubelet[2336]: E0129 12:16:25.882488 2336 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:16:25.983066 kubelet[2336]: E0129 12:16:25.983026 2336 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:16:26.083761 kubelet[2336]: E0129 12:16:26.083714 2336 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:16:26.184474 kubelet[2336]: E0129 12:16:26.184368 2336 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:16:26.279675 kubelet[2336]: I0129 12:16:26.279631 2336 apiserver.go:52] "Watching apiserver" Jan 29 12:16:26.285595 kubelet[2336]: I0129 12:16:26.285559 2336 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:16:26.328018 kubelet[2336]: E0129 12:16:26.327986 2336 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 12:16:26.328450 kubelet[2336]: E0129 12:16:26.328432 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:27.356957 kubelet[2336]: E0129 12:16:27.356928 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:27.589061 systemd[1]: Reloading requested from client PID 2609 
('systemctl') (unit session-7.scope)... Jan 29 12:16:27.589078 systemd[1]: Reloading... Jan 29 12:16:27.645798 zram_generator::config[2651]: No configuration found. Jan 29 12:16:27.728923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:16:27.783368 systemd[1]: Reloading finished in 194 ms. Jan 29 12:16:27.808168 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:16:27.817448 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:16:27.817749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:16:27.828074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:16:27.915565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:16:27.919592 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:16:27.955091 kubelet[2700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:16:27.955091 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:16:27.955091 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:16:27.955456 kubelet[2700]: I0129 12:16:27.955131 2700 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:16:27.959009 kubelet[2700]: I0129 12:16:27.958977 2700 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:16:27.959009 kubelet[2700]: I0129 12:16:27.959003 2700 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:16:27.959170 kubelet[2700]: I0129 12:16:27.959155 2700 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:16:27.960451 kubelet[2700]: I0129 12:16:27.960430 2700 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:16:27.961635 kubelet[2700]: I0129 12:16:27.961612 2700 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:16:27.967998 kubelet[2700]: I0129 12:16:27.967972 2700 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:16:27.968407 kubelet[2700]: I0129 12:16:27.968367 2700 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:16:27.968552 kubelet[2700]: I0129 12:16:27.968401 2700 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:16:27.968637 kubelet[2700]: I0129 12:16:27.968554 2700 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:16:27.968637 kubelet[2700]: I0129 12:16:27.968563 2700 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:16:27.968637 kubelet[2700]: I0129 12:16:27.968593 2700 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:16:27.968695 kubelet[2700]: I0129 12:16:27.968679 2700 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:16:27.968695 kubelet[2700]: I0129 12:16:27.968691 2700 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:16:27.968745 kubelet[2700]: I0129 12:16:27.968716 2700 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:16:27.968745 kubelet[2700]: I0129 12:16:27.968728 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:16:27.969808 kubelet[2700]: I0129 12:16:27.969429 2700 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:16:27.969808 kubelet[2700]: I0129 12:16:27.969605 2700 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:16:27.970401 kubelet[2700]: I0129 12:16:27.970008 2700 server.go:1264] "Started kubelet" Jan 29 12:16:27.973796 kubelet[2700]: I0129 12:16:27.971077 2700 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:16:27.973796 kubelet[2700]: I0129 12:16:27.971391 2700 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 
12:16:27.973796 kubelet[2700]: I0129 12:16:27.971433 2700 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:16:27.973796 kubelet[2700]: I0129 12:16:27.971758 2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:16:27.973796 kubelet[2700]: I0129 12:16:27.972451 2700 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:16:27.977001 kubelet[2700]: I0129 12:16:27.976978 2700 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:16:27.977091 kubelet[2700]: I0129 12:16:27.977074 2700 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:16:27.977234 kubelet[2700]: I0129 12:16:27.977214 2700 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:16:27.979209 kubelet[2700]: E0129 12:16:27.979188 2700 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:16:27.992172 kubelet[2700]: I0129 12:16:27.992113 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:16:27.993159 kubelet[2700]: I0129 12:16:27.993127 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:16:27.993217 kubelet[2700]: I0129 12:16:27.993173 2700 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:16:27.993217 kubelet[2700]: I0129 12:16:27.993192 2700 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:16:27.993971 kubelet[2700]: E0129 12:16:27.993823 2700 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:16:27.996435 kubelet[2700]: I0129 12:16:27.994420 2700 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:16:27.996435 kubelet[2700]: I0129 12:16:27.994518 2700 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:16:28.004114 kubelet[2700]: I0129 12:16:28.004087 2700 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:16:28.039494 kubelet[2700]: I0129 12:16:28.039464 2700 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:16:28.039494 kubelet[2700]: I0129 12:16:28.039487 2700 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:16:28.039631 kubelet[2700]: I0129 12:16:28.039507 2700 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:16:28.039678 kubelet[2700]: I0129 12:16:28.039662 2700 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:16:28.039702 kubelet[2700]: I0129 12:16:28.039677 2700 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:16:28.039702 kubelet[2700]: I0129 12:16:28.039695 2700 policy_none.go:49] "None policy: Start" Jan 29 12:16:28.041520 kubelet[2700]: I0129 12:16:28.040329 2700 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:16:28.041520 kubelet[2700]: I0129 12:16:28.040355 2700 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:16:28.041520 kubelet[2700]: I0129 12:16:28.040477 2700 state_mem.go:75] "Updated machine memory state" Jan 29 12:16:28.041857 kubelet[2700]: I0129 12:16:28.041481 2700 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:16:28.042101 
kubelet[2700]: I0129 12:16:28.042061 2700 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:16:28.042257 kubelet[2700]: I0129 12:16:28.042245 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:16:28.078551 kubelet[2700]: I0129 12:16:28.078489 2700 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:16:28.085857 kubelet[2700]: I0129 12:16:28.085742 2700 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 12:16:28.085988 kubelet[2700]: I0129 12:16:28.085933 2700 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 12:16:28.093980 kubelet[2700]: I0129 12:16:28.093944 2700 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 12:16:28.094296 kubelet[2700]: I0129 12:16:28.094204 2700 topology_manager.go:215] "Topology Admit Handler" podUID="3825300370e749af74d4912a3991ed2b" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 12:16:28.094296 kubelet[2700]: I0129 12:16:28.094247 2700 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 12:16:28.099986 kubelet[2700]: E0129 12:16:28.099927 2700 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 12:16:28.178363 kubelet[2700]: I0129 12:16:28.178242 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3825300370e749af74d4912a3991ed2b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3825300370e749af74d4912a3991ed2b\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:16:28.178363 kubelet[2700]: I0129 12:16:28.178292 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:28.178363 kubelet[2700]: I0129 12:16:28.178314 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:28.178363 kubelet[2700]: I0129 12:16:28.178332 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:28.178363 kubelet[2700]: I0129 12:16:28.178349 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 
12:16:28.178529 kubelet[2700]: I0129 12:16:28.178365 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3825300370e749af74d4912a3991ed2b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3825300370e749af74d4912a3991ed2b\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:16:28.178529 kubelet[2700]: I0129 12:16:28.178382 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3825300370e749af74d4912a3991ed2b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3825300370e749af74d4912a3991ed2b\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:16:28.178529 kubelet[2700]: I0129 12:16:28.178396 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:28.178529 kubelet[2700]: I0129 12:16:28.178416 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:16:28.400518 kubelet[2700]: E0129 12:16:28.400460 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:28.400912 kubelet[2700]: E0129 12:16:28.400879 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:28.401067 kubelet[2700]: E0129 12:16:28.401034 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:28.969716 kubelet[2700]: I0129 12:16:28.969676 2700 apiserver.go:52] "Watching apiserver" Jan 29 12:16:28.978214 kubelet[2700]: I0129 12:16:28.978136 2700 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:16:29.017252 kubelet[2700]: E0129 12:16:29.016927 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:29.017252 kubelet[2700]: E0129 12:16:29.017033 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:29.022879 kubelet[2700]: E0129 12:16:29.022828 2700 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 12:16:29.023482 kubelet[2700]: E0129 12:16:29.023132 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:29.044308 kubelet[2700]: 
I0129 12:16:29.044206 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.044174978 podStartE2EDuration="1.044174978s" podCreationTimestamp="2025-01-29 12:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:16:29.036819904 +0000 UTC m=+1.114031350" watchObservedRunningTime="2025-01-29 12:16:29.044174978 +0000 UTC m=+1.121386424" Jan 29 12:16:29.044464 kubelet[2700]: I0129 12:16:29.044331 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.044326499 podStartE2EDuration="2.044326499s" podCreationTimestamp="2025-01-29 12:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:16:29.044310939 +0000 UTC m=+1.121522385" watchObservedRunningTime="2025-01-29 12:16:29.044326499 +0000 UTC m=+1.121537945" Jan 29 12:16:29.059631 kubelet[2700]: I0129 12:16:29.059569 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.059552891 podStartE2EDuration="1.059552891s" podCreationTimestamp="2025-01-29 12:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:16:29.052057857 +0000 UTC m=+1.129269263" watchObservedRunningTime="2025-01-29 12:16:29.059552891 +0000 UTC m=+1.136764337" Jan 29 12:16:30.018399 kubelet[2700]: E0129 12:16:30.018353 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:30.023799 kubelet[2700]: E0129 12:16:30.021111 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:31.019353 kubelet[2700]: E0129 12:16:31.019316 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:32.762703 sudo[1740]: pam_unix(sudo:session): session closed for user root Jan 29 12:16:32.767119 sshd[1733]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:32.770759 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:46730.service: Deactivated successfully. Jan 29 12:16:32.772728 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:16:32.773430 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:16:32.774346 systemd-logind[1510]: Removed session 7. 
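The dns.go:153 "Nameserver limits exceeded" records that recur throughout this log come from the kubelet's pod DNS handling: the host's /etc/resolv.conf lists more nameservers than the resolver limit of three (the glibc MAXNS cap), so the kubelet keeps only the first three and logs the applied line each time it assembles a pod's DNS configuration. A minimal Python sketch of that truncation, assuming a hypothetical fourth host nameserver (the log only shows the three that survive):

    # Illustrative sketch of the kubelet's nameserver cap, not kubelet source.
    MAX_NAMESERVERS = 3  # glibc resolver limit (MAXNS); the kubelet applies the same cap

    def apply_nameserver_limit(nameservers):
        """Split a resolv.conf nameserver list into applied and omitted parts."""
        return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

    # 10.0.0.2 is a hypothetical fourth entry; the log shows only the applied three.
    applied, omitted = apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "10.0.0.2"])
    if omitted:
        print("Nameserver limits exceeded, applied line:", " ".join(applied))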
Jan 29 12:16:34.028833 kubelet[2700]: E0129 12:16:34.028765 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:35.024768 kubelet[2700]: E0129 12:16:35.024739 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:35.447758 kubelet[2700]: E0129 12:16:35.447721 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:36.026205 kubelet[2700]: E0129 12:16:36.025911 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:39.710172 kubelet[2700]: E0129 12:16:39.709858 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:40.032356 kubelet[2700]: E0129 12:16:40.032249 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:41.534578 update_engine[1515]: I20250129 12:16:41.534511 1515 update_attempter.cc:509] Updating boot flags... Jan 29 12:16:41.559295 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2796) Jan 29 12:16:41.584915 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2794) Jan 29 12:16:41.611961 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2794) Jan 29 12:16:42.362112 kubelet[2700]: I0129 12:16:42.362063 2700 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:16:42.388969 containerd[1532]: time="2025-01-29T12:16:42.388912293Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
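The runtime-config records around this point show the kubelet receiving the node's pod CIDR (192.168.0.0/24) from the API server and pushing it to containerd over CRI; containerd notes that no CNI config exists yet, since the Calico install triggered by the tigera-operator below is what eventually drops a config into /etc/cni/net.d. For scale, a quick check (illustrative only, using Python's standard ipaddress module) of what a /24 pod CIDR gives this node:

    # Illustrative only: size of the per-node pod CIDR just handed to containerd.
    import ipaddress

    pod_cidr = ipaddress.ip_network("192.168.0.0/24")
    print(pod_cidr.num_addresses - 2)  # 254 usable pod addresses on this node
    print(next(pod_cidr.hosts()))      # first allocatable pod IP: 192.168.0.1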
Jan 29 12:16:42.390582 kubelet[2700]: I0129 12:16:42.389717 2700 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:16:43.095644 kubelet[2700]: I0129 12:16:43.095604 2700 topology_manager.go:215] "Topology Admit Handler" podUID="1aaefb32-ccc5-4d90-a73e-36d1478daa1a" podNamespace="kube-system" podName="kube-proxy-r4btk" Jan 29 12:16:43.262653 kubelet[2700]: I0129 12:16:43.262605 2700 topology_manager.go:215] "Topology Admit Handler" podUID="561402f6-cef3-416b-b59f-a2c781d99504" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-2srtl" Jan 29 12:16:43.274709 kubelet[2700]: I0129 12:16:43.274660 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1aaefb32-ccc5-4d90-a73e-36d1478daa1a-kube-proxy\") pod \"kube-proxy-r4btk\" (UID: \"1aaefb32-ccc5-4d90-a73e-36d1478daa1a\") " pod="kube-system/kube-proxy-r4btk" Jan 29 12:16:43.274709 kubelet[2700]: I0129 12:16:43.274700 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aaefb32-ccc5-4d90-a73e-36d1478daa1a-xtables-lock\") pod \"kube-proxy-r4btk\" (UID: \"1aaefb32-ccc5-4d90-a73e-36d1478daa1a\") " pod="kube-system/kube-proxy-r4btk" Jan 29 12:16:43.274709 kubelet[2700]: I0129 12:16:43.274718 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aaefb32-ccc5-4d90-a73e-36d1478daa1a-lib-modules\") pod \"kube-proxy-r4btk\" (UID: \"1aaefb32-ccc5-4d90-a73e-36d1478daa1a\") " pod="kube-system/kube-proxy-r4btk" Jan 29 12:16:43.274931 kubelet[2700]: I0129 12:16:43.274787 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54pt\" (UniqueName: \"kubernetes.io/projected/1aaefb32-ccc5-4d90-a73e-36d1478daa1a-kube-api-access-v54pt\") pod \"kube-proxy-r4btk\" (UID: \"1aaefb32-ccc5-4d90-a73e-36d1478daa1a\") " pod="kube-system/kube-proxy-r4btk" Jan 29 12:16:43.375388 kubelet[2700]: I0129 12:16:43.375276 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42rl\" (UniqueName: \"kubernetes.io/projected/561402f6-cef3-416b-b59f-a2c781d99504-kube-api-access-l42rl\") pod \"tigera-operator-7bc55997bb-2srtl\" (UID: \"561402f6-cef3-416b-b59f-a2c781d99504\") " pod="tigera-operator/tigera-operator-7bc55997bb-2srtl" Jan 29 12:16:43.375388 kubelet[2700]: I0129 12:16:43.375337 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/561402f6-cef3-416b-b59f-a2c781d99504-var-lib-calico\") pod \"tigera-operator-7bc55997bb-2srtl\" (UID: \"561402f6-cef3-416b-b59f-a2c781d99504\") " pod="tigera-operator/tigera-operator-7bc55997bb-2srtl" Jan 29 12:16:43.397939 kubelet[2700]: E0129 12:16:43.397897 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:43.401660 containerd[1532]: time="2025-01-29T12:16:43.401624234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r4btk,Uid:1aaefb32-ccc5-4d90-a73e-36d1478daa1a,Namespace:kube-system,Attempt:0,}" Jan 29 12:16:43.422710 containerd[1532]: time="2025-01-29T12:16:43.422485479Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:16:43.422710 containerd[1532]: time="2025-01-29T12:16:43.422540279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:16:43.422710 containerd[1532]: time="2025-01-29T12:16:43.422555599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:43.422710 containerd[1532]: time="2025-01-29T12:16:43.422638559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:43.451088 containerd[1532]: time="2025-01-29T12:16:43.451036794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r4btk,Uid:1aaefb32-ccc5-4d90-a73e-36d1478daa1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c7a63ec09ccb8f25b31692c2b9e6eea4948ef82c95775237e59441155267145\"" Jan 29 12:16:43.454023 kubelet[2700]: E0129 12:16:43.453994 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:43.460080 containerd[1532]: time="2025-01-29T12:16:43.460039151Z" level=info msg="CreateContainer within sandbox \"5c7a63ec09ccb8f25b31692c2b9e6eea4948ef82c95775237e59441155267145\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:16:43.472320 containerd[1532]: time="2025-01-29T12:16:43.472274200Z" level=info msg="CreateContainer within sandbox \"5c7a63ec09ccb8f25b31692c2b9e6eea4948ef82c95775237e59441155267145\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3b6b32cae6ea2bdb1ac869050441c78408ec00b4a6f55d3c99870c5c3d012af6\"" Jan 29 12:16:43.474367 containerd[1532]: time="2025-01-29T12:16:43.474339808Z" level=info msg="StartContainer for \"3b6b32cae6ea2bdb1ac869050441c78408ec00b4a6f55d3c99870c5c3d012af6\"" Jan 29 12:16:43.531880 containerd[1532]: time="2025-01-29T12:16:43.528952349Z" level=info msg="StartContainer for \"3b6b32cae6ea2bdb1ac869050441c78408ec00b4a6f55d3c99870c5c3d012af6\" returns successfully" Jan 29 12:16:43.569779 containerd[1532]: time="2025-01-29T12:16:43.568994351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-2srtl,Uid:561402f6-cef3-416b-b59f-a2c781d99504,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:16:43.590255 containerd[1532]: time="2025-01-29T12:16:43.588917712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:16:43.590255 containerd[1532]: time="2025-01-29T12:16:43.588988032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:16:43.590255 containerd[1532]: time="2025-01-29T12:16:43.589018112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:43.590255 containerd[1532]: time="2025-01-29T12:16:43.589105393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:43.638466 containerd[1532]: time="2025-01-29T12:16:43.638420872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-2srtl,Uid:561402f6-cef3-416b-b59f-a2c781d99504,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fc75fe7bb8ce64bfbda7188fcb060c5a58ae0995bc34278361972bf1fdda623d\"" Jan 29 12:16:43.641254 containerd[1532]: time="2025-01-29T12:16:43.641057043Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 12:16:44.039686 kubelet[2700]: E0129 12:16:44.039579 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:44.054753 kubelet[2700]: I0129 12:16:44.054708 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r4btk" podStartSLOduration=1.054680902 podStartE2EDuration="1.054680902s" podCreationTimestamp="2025-01-29 12:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:16:44.054618862 +0000 UTC m=+16.131830308" watchObservedRunningTime="2025-01-29 12:16:44.054680902 +0000 UTC m=+16.131892348" Jan 29 12:16:44.574156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415432093.mount: Deactivated successfully. Jan 29 12:16:44.837473 containerd[1532]: time="2025-01-29T12:16:44.837325670Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:44.838694 containerd[1532]: time="2025-01-29T12:16:44.838643195Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 12:16:44.839412 containerd[1532]: time="2025-01-29T12:16:44.839379038Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:44.841631 containerd[1532]: time="2025-01-29T12:16:44.841595366Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:44.843167 containerd[1532]: time="2025-01-29T12:16:44.843131892Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.202043929s" Jan 29 12:16:44.843203 containerd[1532]: time="2025-01-29T12:16:44.843168532Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 12:16:44.855403 containerd[1532]: time="2025-01-29T12:16:44.855360858Z" level=info msg="CreateContainer within sandbox \"fc75fe7bb8ce64bfbda7188fcb060c5a58ae0995bc34278361972bf1fdda623d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 12:16:44.868434 containerd[1532]: time="2025-01-29T12:16:44.868391428Z" level=info msg="CreateContainer within sandbox \"fc75fe7bb8ce64bfbda7188fcb060c5a58ae0995bc34278361972bf1fdda623d\" for 
&ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5d0db69c22f60fc14fbece1a1b7076fb5977bc6bd966578f95a159ebfbe394f0\"" Jan 29 12:16:44.871066 containerd[1532]: time="2025-01-29T12:16:44.871034318Z" level=info msg="StartContainer for \"5d0db69c22f60fc14fbece1a1b7076fb5977bc6bd966578f95a159ebfbe394f0\"" Jan 29 12:16:44.923596 containerd[1532]: time="2025-01-29T12:16:44.923539437Z" level=info msg="StartContainer for \"5d0db69c22f60fc14fbece1a1b7076fb5977bc6bd966578f95a159ebfbe394f0\" returns successfully" Jan 29 12:16:45.065630 kubelet[2700]: I0129 12:16:45.065487 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-2srtl" podStartSLOduration=0.854896478 podStartE2EDuration="2.06546972s" podCreationTimestamp="2025-01-29 12:16:43 +0000 UTC" firstStartedPulling="2025-01-29 12:16:43.639645077 +0000 UTC m=+15.716856523" lastFinishedPulling="2025-01-29 12:16:44.850218359 +0000 UTC m=+16.927429765" observedRunningTime="2025-01-29 12:16:45.06541776 +0000 UTC m=+17.142629206" watchObservedRunningTime="2025-01-29 12:16:45.06546972 +0000 UTC m=+17.142681166" Jan 29 12:16:48.638755 kubelet[2700]: I0129 12:16:48.638704 2700 topology_manager.go:215] "Topology Admit Handler" podUID="9f6a20bd-37a2-4985-9acc-6af4b3681f0a" podNamespace="calico-system" podName="calico-typha-58f47c8664-8rhnj" Jan 29 12:16:48.705389 kubelet[2700]: I0129 12:16:48.705326 2700 topology_manager.go:215] "Topology Admit Handler" podUID="86efd541-855c-4114-91ca-ec970d6cdccd" podNamespace="calico-system" podName="calico-node-29xzw" Jan 29 12:16:48.807807 kubelet[2700]: I0129 12:16:48.807750 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86efd541-855c-4114-91ca-ec970d6cdccd-xtables-lock\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.807937 kubelet[2700]: I0129 12:16:48.807903 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/86efd541-855c-4114-91ca-ec970d6cdccd-policysync\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.807965 kubelet[2700]: I0129 12:16:48.807951 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/86efd541-855c-4114-91ca-ec970d6cdccd-cni-net-dir\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.807999 kubelet[2700]: I0129 12:16:48.807980 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/86efd541-855c-4114-91ca-ec970d6cdccd-flexvol-driver-host\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.808037 kubelet[2700]: I0129 12:16:48.808023 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg6m5\" (UniqueName: \"kubernetes.io/projected/9f6a20bd-37a2-4985-9acc-6af4b3681f0a-kube-api-access-qg6m5\") pod \"calico-typha-58f47c8664-8rhnj\" (UID: \"9f6a20bd-37a2-4985-9acc-6af4b3681f0a\") " 
pod="calico-system/calico-typha-58f47c8664-8rhnj" Jan 29 12:16:48.808061 kubelet[2700]: I0129 12:16:48.808043 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2p25\" (UniqueName: \"kubernetes.io/projected/86efd541-855c-4114-91ca-ec970d6cdccd-kube-api-access-b2p25\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.808092 kubelet[2700]: I0129 12:16:48.808060 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/86efd541-855c-4114-91ca-ec970d6cdccd-cni-bin-dir\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.808092 kubelet[2700]: I0129 12:16:48.808082 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/86efd541-855c-4114-91ca-ec970d6cdccd-cni-log-dir\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.808137 kubelet[2700]: I0129 12:16:48.808111 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/86efd541-855c-4114-91ca-ec970d6cdccd-node-certs\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.808137 kubelet[2700]: I0129 12:16:48.808130 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f6a20bd-37a2-4985-9acc-6af4b3681f0a-typha-certs\") pod \"calico-typha-58f47c8664-8rhnj\" (UID: \"9f6a20bd-37a2-4985-9acc-6af4b3681f0a\") " pod="calico-system/calico-typha-58f47c8664-8rhnj" Jan 29 12:16:48.808182 kubelet[2700]: I0129 12:16:48.808147 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/86efd541-855c-4114-91ca-ec970d6cdccd-var-run-calico\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.808406 kubelet[2700]: I0129 12:16:48.808196 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86efd541-855c-4114-91ca-ec970d6cdccd-tigera-ca-bundle\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.808406 kubelet[2700]: I0129 12:16:48.808298 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f6a20bd-37a2-4985-9acc-6af4b3681f0a-tigera-ca-bundle\") pod \"calico-typha-58f47c8664-8rhnj\" (UID: \"9f6a20bd-37a2-4985-9acc-6af4b3681f0a\") " pod="calico-system/calico-typha-58f47c8664-8rhnj" Jan 29 12:16:48.808406 kubelet[2700]: I0129 12:16:48.808329 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86efd541-855c-4114-91ca-ec970d6cdccd-lib-modules\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " 
pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.808406 kubelet[2700]: I0129 12:16:48.808348 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86efd541-855c-4114-91ca-ec970d6cdccd-var-lib-calico\") pod \"calico-node-29xzw\" (UID: \"86efd541-855c-4114-91ca-ec970d6cdccd\") " pod="calico-system/calico-node-29xzw" Jan 29 12:16:48.814121 kubelet[2700]: I0129 12:16:48.814078 2700 topology_manager.go:215] "Topology Admit Handler" podUID="1a277c46-b25e-4ca8-b105-0086d4736c88" podNamespace="calico-system" podName="csi-node-driver-tpz9c" Jan 29 12:16:48.816299 kubelet[2700]: E0129 12:16:48.816012 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpz9c" podUID="1a277c46-b25e-4ca8-b105-0086d4736c88" Jan 29 12:16:48.912781 kubelet[2700]: E0129 12:16:48.911008 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.912781 kubelet[2700]: W0129 12:16:48.911031 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.912781 kubelet[2700]: E0129 12:16:48.911050 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.912781 kubelet[2700]: E0129 12:16:48.912267 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.912781 kubelet[2700]: W0129 12:16:48.912306 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.912781 kubelet[2700]: E0129 12:16:48.912322 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.912781 kubelet[2700]: E0129 12:16:48.912472 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.912781 kubelet[2700]: W0129 12:16:48.912479 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.912781 kubelet[2700]: E0129 12:16:48.912487 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:16:48.913189 kubelet[2700]: E0129 12:16:48.912804 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.913189 kubelet[2700]: W0129 12:16:48.912816 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.913189 kubelet[2700]: E0129 12:16:48.912829 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.913189 kubelet[2700]: E0129 12:16:48.912999 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.913189 kubelet[2700]: W0129 12:16:48.913007 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.913189 kubelet[2700]: E0129 12:16:48.913017 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.913328 kubelet[2700]: E0129 12:16:48.913221 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.913328 kubelet[2700]: W0129 12:16:48.913231 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.913328 kubelet[2700]: E0129 12:16:48.913241 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.913454 kubelet[2700]: E0129 12:16:48.913367 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.913454 kubelet[2700]: W0129 12:16:48.913374 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.913454 kubelet[2700]: E0129 12:16:48.913381 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.913516 kubelet[2700]: E0129 12:16:48.913493 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.913516 kubelet[2700]: W0129 12:16:48.913499 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.913516 kubelet[2700]: E0129 12:16:48.913506 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:16:48.913651 kubelet[2700]: E0129 12:16:48.913614 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.913651 kubelet[2700]: W0129 12:16:48.913628 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.913651 kubelet[2700]: E0129 12:16:48.913637 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.914271 kubelet[2700]: E0129 12:16:48.913759 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.914271 kubelet[2700]: W0129 12:16:48.913766 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.914271 kubelet[2700]: E0129 12:16:48.913787 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.914271 kubelet[2700]: E0129 12:16:48.913902 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.914271 kubelet[2700]: W0129 12:16:48.913909 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.914271 kubelet[2700]: E0129 12:16:48.913916 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.914271 kubelet[2700]: E0129 12:16:48.914024 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.914271 kubelet[2700]: W0129 12:16:48.914030 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.914271 kubelet[2700]: E0129 12:16:48.914036 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.914271 kubelet[2700]: E0129 12:16:48.914145 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.914560 kubelet[2700]: W0129 12:16:48.914151 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.914560 kubelet[2700]: E0129 12:16:48.914158 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:16:48.914560 kubelet[2700]: E0129 12:16:48.914332 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.914560 kubelet[2700]: W0129 12:16:48.914341 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.914560 kubelet[2700]: E0129 12:16:48.914349 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.914560 kubelet[2700]: E0129 12:16:48.914463 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.914560 kubelet[2700]: W0129 12:16:48.914469 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.914560 kubelet[2700]: E0129 12:16:48.914476 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.914754 kubelet[2700]: E0129 12:16:48.914571 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.914754 kubelet[2700]: W0129 12:16:48.914578 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.914754 kubelet[2700]: E0129 12:16:48.914584 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.914754 kubelet[2700]: E0129 12:16:48.914695 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.914754 kubelet[2700]: W0129 12:16:48.914701 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.914754 kubelet[2700]: E0129 12:16:48.914708 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.914929 kubelet[2700]: E0129 12:16:48.914860 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.914929 kubelet[2700]: W0129 12:16:48.914868 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.914929 kubelet[2700]: E0129 12:16:48.914876 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:16:48.915018 kubelet[2700]: E0129 12:16:48.915003 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.915018 kubelet[2700]: W0129 12:16:48.915016 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.915075 kubelet[2700]: E0129 12:16:48.915023 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.915145 kubelet[2700]: E0129 12:16:48.915130 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.915145 kubelet[2700]: W0129 12:16:48.915140 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.915145 kubelet[2700]: E0129 12:16:48.915147 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.915563 kubelet[2700]: E0129 12:16:48.915300 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.915563 kubelet[2700]: W0129 12:16:48.915307 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.915563 kubelet[2700]: E0129 12:16:48.915315 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.923976 kubelet[2700]: E0129 12:16:48.923841 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.923976 kubelet[2700]: W0129 12:16:48.923859 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.923976 kubelet[2700]: E0129 12:16:48.923876 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:48.924274 kubelet[2700]: E0129 12:16:48.924226 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:48.924274 kubelet[2700]: W0129 12:16:48.924241 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:48.924274 kubelet[2700]: E0129 12:16:48.924252 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
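The driver-call.go / plugins.go triplet above recurs for every plugin-probe cycle in this log: the kubelet finds a nodeagent~uds driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, fails to execute the missing uds binary, gets an empty stdout, and then cannot unmarshal that empty string as JSON. As a minimal sketch, assuming the standard FlexVolume calling convention (this is not the real Calico nodeagent binary), a driver executable at that path only has to answer init with a JSON status to satisfy the probe:

    // flexvol_stub.go: hypothetical stand-in for the missing
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON envelope the FlexVolume convention
    // expects on stdout for every driver call.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // An empty stdout here is exactly what produces the
            // "unexpected end of JSON input" errors in driver-call.go above.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            // Operations this sketch does not implement.
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
            os.Exit(1)
        }
    }

Equivalently, removing the stale nodeagent~uds directory should stop the kubelet from probing it at all.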
Jan 29 12:16:48.946101 kubelet[2700]: E0129 12:16:48.945992 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:48.946750 containerd[1532]: time="2025-01-29T12:16:48.946712714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58f47c8664-8rhnj,Uid:9f6a20bd-37a2-4985-9acc-6af4b3681f0a,Namespace:calico-system,Attempt:0,}"
Jan 29 12:16:48.971447 containerd[1532]: time="2025-01-29T12:16:48.971365626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:16:48.971447 containerd[1532]: time="2025-01-29T12:16:48.971421866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:16:48.971447 containerd[1532]: time="2025-01-29T12:16:48.971434106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:16:48.971623 containerd[1532]: time="2025-01-29T12:16:48.971524706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
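The recurring dns.go:153 "Nameserver limits exceeded" warning above reflects the classic three-nameserver cap on a resolv.conf: the node evidently lists more than three servers, and the kubelet applies only 1.1.1.1, 1.0.0.1 and 8.8.8.8. A rough sketch of that trimming, assuming the conventional limit and file path rather than the kubelet's actual implementation:

    // resolvtrim.go: hypothetical illustration of the three-nameserver cap.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // traditional resolv.conf limit (glibc MAXNS)

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // Mirrors the kubelet's warning: extra servers are simply omitted.
            fmt.Printf("nameserver limits exceeded: applying %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        } else {
            fmt.Printf("nameservers: %v\n", servers)
        }
    }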
Jan 29 12:16:49.010977 kubelet[2700]: I0129 12:16:49.010607 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1a277c46-b25e-4ca8-b105-0086d4736c88-varrun\") pod \"csi-node-driver-tpz9c\" (UID: \"1a277c46-b25e-4ca8-b105-0086d4736c88\") " pod="calico-system/csi-node-driver-tpz9c"
Jan 29 12:16:49.010977 kubelet[2700]: I0129 12:16:49.010876 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a277c46-b25e-4ca8-b105-0086d4736c88-kubelet-dir\") pod \"csi-node-driver-tpz9c\" (UID: \"1a277c46-b25e-4ca8-b105-0086d4736c88\") " pod="calico-system/csi-node-driver-tpz9c"
Jan 29 12:16:49.011334 kubelet[2700]: I0129 12:16:49.011095 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1a277c46-b25e-4ca8-b105-0086d4736c88-registration-dir\") pod \"csi-node-driver-tpz9c\" (UID: \"1a277c46-b25e-4ca8-b105-0086d4736c88\") " pod="calico-system/csi-node-driver-tpz9c"
Jan 29 12:16:49.011334 kubelet[2700]: I0129 12:16:49.011299 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1a277c46-b25e-4ca8-b105-0086d4736c88-socket-dir\") pod \"csi-node-driver-tpz9c\" (UID: \"1a277c46-b25e-4ca8-b105-0086d4736c88\") " pod="calico-system/csi-node-driver-tpz9c"
Jan 29 12:16:49.011640 kubelet[2700]: I0129 12:16:49.011474 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7j8d\" (UniqueName: \"kubernetes.io/projected/1a277c46-b25e-4ca8-b105-0086d4736c88-kube-api-access-q7j8d\") pod \"csi-node-driver-tpz9c\" (UID: \"1a277c46-b25e-4ca8-b105-0086d4736c88\") " pod="calico-system/csi-node-driver-tpz9c"
Jan 29 12:16:49.011640 kubelet[2700]: E0129 12:16:49.011474 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:49.015708 containerd[1532]: time="2025-01-29T12:16:49.015671073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-29xzw,Uid:86efd541-855c-4114-91ca-ec970d6cdccd,Namespace:calico-system,Attempt:0,}"
Jan 29 12:16:49.016896 containerd[1532]: time="2025-01-29T12:16:49.016864996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58f47c8664-8rhnj,Uid:9f6a20bd-37a2-4985-9acc-6af4b3681f0a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4a09acbe8f733bf17ae7790170bac8c2f2dc6f79fc0a57f9cc3763467ede71f\""
Jan 29 12:16:49.017592 kubelet[2700]: E0129 12:16:49.017571 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:49.020461 containerd[1532]: time="2025-01-29T12:16:49.020416286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 12:16:49.042567 containerd[1532]: time="2025-01-29T12:16:49.042317826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:16:49.042567 containerd[1532]: time="2025-01-29T12:16:49.042385266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:16:49.042567 containerd[1532]: time="2025-01-29T12:16:49.042409386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:16:49.042567 containerd[1532]: time="2025-01-29T12:16:49.042509347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:16:49.084586 containerd[1532]: time="2025-01-29T12:16:49.084546142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-29xzw,Uid:86efd541-855c-4114-91ca-ec970d6cdccd,Namespace:calico-system,Attempt:0,} returns sandbox id \"a251a8bc355195b596f473c9b7cf630e1dfac6f0f4dbbac57510d85c707e8437\""
Jan 29 12:16:49.085683 kubelet[2700]: E0129 12:16:49.085466 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
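The RunPodSandbox entries above print the CRI request metadata in Go struct notation. As a hedged stand-in (a local struct mirroring only the four fields visible in the log; the real type is generated from the CRI protobuf and its rendering differs slightly), this is roughly the shape being passed around:

    // sandboxmeta.go: local stand-in for the metadata printed above.
    package main

    import "fmt"

    type podSandboxMetadata struct {
        Name      string // pod name as scheduled
        Uid       string // pod UID assigned by the API server
        Namespace string
        Attempt   uint32 // bumped each time the sandbox is recreated
    }

    func main() {
        m := podSandboxMetadata{
            Name:      "calico-typha-58f47c8664-8rhnj",
            Uid:       "9f6a20bd-37a2-4985-9acc-6af4b3681f0a",
            Namespace: "calico-system",
            Attempt:   0,
        }
        fmt.Printf("RunPodSandbox for %+v\n", m)
    }

The returned sandbox ids (f4a09acbe… for calico-typha, a251a8bc… for calico-node) are what the later CreateContainer entries reference.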
Jan 29 12:16:50.274444 containerd[1532]: time="2025-01-29T12:16:50.274402963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:16:50.276000 containerd[1532]: time="2025-01-29T12:16:50.274762323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 29 12:16:50.276000 containerd[1532]: time="2025-01-29T12:16:50.275638766Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:16:50.278720 containerd[1532]: time="2025-01-29T12:16:50.278657733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:16:50.279561 containerd[1532]: time="2025-01-29T12:16:50.279520816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.25906017s"
Jan 29 12:16:50.279653 containerd[1532]: time="2025-01-29T12:16:50.279637656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 29 12:16:50.281441 containerd[1532]: time="2025-01-29T12:16:50.281414701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 12:16:50.299523 containerd[1532]: time="2025-01-29T12:16:50.299480867Z" level=info msg="CreateContainer within sandbox \"f4a09acbe8f733bf17ae7790170bac8c2f2dc6f79fc0a57f9cc3763467ede71f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 12:16:50.313189 containerd[1532]: time="2025-01-29T12:16:50.313142942Z" level=info msg="CreateContainer within sandbox \"f4a09acbe8f733bf17ae7790170bac8c2f2dc6f79fc0a57f9cc3763467ede71f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ee57e526503445fd99f22cf71aad958df8bacc5ce906f632069c2cebe33a5fc\""
Jan 29 12:16:50.313692 containerd[1532]: time="2025-01-29T12:16:50.313550023Z" level=info msg="StartContainer for \"2ee57e526503445fd99f22cf71aad958df8bacc5ce906f632069c2cebe33a5fc\""
Jan 29 12:16:50.374511 containerd[1532]: time="2025-01-29T12:16:50.374340380Z" level=info msg="StartContainer for \"2ee57e526503445fd99f22cf71aad958df8bacc5ce906f632069c2cebe33a5fc\" returns successfully"
Jan 29 12:16:50.924579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296191246.mount: Deactivated successfully.
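Containerd's own figures for the typha pull above (size 29231162 bytes in the Pulled entry, completed in 1.25906017s) put the effective pull rate near 22 MiB/s. A quick back-of-the-envelope check:

    // pullrate.go: recomputing the pull rate from the logged figures.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const sizeBytes = 29231162.0 // size reported in the "Pulled image" entry
        dur, err := time.ParseDuration("1.25906017s")
        if err != nil {
            panic(err)
        }
        rate := sizeBytes / dur.Seconds() / (1 << 20)
        fmt.Printf("~%.1f MiB/s effective pull rate\n", rate) // prints ~22.1 MiB/s
    }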
Jan 29 12:16:50.994226 kubelet[2700]: E0129 12:16:50.994168 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpz9c" podUID="1a277c46-b25e-4ca8-b105-0086d4736c88"
Jan 29 12:16:51.063846 kubelet[2700]: E0129 12:16:51.063818 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:51.073096 kubelet[2700]: I0129 12:16:51.073034 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58f47c8664-8rhnj" podStartSLOduration=1.812572713 podStartE2EDuration="3.073019567s" podCreationTimestamp="2025-01-29 12:16:48 +0000 UTC" firstStartedPulling="2025-01-29 12:16:49.020189845 +0000 UTC m=+21.097401291" lastFinishedPulling="2025-01-29 12:16:50.280636739 +0000 UTC m=+22.357848145" observedRunningTime="2025-01-29 12:16:51.072529286 +0000 UTC m=+23.149740732" watchObservedRunningTime="2025-01-29 12:16:51.073019567 +0000 UTC m=+23.150231013"
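The pod_startup_latency_tracker entry above carries enough timestamps to check its own arithmetic: podStartE2EDuration is the watch-observed running time minus the creation timestamp, and podStartSLOduration is that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling). Recomputed from the values in the entry (this lands within about 40ns of the logged 1.812572713, presumably down to the tracker's internal rounding):

    // startuplatency.go: recomputing the durations logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-01-29 12:16:48 +0000 UTC")
        firstPull := parse("2025-01-29 12:16:49.020189845 +0000 UTC")
        lastPull := parse("2025-01-29 12:16:50.280636739 +0000 UTC")
        running := parse("2025-01-29 12:16:51.073019567 +0000 UTC") // watchObservedRunningTime

        e2e := running.Sub(created)     // 3.073019567s, matches podStartE2EDuration
        pull := lastPull.Sub(firstPull) // 1.260446894s spent pulling images
        slo := e2e - pull               // ≈1.812572673s vs logged 1.812572713
        fmt.Println(e2e, pull, slo)
    }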
Error: unexpected end of JSON input" Jan 29 12:16:51.137059 kubelet[2700]: E0129 12:16:51.137038 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.137140 kubelet[2700]: W0129 12:16:51.137120 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.137184 kubelet[2700]: E0129 12:16:51.137143 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.137420 kubelet[2700]: E0129 12:16:51.137400 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.137420 kubelet[2700]: W0129 12:16:51.137415 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.137490 kubelet[2700]: E0129 12:16:51.137426 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.182152 containerd[1532]: time="2025-01-29T12:16:51.182022750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:51.182802 containerd[1532]: time="2025-01-29T12:16:51.182757392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 29 12:16:51.183972 containerd[1532]: time="2025-01-29T12:16:51.183934955Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:51.185743 containerd[1532]: time="2025-01-29T12:16:51.185711359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:51.187081 containerd[1532]: time="2025-01-29T12:16:51.187050602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 905.603301ms" Jan 29 12:16:51.187116 containerd[1532]: time="2025-01-29T12:16:51.187084402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 12:16:51.190459 containerd[1532]: time="2025-01-29T12:16:51.190425850Z" level=info msg="CreateContainer within sandbox \"a251a8bc355195b596f473c9b7cf630e1dfac6f0f4dbbac57510d85c707e8437\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:16:51.203816 containerd[1532]: time="2025-01-29T12:16:51.203751002Z" level=info msg="CreateContainer within sandbox 
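
[editor's note: the driver-call.go/plugins.go triplet above recurs because kubelet repeatedly re-probes the FlexVolume plugin directory. Each probe execs the driver binary with the argument "init" and expects a single JSON document on stdout; since /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, stdout is empty and unmarshalling fails with "unexpected end of JSON input". A minimal sketch of the stdout contract a FlexVolume driver is expected to honor, in Go — hypothetical and illustrative only, not the actual nodeagent~uds driver, which simply has not been installed yet on this node:]

// flexvol-init-sketch: answers the FlexVolume "init" call with the JSON
// status kubelet unmarshals in driver-call.go. Hypothetical illustration.
package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// DriverStatus mirrors the result shape kubelet expects from a driver call.
type DriverStatus struct {
    Status       string          `json:"status"`
    Message      string          `json:"message,omitempty"`
    Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
    if len(os.Args) < 2 {
        os.Exit(1)
    }
    switch os.Args[1] {
    case "init":
        // An empty stdout here is exactly what produces
        // "unexpected end of JSON input" in the log above.
        out, _ := json.Marshal(DriverStatus{
            Status:       "Success",
            Capabilities: map[string]bool{"attach": false},
        })
        fmt.Println(string(out))
    default:
        out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }
}
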
\"a251a8bc355195b596f473c9b7cf630e1dfac6f0f4dbbac57510d85c707e8437\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa3f84180e66cda68638e37f4a6f6a23136a9c2e6bbe574907b27992a91d7a42\"" Jan 29 12:16:51.204230 containerd[1532]: time="2025-01-29T12:16:51.204202564Z" level=info msg="StartContainer for \"aa3f84180e66cda68638e37f4a6f6a23136a9c2e6bbe574907b27992a91d7a42\"" Jan 29 12:16:51.233826 kubelet[2700]: E0129 12:16:51.233800 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.233826 kubelet[2700]: W0129 12:16:51.233824 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.233979 kubelet[2700]: E0129 12:16:51.233845 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.234096 kubelet[2700]: E0129 12:16:51.234080 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.234096 kubelet[2700]: W0129 12:16:51.234093 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.234163 kubelet[2700]: E0129 12:16:51.234106 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.234323 kubelet[2700]: E0129 12:16:51.234308 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.234323 kubelet[2700]: W0129 12:16:51.234321 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.234509 kubelet[2700]: E0129 12:16:51.234335 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.234624 kubelet[2700]: E0129 12:16:51.234604 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.234682 kubelet[2700]: W0129 12:16:51.234671 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.234769 kubelet[2700]: E0129 12:16:51.234756 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:16:51.234944 kubelet[2700]: E0129 12:16:51.234929 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.234944 kubelet[2700]: W0129 12:16:51.234943 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.235010 kubelet[2700]: E0129 12:16:51.234957 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.235118 kubelet[2700]: E0129 12:16:51.235108 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.235118 kubelet[2700]: W0129 12:16:51.235118 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.235190 kubelet[2700]: E0129 12:16:51.235126 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.235281 kubelet[2700]: E0129 12:16:51.235271 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.235281 kubelet[2700]: W0129 12:16:51.235281 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.235336 kubelet[2700]: E0129 12:16:51.235291 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.235459 kubelet[2700]: E0129 12:16:51.235450 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.235459 kubelet[2700]: W0129 12:16:51.235459 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.235542 kubelet[2700]: E0129 12:16:51.235528 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.235870 kubelet[2700]: E0129 12:16:51.235857 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.235870 kubelet[2700]: W0129 12:16:51.235869 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.235968 kubelet[2700]: E0129 12:16:51.235939 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:16:51.236046 kubelet[2700]: E0129 12:16:51.236032 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.236046 kubelet[2700]: W0129 12:16:51.236043 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.236137 kubelet[2700]: E0129 12:16:51.236116 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.236202 kubelet[2700]: E0129 12:16:51.236183 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.236202 kubelet[2700]: W0129 12:16:51.236194 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.236267 kubelet[2700]: E0129 12:16:51.236215 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.236376 kubelet[2700]: E0129 12:16:51.236363 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.236425 kubelet[2700]: W0129 12:16:51.236376 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.236425 kubelet[2700]: E0129 12:16:51.236389 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.236541 kubelet[2700]: E0129 12:16:51.236529 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.236541 kubelet[2700]: W0129 12:16:51.236540 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.236587 kubelet[2700]: E0129 12:16:51.236550 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.236717 kubelet[2700]: E0129 12:16:51.236706 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.236741 kubelet[2700]: W0129 12:16:51.236717 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.236741 kubelet[2700]: E0129 12:16:51.236730 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:16:51.237063 kubelet[2700]: E0129 12:16:51.237039 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.237063 kubelet[2700]: W0129 12:16:51.237053 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.237121 kubelet[2700]: E0129 12:16:51.237066 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.237271 kubelet[2700]: E0129 12:16:51.237257 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.237271 kubelet[2700]: W0129 12:16:51.237269 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.237328 kubelet[2700]: E0129 12:16:51.237283 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.237641 kubelet[2700]: E0129 12:16:51.237627 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.237641 kubelet[2700]: W0129 12:16:51.237640 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.237688 kubelet[2700]: E0129 12:16:51.237651 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:16:51.237813 kubelet[2700]: E0129 12:16:51.237802 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:16:51.237839 kubelet[2700]: W0129 12:16:51.237813 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:16:51.237839 kubelet[2700]: E0129 12:16:51.237822 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:16:51.332798 containerd[1532]: time="2025-01-29T12:16:51.329537906Z" level=info msg="StartContainer for \"aa3f84180e66cda68638e37f4a6f6a23136a9c2e6bbe574907b27992a91d7a42\" returns successfully" Jan 29 12:16:51.382465 containerd[1532]: time="2025-01-29T12:16:51.374769375Z" level=info msg="shim disconnected" id=aa3f84180e66cda68638e37f4a6f6a23136a9c2e6bbe574907b27992a91d7a42 namespace=k8s.io Jan 29 12:16:51.382465 containerd[1532]: time="2025-01-29T12:16:51.382452714Z" level=warning msg="cleaning up after shim disconnected" id=aa3f84180e66cda68638e37f4a6f6a23136a9c2e6bbe574907b27992a91d7a42 namespace=k8s.io Jan 29 12:16:51.382465 containerd[1532]: time="2025-01-29T12:16:51.382467514Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:51.923793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa3f84180e66cda68638e37f4a6f6a23136a9c2e6bbe574907b27992a91d7a42-rootfs.mount: Deactivated successfully. Jan 29 12:16:52.066997 kubelet[2700]: E0129 12:16:52.066968 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:52.067554 kubelet[2700]: E0129 12:16:52.067530 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:52.069461 containerd[1532]: time="2025-01-29T12:16:52.069239481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:16:52.993868 kubelet[2700]: E0129 12:16:52.993814 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpz9c" podUID="1a277c46-b25e-4ca8-b105-0086d4736c88" Jan 29 12:16:53.069430 kubelet[2700]: E0129 12:16:53.069401 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:54.690755 containerd[1532]: time="2025-01-29T12:16:54.690688962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:54.691669 containerd[1532]: time="2025-01-29T12:16:54.691640364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 12:16:54.692590 containerd[1532]: time="2025-01-29T12:16:54.692562766Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:54.696018 containerd[1532]: time="2025-01-29T12:16:54.695979813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:54.696698 containerd[1532]: time="2025-01-29T12:16:54.696661014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.627382373s" Jan 29 12:16:54.696698 containerd[1532]: time="2025-01-29T12:16:54.696696854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 12:16:54.701104 containerd[1532]: time="2025-01-29T12:16:54.701060223Z" level=info msg="CreateContainer within sandbox \"a251a8bc355195b596f473c9b7cf630e1dfac6f0f4dbbac57510d85c707e8437\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:16:54.715402 containerd[1532]: time="2025-01-29T12:16:54.715246731Z" level=info msg="CreateContainer within sandbox \"a251a8bc355195b596f473c9b7cf630e1dfac6f0f4dbbac57510d85c707e8437\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"994b60fa2ca63f077fc62489cf61630200ce4e100eec28efb94c5dcb9634b371\"" Jan 29 12:16:54.716694 containerd[1532]: time="2025-01-29T12:16:54.716658574Z" level=info msg="StartContainer for \"994b60fa2ca63f077fc62489cf61630200ce4e100eec28efb94c5dcb9634b371\"" Jan 29 12:16:54.766968 containerd[1532]: time="2025-01-29T12:16:54.766917554Z" level=info msg="StartContainer for \"994b60fa2ca63f077fc62489cf61630200ce4e100eec28efb94c5dcb9634b371\" returns successfully" Jan 29 12:16:54.994271 kubelet[2700]: E0129 12:16:54.994134 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpz9c" podUID="1a277c46-b25e-4ca8-b105-0086d4736c88" Jan 29 12:16:55.093010 kubelet[2700]: E0129 12:16:55.092977 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:55.402358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-994b60fa2ca63f077fc62489cf61630200ce4e100eec28efb94c5dcb9634b371-rootfs.mount: Deactivated successfully. 
Jan 29 12:16:55.424714 kubelet[2700]: I0129 12:16:55.424412 2700 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:16:55.448315 containerd[1532]: time="2025-01-29T12:16:55.447160611Z" level=info msg="shim disconnected" id=994b60fa2ca63f077fc62489cf61630200ce4e100eec28efb94c5dcb9634b371 namespace=k8s.io Jan 29 12:16:55.448315 containerd[1532]: time="2025-01-29T12:16:55.448107013Z" level=warning msg="cleaning up after shim disconnected" id=994b60fa2ca63f077fc62489cf61630200ce4e100eec28efb94c5dcb9634b371 namespace=k8s.io Jan 29 12:16:55.448315 containerd[1532]: time="2025-01-29T12:16:55.448120773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:55.462755 kubelet[2700]: I0129 12:16:55.459321 2700 topology_manager.go:215] "Topology Admit Handler" podUID="fb1eaa67-48a3-4aae-837a-31a50fc03ba9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-td5sx" Jan 29 12:16:55.466788 kubelet[2700]: I0129 12:16:55.466723 2700 topology_manager.go:215] "Topology Admit Handler" podUID="ae084314-8f16-437b-b454-2e1d43ea7c97" podNamespace="calico-system" podName="calico-kube-controllers-84bd7d8685-njpf7" Jan 29 12:16:55.469386 kubelet[2700]: I0129 12:16:55.469068 2700 topology_manager.go:215] "Topology Admit Handler" podUID="ae1f8eed-5d91-4993-93c9-eecda1e1b81f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b4d6f" Jan 29 12:16:55.469386 kubelet[2700]: I0129 12:16:55.469214 2700 topology_manager.go:215] "Topology Admit Handler" podUID="1a416aba-2cd9-4f5e-bf85-751955865be7" podNamespace="calico-apiserver" podName="calico-apiserver-fc6bb945f-c7sf8" Jan 29 12:16:55.477798 kubelet[2700]: I0129 12:16:55.477388 2700 topology_manager.go:215] "Topology Admit Handler" podUID="b351bd63-5769-4d05-9e8c-30ffedd0fa67" podNamespace="calico-apiserver" podName="calico-apiserver-fc6bb945f-smcsx" Jan 29 12:16:55.560742 kubelet[2700]: I0129 12:16:55.560693 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae084314-8f16-437b-b454-2e1d43ea7c97-tigera-ca-bundle\") pod \"calico-kube-controllers-84bd7d8685-njpf7\" (UID: \"ae084314-8f16-437b-b454-2e1d43ea7c97\") " pod="calico-system/calico-kube-controllers-84bd7d8685-njpf7" Jan 29 12:16:55.560905 kubelet[2700]: I0129 12:16:55.560811 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsl2m\" (UniqueName: \"kubernetes.io/projected/fb1eaa67-48a3-4aae-837a-31a50fc03ba9-kube-api-access-fsl2m\") pod \"coredns-7db6d8ff4d-td5sx\" (UID: \"fb1eaa67-48a3-4aae-837a-31a50fc03ba9\") " pod="kube-system/coredns-7db6d8ff4d-td5sx" Jan 29 12:16:55.560905 kubelet[2700]: I0129 12:16:55.560850 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx7vm\" (UniqueName: \"kubernetes.io/projected/ae084314-8f16-437b-b454-2e1d43ea7c97-kube-api-access-jx7vm\") pod \"calico-kube-controllers-84bd7d8685-njpf7\" (UID: \"ae084314-8f16-437b-b454-2e1d43ea7c97\") " pod="calico-system/calico-kube-controllers-84bd7d8685-njpf7" Jan 29 12:16:55.560905 kubelet[2700]: I0129 12:16:55.560872 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb1eaa67-48a3-4aae-837a-31a50fc03ba9-config-volume\") pod \"coredns-7db6d8ff4d-td5sx\" (UID: \"fb1eaa67-48a3-4aae-837a-31a50fc03ba9\") " pod="kube-system/coredns-7db6d8ff4d-td5sx" 
Jan 29 12:16:55.662184 kubelet[2700]: I0129 12:16:55.662053 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1a416aba-2cd9-4f5e-bf85-751955865be7-calico-apiserver-certs\") pod \"calico-apiserver-fc6bb945f-c7sf8\" (UID: \"1a416aba-2cd9-4f5e-bf85-751955865be7\") " pod="calico-apiserver/calico-apiserver-fc6bb945f-c7sf8" Jan 29 12:16:55.662184 kubelet[2700]: I0129 12:16:55.662097 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b351bd63-5769-4d05-9e8c-30ffedd0fa67-calico-apiserver-certs\") pod \"calico-apiserver-fc6bb945f-smcsx\" (UID: \"b351bd63-5769-4d05-9e8c-30ffedd0fa67\") " pod="calico-apiserver/calico-apiserver-fc6bb945f-smcsx" Jan 29 12:16:55.662184 kubelet[2700]: I0129 12:16:55.662118 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgxbk\" (UniqueName: \"kubernetes.io/projected/ae1f8eed-5d91-4993-93c9-eecda1e1b81f-kube-api-access-cgxbk\") pod \"coredns-7db6d8ff4d-b4d6f\" (UID: \"ae1f8eed-5d91-4993-93c9-eecda1e1b81f\") " pod="kube-system/coredns-7db6d8ff4d-b4d6f" Jan 29 12:16:55.662184 kubelet[2700]: I0129 12:16:55.662150 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jnp4\" (UniqueName: \"kubernetes.io/projected/1a416aba-2cd9-4f5e-bf85-751955865be7-kube-api-access-2jnp4\") pod \"calico-apiserver-fc6bb945f-c7sf8\" (UID: \"1a416aba-2cd9-4f5e-bf85-751955865be7\") " pod="calico-apiserver/calico-apiserver-fc6bb945f-c7sf8" Jan 29 12:16:55.662184 kubelet[2700]: I0129 12:16:55.662177 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae1f8eed-5d91-4993-93c9-eecda1e1b81f-config-volume\") pod \"coredns-7db6d8ff4d-b4d6f\" (UID: \"ae1f8eed-5d91-4993-93c9-eecda1e1b81f\") " pod="kube-system/coredns-7db6d8ff4d-b4d6f" Jan 29 12:16:55.662390 kubelet[2700]: I0129 12:16:55.662196 2700 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppxpq\" (UniqueName: \"kubernetes.io/projected/b351bd63-5769-4d05-9e8c-30ffedd0fa67-kube-api-access-ppxpq\") pod \"calico-apiserver-fc6bb945f-smcsx\" (UID: \"b351bd63-5769-4d05-9e8c-30ffedd0fa67\") " pod="calico-apiserver/calico-apiserver-fc6bb945f-smcsx" Jan 29 12:16:55.773681 kubelet[2700]: E0129 12:16:55.773420 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:55.774044 containerd[1532]: time="2025-01-29T12:16:55.774010701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-td5sx,Uid:fb1eaa67-48a3-4aae-837a-31a50fc03ba9,Namespace:kube-system,Attempt:0,}" Jan 29 12:16:55.781649 containerd[1532]: time="2025-01-29T12:16:55.781611755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6bb945f-smcsx,Uid:b351bd63-5769-4d05-9e8c-30ffedd0fa67,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:16:55.785564 containerd[1532]: time="2025-01-29T12:16:55.785511602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84bd7d8685-njpf7,Uid:ae084314-8f16-437b-b454-2e1d43ea7c97,Namespace:calico-system,Attempt:0,}" Jan 29 12:16:55.804590 
containerd[1532]: time="2025-01-29T12:16:55.804549558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6bb945f-c7sf8,Uid:1a416aba-2cd9-4f5e-bf85-751955865be7,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:16:56.076957 kubelet[2700]: E0129 12:16:56.076827 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:56.079128 containerd[1532]: time="2025-01-29T12:16:56.079046540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b4d6f,Uid:ae1f8eed-5d91-4993-93c9-eecda1e1b81f,Namespace:kube-system,Attempt:0,}" Jan 29 12:16:56.105702 kubelet[2700]: E0129 12:16:56.100204 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:56.109970 containerd[1532]: time="2025-01-29T12:16:56.109476673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:16:56.245971 containerd[1532]: time="2025-01-29T12:16:56.245926392Z" level=error msg="Failed to destroy network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.246436 containerd[1532]: time="2025-01-29T12:16:56.246405833Z" level=error msg="encountered an error cleaning up failed sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.246556 containerd[1532]: time="2025-01-29T12:16:56.246532793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84bd7d8685-njpf7,Uid:ae084314-8f16-437b-b454-2e1d43ea7c97,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.248449 containerd[1532]: time="2025-01-29T12:16:56.248403316Z" level=error msg="Failed to destroy network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.248736 containerd[1532]: time="2025-01-29T12:16:56.248706277Z" level=error msg="encountered an error cleaning up failed sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.248791 containerd[1532]: time="2025-01-29T12:16:56.248751197Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-fc6bb945f-smcsx,Uid:b351bd63-5769-4d05-9e8c-30ffedd0fa67,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.249258 kubelet[2700]: E0129 12:16:56.249205 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.249342 kubelet[2700]: E0129 12:16:56.249325 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc6bb945f-smcsx" Jan 29 12:16:56.249371 kubelet[2700]: E0129 12:16:56.249347 2700 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc6bb945f-smcsx" Jan 29 12:16:56.249425 kubelet[2700]: E0129 12:16:56.249205 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.249493 kubelet[2700]: E0129 12:16:56.249471 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84bd7d8685-njpf7" Jan 29 12:16:56.249531 kubelet[2700]: E0129 12:16:56.249494 2700 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84bd7d8685-njpf7" Jan 29 12:16:56.249531 kubelet[2700]: E0129 12:16:56.249392 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-fc6bb945f-smcsx_calico-apiserver(b351bd63-5769-4d05-9e8c-30ffedd0fa67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fc6bb945f-smcsx_calico-apiserver(b351bd63-5769-4d05-9e8c-30ffedd0fa67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fc6bb945f-smcsx" podUID="b351bd63-5769-4d05-9e8c-30ffedd0fa67" Jan 29 12:16:56.249598 kubelet[2700]: E0129 12:16:56.249556 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84bd7d8685-njpf7_calico-system(ae084314-8f16-437b-b454-2e1d43ea7c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84bd7d8685-njpf7_calico-system(ae084314-8f16-437b-b454-2e1d43ea7c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84bd7d8685-njpf7" podUID="ae084314-8f16-437b-b454-2e1d43ea7c97" Jan 29 12:16:56.252055 containerd[1532]: time="2025-01-29T12:16:56.252017483Z" level=error msg="Failed to destroy network for sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.252663 containerd[1532]: time="2025-01-29T12:16:56.252564403Z" level=error msg="encountered an error cleaning up failed sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.252663 containerd[1532]: time="2025-01-29T12:16:56.252611524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b4d6f,Uid:ae1f8eed-5d91-4993-93c9-eecda1e1b81f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.252834 kubelet[2700]: E0129 12:16:56.252795 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.252883 kubelet[2700]: E0129 12:16:56.252850 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b4d6f" Jan 29 12:16:56.252883 kubelet[2700]: E0129 12:16:56.252875 2700 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b4d6f" Jan 29 12:16:56.252934 kubelet[2700]: E0129 12:16:56.252912 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b4d6f_kube-system(ae1f8eed-5d91-4993-93c9-eecda1e1b81f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b4d6f_kube-system(ae1f8eed-5d91-4993-93c9-eecda1e1b81f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b4d6f" podUID="ae1f8eed-5d91-4993-93c9-eecda1e1b81f" Jan 29 12:16:56.258525 containerd[1532]: time="2025-01-29T12:16:56.258488094Z" level=error msg="Failed to destroy network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.258965 containerd[1532]: time="2025-01-29T12:16:56.258926415Z" level=error msg="encountered an error cleaning up failed sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.259027 containerd[1532]: time="2025-01-29T12:16:56.258981695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6bb945f-c7sf8,Uid:1a416aba-2cd9-4f5e-bf85-751955865be7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.259180 kubelet[2700]: E0129 12:16:56.259145 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.259231 kubelet[2700]: E0129 12:16:56.259195 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc6bb945f-c7sf8" Jan 29 12:16:56.259231 kubelet[2700]: E0129 12:16:56.259213 2700 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc6bb945f-c7sf8" Jan 29 12:16:56.259279 kubelet[2700]: E0129 12:16:56.259256 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fc6bb945f-c7sf8_calico-apiserver(1a416aba-2cd9-4f5e-bf85-751955865be7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fc6bb945f-c7sf8_calico-apiserver(1a416aba-2cd9-4f5e-bf85-751955865be7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fc6bb945f-c7sf8" podUID="1a416aba-2cd9-4f5e-bf85-751955865be7" Jan 29 12:16:56.265097 containerd[1532]: time="2025-01-29T12:16:56.265050665Z" level=error msg="Failed to destroy network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.265423 containerd[1532]: time="2025-01-29T12:16:56.265384386Z" level=error msg="encountered an error cleaning up failed sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.265455 containerd[1532]: time="2025-01-29T12:16:56.265434106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-td5sx,Uid:fb1eaa67-48a3-4aae-837a-31a50fc03ba9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:56.265693 kubelet[2700]: E0129 12:16:56.265632 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 29 12:16:56.265732 kubelet[2700]: E0129 12:16:56.265703 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-td5sx" Jan 29 12:16:56.265732 kubelet[2700]: E0129 12:16:56.265721 2700 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-td5sx" Jan 29 12:16:56.265800 kubelet[2700]: E0129 12:16:56.265758 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-td5sx_kube-system(fb1eaa67-48a3-4aae-837a-31a50fc03ba9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-td5sx_kube-system(fb1eaa67-48a3-4aae-837a-31a50fc03ba9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-td5sx" podUID="fb1eaa67-48a3-4aae-837a-31a50fc03ba9" Jan 29 12:16:56.713433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67-shm.mount: Deactivated successfully. 
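
[editor's note: every RunPodSandbox failure above shares one root cause, stated in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running and has mounted /var/lib/calico/. At this point only the flexvol-driver and install-cni init steps have run, so the file is absent, every CNI add/delete fails, and kubelet keeps the pods in CreatePodSandboxError and retries. A Go sketch of the gate the plugin is effectively applying — illustrative only:]

// calico-nodename-sketch: the readiness condition behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors.
// calico/node writes this file at startup; the CNI plugin reads it on
// every sandbox add/delete.
package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    data, err := os.ReadFile("/var/lib/calico/nodename")
    if err != nil {
        // The state shown in the log: calico/node is not up yet, so every
        // RunPodSandbox call fails until this file appears.
        fmt.Fprintf(os.Stderr, "calico/node not ready: %v\n", err)
        os.Exit(1)
    }
    fmt.Printf("calico/node ready on %s\n", strings.TrimSpace(string(data)))
}
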
Jan 29 12:16:56.996655 containerd[1532]: time="2025-01-29T12:16:56.996546304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpz9c,Uid:1a277c46-b25e-4ca8-b105-0086d4736c88,Namespace:calico-system,Attempt:0,}" Jan 29 12:16:57.051389 containerd[1532]: time="2025-01-29T12:16:57.051335314Z" level=error msg="Failed to destroy network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.052421 containerd[1532]: time="2025-01-29T12:16:57.051645635Z" level=error msg="encountered an error cleaning up failed sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.052421 containerd[1532]: time="2025-01-29T12:16:57.051698315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpz9c,Uid:1a277c46-b25e-4ca8-b105-0086d4736c88,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.052508 kubelet[2700]: E0129 12:16:57.051906 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.052508 kubelet[2700]: E0129 12:16:57.051964 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tpz9c" Jan 29 12:16:57.052508 kubelet[2700]: E0129 12:16:57.051981 2700 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tpz9c" Jan 29 12:16:57.052599 kubelet[2700]: E0129 12:16:57.052031 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tpz9c_calico-system(1a277c46-b25e-4ca8-b105-0086d4736c88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tpz9c_calico-system(1a277c46-b25e-4ca8-b105-0086d4736c88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tpz9c" podUID="1a277c46-b25e-4ca8-b105-0086d4736c88" Jan 29 12:16:57.053536 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325-shm.mount: Deactivated successfully. Jan 29 12:16:57.111476 kubelet[2700]: I0129 12:16:57.111441 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Jan 29 12:16:57.113596 containerd[1532]: time="2025-01-29T12:16:57.112897735Z" level=info msg="StopPodSandbox for \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\"" Jan 29 12:16:57.113596 containerd[1532]: time="2025-01-29T12:16:57.113347376Z" level=info msg="Ensure that sandbox afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a in task-service has been cleanup successfully" Jan 29 12:16:57.113846 kubelet[2700]: I0129 12:16:57.113082 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:16:57.114291 containerd[1532]: time="2025-01-29T12:16:57.114243577Z" level=info msg="StopPodSandbox for \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\"" Jan 29 12:16:57.114413 containerd[1532]: time="2025-01-29T12:16:57.114383017Z" level=info msg="Ensure that sandbox d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e in task-service has been cleanup successfully" Jan 29 12:16:57.115919 kubelet[2700]: I0129 12:16:57.115536 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Jan 29 12:16:57.117836 containerd[1532]: time="2025-01-29T12:16:57.117705143Z" level=info msg="StopPodSandbox for \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\"" Jan 29 12:16:57.117909 containerd[1532]: time="2025-01-29T12:16:57.117881943Z" level=info msg="Ensure that sandbox 8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b in task-service has been cleanup successfully" Jan 29 12:16:57.119215 kubelet[2700]: I0129 12:16:57.118983 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Jan 29 12:16:57.124184 containerd[1532]: time="2025-01-29T12:16:57.124106313Z" level=info msg="StopPodSandbox for \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\"" Jan 29 12:16:57.124298 containerd[1532]: time="2025-01-29T12:16:57.124281714Z" level=info msg="Ensure that sandbox 78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9 in task-service has been cleanup successfully" Jan 29 12:16:57.124669 kubelet[2700]: I0129 12:16:57.124631 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:16:57.126208 containerd[1532]: time="2025-01-29T12:16:57.126147317Z" level=info msg="StopPodSandbox for \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\"" Jan 29 12:16:57.126644 containerd[1532]: time="2025-01-29T12:16:57.126583797Z" level=info msg="Ensure that sandbox 
3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67 in task-service has been cleanup successfully" Jan 29 12:16:57.127623 kubelet[2700]: I0129 12:16:57.127593 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Jan 29 12:16:57.129822 containerd[1532]: time="2025-01-29T12:16:57.129437202Z" level=info msg="StopPodSandbox for \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\"" Jan 29 12:16:57.129822 containerd[1532]: time="2025-01-29T12:16:57.129614122Z" level=info msg="Ensure that sandbox a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325 in task-service has been cleanup successfully" Jan 29 12:16:57.167919 containerd[1532]: time="2025-01-29T12:16:57.166898463Z" level=error msg="StopPodSandbox for \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\" failed" error="failed to destroy network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.172700 kubelet[2700]: E0129 12:16:57.172652 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:16:57.172842 kubelet[2700]: E0129 12:16:57.172740 2700 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e"} Jan 29 12:16:57.172842 kubelet[2700]: E0129 12:16:57.172835 2700 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a416aba-2cd9-4f5e-bf85-751955865be7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:16:57.172922 kubelet[2700]: E0129 12:16:57.172858 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a416aba-2cd9-4f5e-bf85-751955865be7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fc6bb945f-c7sf8" podUID="1a416aba-2cd9-4f5e-bf85-751955865be7" Jan 29 12:16:57.178086 containerd[1532]: time="2025-01-29T12:16:57.178013802Z" level=error msg="StopPodSandbox for \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\" failed" error="failed to destroy network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.178530 kubelet[2700]: E0129 12:16:57.178382 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Jan 29 12:16:57.178530 kubelet[2700]: E0129 12:16:57.178426 2700 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b"} Jan 29 12:16:57.178530 kubelet[2700]: E0129 12:16:57.178455 2700 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae084314-8f16-437b-b454-2e1d43ea7c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:16:57.178530 kubelet[2700]: E0129 12:16:57.178491 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae084314-8f16-437b-b454-2e1d43ea7c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84bd7d8685-njpf7" podUID="ae084314-8f16-437b-b454-2e1d43ea7c97" Jan 29 12:16:57.181756 containerd[1532]: time="2025-01-29T12:16:57.181717288Z" level=error msg="StopPodSandbox for \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\" failed" error="failed to destroy network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.182191 kubelet[2700]: E0129 12:16:57.182148 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Jan 29 12:16:57.182482 kubelet[2700]: E0129 12:16:57.182387 2700 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9"} Jan 29 12:16:57.182482 kubelet[2700]: E0129 12:16:57.182427 2700 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"b351bd63-5769-4d05-9e8c-30ffedd0fa67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:16:57.182482 kubelet[2700]: E0129 12:16:57.182458 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b351bd63-5769-4d05-9e8c-30ffedd0fa67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fc6bb945f-smcsx" podUID="b351bd63-5769-4d05-9e8c-30ffedd0fa67" Jan 29 12:16:57.196813 containerd[1532]: time="2025-01-29T12:16:57.196741792Z" level=error msg="StopPodSandbox for \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\" failed" error="failed to destroy network for sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.197259 kubelet[2700]: E0129 12:16:57.197068 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Jan 29 12:16:57.197259 kubelet[2700]: E0129 12:16:57.197122 2700 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a"} Jan 29 12:16:57.197259 kubelet[2700]: E0129 12:16:57.197161 2700 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae1f8eed-5d91-4993-93c9-eecda1e1b81f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:16:57.197259 kubelet[2700]: E0129 12:16:57.197186 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae1f8eed-5d91-4993-93c9-eecda1e1b81f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b4d6f" podUID="ae1f8eed-5d91-4993-93c9-eecda1e1b81f" Jan 29 12:16:57.203920 containerd[1532]: 
time="2025-01-29T12:16:57.203870844Z" level=error msg="StopPodSandbox for \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\" failed" error="failed to destroy network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.204408 kubelet[2700]: E0129 12:16:57.204267 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Jan 29 12:16:57.204408 kubelet[2700]: E0129 12:16:57.204326 2700 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325"} Jan 29 12:16:57.204408 kubelet[2700]: E0129 12:16:57.204356 2700 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a277c46-b25e-4ca8-b105-0086d4736c88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:16:57.204408 kubelet[2700]: E0129 12:16:57.204380 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a277c46-b25e-4ca8-b105-0086d4736c88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tpz9c" podUID="1a277c46-b25e-4ca8-b105-0086d4736c88" Jan 29 12:16:57.213372 containerd[1532]: time="2025-01-29T12:16:57.213037459Z" level=error msg="StopPodSandbox for \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\" failed" error="failed to destroy network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:16:57.213466 kubelet[2700]: E0129 12:16:57.213250 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:16:57.213466 kubelet[2700]: E0129 12:16:57.213308 2700 kuberuntime_manager.go:1375] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67"} Jan 29 12:16:57.213466 kubelet[2700]: E0129 12:16:57.213347 2700 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb1eaa67-48a3-4aae-837a-31a50fc03ba9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:16:57.213466 kubelet[2700]: E0129 12:16:57.213369 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb1eaa67-48a3-4aae-837a-31a50fc03ba9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-td5sx" podUID="fb1eaa67-48a3-4aae-837a-31a50fc03ba9" Jan 29 12:16:59.453127 systemd[1]: Started sshd@7-10.0.0.145:22-10.0.0.1:41036.service - OpenSSH per-connection server daemon (10.0.0.1:41036). Jan 29 12:16:59.515534 sshd[3837]: Accepted publickey for core from 10.0.0.1 port 41036 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:59.517070 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:59.521393 systemd-logind[1510]: New session 8 of user core. Jan 29 12:16:59.528061 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:16:59.687044 sshd[3837]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:59.690406 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:16:59.692023 systemd[1]: sshd@7-10.0.0.145:22-10.0.0.1:41036.service: Deactivated successfully. Jan 29 12:16:59.694743 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:16:59.698105 systemd-logind[1510]: Removed session 8. Jan 29 12:17:00.048821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100248829.mount: Deactivated successfully. 
Jan 29 12:17:00.256027 containerd[1532]: time="2025-01-29T12:17:00.255969911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:00.256585 containerd[1532]: time="2025-01-29T12:17:00.256532912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 12:17:00.257291 containerd[1532]: time="2025-01-29T12:17:00.257261673Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:00.263806 containerd[1532]: time="2025-01-29T12:17:00.263759961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:00.264393 containerd[1532]: time="2025-01-29T12:17:00.264357042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.154833609s" Jan 29 12:17:00.264393 containerd[1532]: time="2025-01-29T12:17:00.264388562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 12:17:00.279427 containerd[1532]: time="2025-01-29T12:17:00.279380743Z" level=info msg="CreateContainer within sandbox \"a251a8bc355195b596f473c9b7cf630e1dfac6f0f4dbbac57510d85c707e8437\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:17:00.293216 containerd[1532]: time="2025-01-29T12:17:00.293163681Z" level=info msg="CreateContainer within sandbox \"a251a8bc355195b596f473c9b7cf630e1dfac6f0f4dbbac57510d85c707e8437\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6bdaec17f8ee4273353943aaf8c3166cb762f61f5e941f21f016b55e14f3bae4\"" Jan 29 12:17:00.294619 containerd[1532]: time="2025-01-29T12:17:00.294592683Z" level=info msg="StartContainer for \"6bdaec17f8ee4273353943aaf8c3166cb762f61f5e941f21f016b55e14f3bae4\"" Jan 29 12:17:00.432643 containerd[1532]: time="2025-01-29T12:17:00.432599229Z" level=info msg="StartContainer for \"6bdaec17f8ee4273353943aaf8c3166cb762f61f5e941f21f016b55e14f3bae4\" returns successfully" Jan 29 12:17:00.531327 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:17:00.531438 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 29 12:17:01.148523 kubelet[2700]: E0129 12:17:01.148337 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:01.167470 kubelet[2700]: I0129 12:17:01.167413 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-29xzw" podStartSLOduration=1.988228149 podStartE2EDuration="13.167398927s" podCreationTimestamp="2025-01-29 12:16:48 +0000 UTC" firstStartedPulling="2025-01-29 12:16:49.086154546 +0000 UTC m=+21.163365992" lastFinishedPulling="2025-01-29 12:17:00.265325324 +0000 UTC m=+32.342536770" observedRunningTime="2025-01-29 12:17:01.163194242 +0000 UTC m=+33.240405648" watchObservedRunningTime="2025-01-29 12:17:01.167398927 +0000 UTC m=+33.244610373" Jan 29 12:17:01.961832 kernel: bpftool[4071]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:17:02.120839 systemd-networkd[1229]: vxlan.calico: Link UP Jan 29 12:17:02.120850 systemd-networkd[1229]: vxlan.calico: Gained carrier Jan 29 12:17:02.158582 kubelet[2700]: E0129 12:17:02.158541 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:03.160092 kubelet[2700]: E0129 12:17:03.160061 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:03.630015 systemd-networkd[1229]: vxlan.calico: Gained IPv6LL Jan 29 12:17:04.697051 systemd[1]: Started sshd@8-10.0.0.145:22-10.0.0.1:56820.service - OpenSSH per-connection server daemon (10.0.0.1:56820). Jan 29 12:17:04.738877 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 56820 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:17:04.740357 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:17:04.744536 systemd-logind[1510]: New session 9 of user core. Jan 29 12:17:04.750116 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:17:04.894143 sshd[4191]: pam_unix(sshd:session): session closed for user core Jan 29 12:17:04.897747 systemd[1]: sshd@8-10.0.0.145:22-10.0.0.1:56820.service: Deactivated successfully. Jan 29 12:17:04.900965 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:17:04.901296 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:17:04.902261 systemd-logind[1510]: Removed session 9. Jan 29 12:17:07.995077 containerd[1532]: time="2025-01-29T12:17:07.994831593Z" level=info msg="StopPodSandbox for \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\"" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.127 [INFO][4231] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.127 [INFO][4231] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" iface="eth0" netns="/var/run/netns/cni-1e97ab54-3b12-01f3-d8da-bbfc619cc80b" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.128 [INFO][4231] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" iface="eth0" netns="/var/run/netns/cni-1e97ab54-3b12-01f3-d8da-bbfc619cc80b" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.129 [INFO][4231] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" iface="eth0" netns="/var/run/netns/cni-1e97ab54-3b12-01f3-d8da-bbfc619cc80b" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.129 [INFO][4231] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.129 [INFO][4231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.262 [INFO][4239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" HandleID="k8s-pod-network.a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.262 [INFO][4239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.263 [INFO][4239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.273 [WARNING][4239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" HandleID="k8s-pod-network.a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.273 [INFO][4239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" HandleID="k8s-pod-network.a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.275 [INFO][4239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:17:08.279729 containerd[1532]: 2025-01-29 12:17:08.276 [INFO][4231] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Jan 29 12:17:08.279729 containerd[1532]: time="2025-01-29T12:17:08.279686463Z" level=info msg="TearDown network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\" successfully" Jan 29 12:17:08.279729 containerd[1532]: time="2025-01-29T12:17:08.279721543Z" level=info msg="StopPodSandbox for \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\" returns successfully" Jan 29 12:17:08.282271 systemd[1]: run-netns-cni\x2d1e97ab54\x2d3b12\x2d01f3\x2dd8da\x2dbbfc619cc80b.mount: Deactivated successfully. 
Jan 29 12:17:08.283155 containerd[1532]: time="2025-01-29T12:17:08.283047946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpz9c,Uid:1a277c46-b25e-4ca8-b105-0086d4736c88,Namespace:calico-system,Attempt:1,}" Jan 29 12:17:08.408210 systemd-networkd[1229]: cali801fc5ad47a: Link UP Jan 29 12:17:08.408397 systemd-networkd[1229]: cali801fc5ad47a: Gained carrier Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.341 [INFO][4249] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tpz9c-eth0 csi-node-driver- calico-system 1a277c46-b25e-4ca8-b105-0086d4736c88 836 0 2025-01-29 12:16:48 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tpz9c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali801fc5ad47a [] []}} ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Namespace="calico-system" Pod="csi-node-driver-tpz9c" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpz9c-" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.341 [INFO][4249] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Namespace="calico-system" Pod="csi-node-driver-tpz9c" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.368 [INFO][4262] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" HandleID="k8s-pod-network.9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.379 [INFO][4262] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" HandleID="k8s-pod-network.9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002856e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tpz9c", "timestamp":"2025-01-29 12:17:08.368057134 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.379 [INFO][4262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.379 [INFO][4262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.379 [INFO][4262] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.380 [INFO][4262] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" host="localhost" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.385 [INFO][4262] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.389 [INFO][4262] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.390 [INFO][4262] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.392 [INFO][4262] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.392 [INFO][4262] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" host="localhost" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.393 [INFO][4262] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4 Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.397 [INFO][4262] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" host="localhost" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.402 [INFO][4262] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" host="localhost" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.402 [INFO][4262] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" host="localhost" Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.402 [INFO][4262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:17:08.422995 containerd[1532]: 2025-01-29 12:17:08.402 [INFO][4262] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" HandleID="k8s-pod-network.9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.427551 containerd[1532]: 2025-01-29 12:17:08.403 [INFO][4249] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Namespace="calico-system" Pod="csi-node-driver-tpz9c" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpz9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tpz9c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a277c46-b25e-4ca8-b105-0086d4736c88", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tpz9c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali801fc5ad47a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:08.427551 containerd[1532]: 2025-01-29 12:17:08.404 [INFO][4249] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Namespace="calico-system" Pod="csi-node-driver-tpz9c" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.427551 containerd[1532]: 2025-01-29 12:17:08.404 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali801fc5ad47a ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Namespace="calico-system" Pod="csi-node-driver-tpz9c" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.427551 containerd[1532]: 2025-01-29 12:17:08.410 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Namespace="calico-system" Pod="csi-node-driver-tpz9c" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.427551 containerd[1532]: 2025-01-29 12:17:08.410 [INFO][4249] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Namespace="calico-system" Pod="csi-node-driver-tpz9c" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpz9c-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tpz9c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a277c46-b25e-4ca8-b105-0086d4736c88", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4", Pod:"csi-node-driver-tpz9c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali801fc5ad47a", MAC:"1e:96:ae:b3:87:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:08.427551 containerd[1532]: 2025-01-29 12:17:08.419 [INFO][4249] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4" Namespace="calico-system" Pod="csi-node-driver-tpz9c" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpz9c-eth0" Jan 29 12:17:08.464978 containerd[1532]: time="2025-01-29T12:17:08.464880172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:17:08.464978 containerd[1532]: time="2025-01-29T12:17:08.464955053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:17:08.464978 containerd[1532]: time="2025-01-29T12:17:08.464966333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:08.465209 containerd[1532]: time="2025-01-29T12:17:08.465091973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:08.485341 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:17:08.496547 containerd[1532]: time="2025-01-29T12:17:08.496471798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpz9c,Uid:1a277c46-b25e-4ca8-b105-0086d4736c88,Namespace:calico-system,Attempt:1,} returns sandbox id \"9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4\"" Jan 29 12:17:08.498834 containerd[1532]: time="2025-01-29T12:17:08.498201959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:17:09.524012 containerd[1532]: time="2025-01-29T12:17:09.523959879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:09.524598 containerd[1532]: time="2025-01-29T12:17:09.524562800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 12:17:09.525637 containerd[1532]: time="2025-01-29T12:17:09.525610881Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:09.527807 containerd[1532]: time="2025-01-29T12:17:09.527594442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:09.528415 containerd[1532]: time="2025-01-29T12:17:09.528378083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.030135204s" Jan 29 12:17:09.528415 containerd[1532]: time="2025-01-29T12:17:09.528409803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 12:17:09.532165 containerd[1532]: time="2025-01-29T12:17:09.532132206Z" level=info msg="CreateContainer within sandbox \"9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:17:09.684404 containerd[1532]: time="2025-01-29T12:17:09.684334201Z" level=info msg="CreateContainer within sandbox \"9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"15506d4e6992bbce2216f2a358b43c433b206f3023b8b4139b648d8bf9fa45ac\"" Jan 29 12:17:09.685229 containerd[1532]: time="2025-01-29T12:17:09.685200041Z" level=info msg="StartContainer for \"15506d4e6992bbce2216f2a358b43c433b206f3023b8b4139b648d8bf9fa45ac\"" Jan 29 12:17:09.727401 containerd[1532]: time="2025-01-29T12:17:09.727363153Z" level=info msg="StartContainer for \"15506d4e6992bbce2216f2a358b43c433b206f3023b8b4139b648d8bf9fa45ac\" returns successfully" Jan 29 12:17:09.729170 containerd[1532]: time="2025-01-29T12:17:09.729128994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:17:09.905043 systemd[1]: Started sshd@9-10.0.0.145:22-10.0.0.1:56834.service - OpenSSH per-connection 
server daemon (10.0.0.1:56834). Jan 29 12:17:09.943815 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 56834 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:17:09.945379 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:17:09.949157 systemd-logind[1510]: New session 10 of user core. Jan 29 12:17:09.959059 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:17:09.994425 containerd[1532]: time="2025-01-29T12:17:09.994380155Z" level=info msg="StopPodSandbox for \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\"" Jan 29 12:17:09.994699 containerd[1532]: time="2025-01-29T12:17:09.994675675Z" level=info msg="StopPodSandbox for \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\"" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.053 [INFO][4408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.053 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" iface="eth0" netns="/var/run/netns/cni-f59a1155-3637-e075-787f-613263f442b5" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.053 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" iface="eth0" netns="/var/run/netns/cni-f59a1155-3637-e075-787f-613263f442b5" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.053 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" iface="eth0" netns="/var/run/netns/cni-f59a1155-3637-e075-787f-613263f442b5" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.053 [INFO][4408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.053 [INFO][4408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.080 [INFO][4427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" HandleID="k8s-pod-network.afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.080 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.081 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.099 [WARNING][4427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" HandleID="k8s-pod-network.afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.099 [INFO][4427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" HandleID="k8s-pod-network.afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.101 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:17:10.105822 containerd[1532]: 2025-01-29 12:17:10.104 [INFO][4408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Jan 29 12:17:10.109505 systemd[1]: run-netns-cni\x2df59a1155\x2d3637\x2de075\x2d787f\x2d613263f442b5.mount: Deactivated successfully. Jan 29 12:17:10.110169 containerd[1532]: time="2025-01-29T12:17:10.109917717Z" level=info msg="TearDown network for sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\" successfully" Jan 29 12:17:10.110169 containerd[1532]: time="2025-01-29T12:17:10.109956157Z" level=info msg="StopPodSandbox for \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\" returns successfully" Jan 29 12:17:10.111630 kubelet[2700]: E0129 12:17:10.111606 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:10.114795 containerd[1532]: time="2025-01-29T12:17:10.112928039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b4d6f,Uid:ae1f8eed-5d91-4993-93c9-eecda1e1b81f,Namespace:kube-system,Attempt:1,}" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.065 [INFO][4402] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.065 [INFO][4402] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" iface="eth0" netns="/var/run/netns/cni-c30846e2-57c0-c585-0873-bd19d7d1dc34" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.065 [INFO][4402] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" iface="eth0" netns="/var/run/netns/cni-c30846e2-57c0-c585-0873-bd19d7d1dc34" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.066 [INFO][4402] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" iface="eth0" netns="/var/run/netns/cni-c30846e2-57c0-c585-0873-bd19d7d1dc34" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.066 [INFO][4402] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.066 [INFO][4402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.099 [INFO][4433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" HandleID="k8s-pod-network.d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.099 [INFO][4433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.101 [INFO][4433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.112 [WARNING][4433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" HandleID="k8s-pod-network.d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.112 [INFO][4433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" HandleID="k8s-pod-network.d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.117 [INFO][4433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:17:10.122818 containerd[1532]: 2025-01-29 12:17:10.119 [INFO][4402] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:10.123161 containerd[1532]: time="2025-01-29T12:17:10.123037486Z" level=info msg="TearDown network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\" successfully" Jan 29 12:17:10.123161 containerd[1532]: time="2025-01-29T12:17:10.123105166Z" level=info msg="StopPodSandbox for \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\" returns successfully" Jan 29 12:17:10.126225 containerd[1532]: time="2025-01-29T12:17:10.125549728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6bb945f-c7sf8,Uid:1a416aba-2cd9-4f5e-bf85-751955865be7,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:17:10.134169 sshd[4366]: pam_unix(sshd:session): session closed for user core Jan 29 12:17:10.141189 systemd[1]: Started sshd@10-10.0.0.145:22-10.0.0.1:56850.service - OpenSSH per-connection server daemon (10.0.0.1:56850). Jan 29 12:17:10.141589 systemd[1]: sshd@9-10.0.0.145:22-10.0.0.1:56834.service: Deactivated successfully. Jan 29 12:17:10.145328 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:17:10.146672 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. 
Jan 29 12:17:10.148439 systemd-logind[1510]: Removed session 10. Jan 29 12:17:10.181826 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 56850 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:17:10.185215 sshd[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:17:10.193257 systemd-logind[1510]: New session 11 of user core. Jan 29 12:17:10.201100 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:17:10.273765 systemd-networkd[1229]: cali5765cb2417c: Link UP Jan 29 12:17:10.276931 systemd-networkd[1229]: cali5765cb2417c: Gained carrier Jan 29 12:17:10.285901 systemd-networkd[1229]: cali801fc5ad47a: Gained IPv6LL Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.185 [INFO][4447] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0 coredns-7db6d8ff4d- kube-system ae1f8eed-5d91-4993-93c9-eecda1e1b81f 856 0 2025-01-29 12:16:43 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-b4d6f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5765cb2417c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b4d6f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b4d6f-" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.185 [INFO][4447] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b4d6f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.220 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" HandleID="k8s-pod-network.b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.236 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" HandleID="k8s-pod-network.b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000492650), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-b4d6f", "timestamp":"2025-01-29 12:17:10.220297355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.237 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.237 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.237 [INFO][4476] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.239 [INFO][4476] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" host="localhost" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.243 [INFO][4476] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.248 [INFO][4476] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.249 [INFO][4476] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.253 [INFO][4476] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.253 [INFO][4476] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" host="localhost" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.254 [INFO][4476] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796 Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.259 [INFO][4476] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" host="localhost" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.266 [INFO][4476] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" host="localhost" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.266 [INFO][4476] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" host="localhost" Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.266 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:17:10.293819 containerd[1532]: 2025-01-29 12:17:10.266 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" HandleID="k8s-pod-network.b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.294355 containerd[1532]: 2025-01-29 12:17:10.268 [INFO][4447] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b4d6f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ae1f8eed-5d91-4993-93c9-eecda1e1b81f", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-b4d6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5765cb2417c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:10.294355 containerd[1532]: 2025-01-29 12:17:10.268 [INFO][4447] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b4d6f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.294355 containerd[1532]: 2025-01-29 12:17:10.268 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5765cb2417c ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b4d6f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.294355 containerd[1532]: 2025-01-29 12:17:10.274 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b4d6f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.294355 containerd[1532]: 2025-01-29 12:17:10.277
[INFO][4447] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b4d6f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ae1f8eed-5d91-4993-93c9-eecda1e1b81f", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796", Pod:"coredns-7db6d8ff4d-b4d6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5765cb2417c", MAC:"0a:b3:d4:8e:8e:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:10.294355 containerd[1532]: 2025-01-29 12:17:10.291 [INFO][4447] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b4d6f" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0" Jan 29 12:17:10.318533 systemd-networkd[1229]: cali6b80e296b3c: Link UP Jan 29 12:17:10.319797 systemd-networkd[1229]: cali6b80e296b3c: Gained carrier Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.191 [INFO][4459] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0 calico-apiserver-fc6bb945f- calico-apiserver 1a416aba-2cd9-4f5e-bf85-751955865be7 857 0 2025-01-29 12:16:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc6bb945f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fc6bb945f-c7sf8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6b80e296b3c [] []}} ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-c7sf8" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.192 [INFO][4459] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-c7sf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.220 [INFO][4481] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" HandleID="k8s-pod-network.ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.241 [INFO][4481] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" HandleID="k8s-pod-network.ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-fc6bb945f-c7sf8", "timestamp":"2025-01-29 12:17:10.220286555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.241 [INFO][4481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.266 [INFO][4481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.266 [INFO][4481] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.270 [INFO][4481] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" host="localhost" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.276 [INFO][4481] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.282 [INFO][4481] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.288 [INFO][4481] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.292 [INFO][4481] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.292 [INFO][4481] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" host="localhost" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.294 [INFO][4481] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.302 [INFO][4481] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" host="localhost" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.311 [INFO][4481] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" host="localhost" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.311 [INFO][4481] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" host="localhost" Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.311 [INFO][4481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:17:10.336651 containerd[1532]: 2025-01-29 12:17:10.311 [INFO][4481] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" HandleID="k8s-pod-network.ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.337951 containerd[1532]: 2025-01-29 12:17:10.314 [INFO][4459] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-c7sf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0", GenerateName:"calico-apiserver-fc6bb945f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a416aba-2cd9-4f5e-bf85-751955865be7", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6bb945f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fc6bb945f-c7sf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b80e296b3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:10.337951 containerd[1532]: 2025-01-29 12:17:10.314 [INFO][4459] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-c7sf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.337951 containerd[1532]: 2025-01-29 12:17:10.314 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b80e296b3c ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-c7sf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.337951 containerd[1532]: 2025-01-29 12:17:10.318 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-c7sf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.337951 containerd[1532]: 2025-01-29 12:17:10.318 [INFO][4459] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" 
Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-c7sf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0", GenerateName:"calico-apiserver-fc6bb945f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a416aba-2cd9-4f5e-bf85-751955865be7", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6bb945f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b", Pod:"calico-apiserver-fc6bb945f-c7sf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b80e296b3c", MAC:"22:06:9c:1e:f4:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:10.337951 containerd[1532]: 2025-01-29 12:17:10.331 [INFO][4459] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-c7sf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:10.337951 containerd[1532]: time="2025-01-29T12:17:10.337036398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:17:10.337951 containerd[1532]: time="2025-01-29T12:17:10.337111878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:17:10.337951 containerd[1532]: time="2025-01-29T12:17:10.337126078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:10.337951 containerd[1532]: time="2025-01-29T12:17:10.337224918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:10.365115 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:17:10.368631 containerd[1532]: time="2025-01-29T12:17:10.368543300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:17:10.368631 containerd[1532]: time="2025-01-29T12:17:10.368596580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:17:10.368631 containerd[1532]: time="2025-01-29T12:17:10.368607660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:10.368800 containerd[1532]: time="2025-01-29T12:17:10.368679060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:10.398434 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:17:10.405452 containerd[1532]: time="2025-01-29T12:17:10.405395606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b4d6f,Uid:ae1f8eed-5d91-4993-93c9-eecda1e1b81f,Namespace:kube-system,Attempt:1,} returns sandbox id \"b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796\"" Jan 29 12:17:10.406831 kubelet[2700]: E0129 12:17:10.406270 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:10.411627 containerd[1532]: time="2025-01-29T12:17:10.411583251Z" level=info msg="CreateContainer within sandbox \"b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:17:10.425140 sshd[4444]: pam_unix(sshd:session): session closed for user core Jan 29 12:17:10.436415 systemd[1]: Started sshd@11-10.0.0.145:22-10.0.0.1:56866.service - OpenSSH per-connection server daemon (10.0.0.1:56866). Jan 29 12:17:10.439414 systemd[1]: sshd@10-10.0.0.145:22-10.0.0.1:56850.service: Deactivated successfully. Jan 29 12:17:10.442817 containerd[1532]: time="2025-01-29T12:17:10.442605632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6bb945f-c7sf8,Uid:1a416aba-2cd9-4f5e-bf85-751955865be7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b\"" Jan 29 12:17:10.447234 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:17:10.452917 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:17:10.457369 systemd-logind[1510]: Removed session 11. Jan 29 12:17:10.459604 containerd[1532]: time="2025-01-29T12:17:10.459562125Z" level=info msg="CreateContainer within sandbox \"b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"933c45a3b931544af64d8bda516c9b95d7e6b78d20bdca6c31f1e14e705a0579\"" Jan 29 12:17:10.462441 containerd[1532]: time="2025-01-29T12:17:10.461453486Z" level=info msg="StartContainer for \"933c45a3b931544af64d8bda516c9b95d7e6b78d20bdca6c31f1e14e705a0579\"" Jan 29 12:17:10.499746 sshd[4611]: Accepted publickey for core from 10.0.0.1 port 56866 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:17:10.501397 sshd[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:17:10.507582 systemd-logind[1510]: New session 12 of user core. Jan 29 12:17:10.515103 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 12:17:10.530523 containerd[1532]: time="2025-01-29T12:17:10.530476575Z" level=info msg="StartContainer for \"933c45a3b931544af64d8bda516c9b95d7e6b78d20bdca6c31f1e14e705a0579\" returns successfully" Jan 29 12:17:10.558225 systemd[1]: run-netns-cni\x2dc30846e2\x2d57c0\x2dc585\x2d0873\x2dbd19d7d1dc34.mount: Deactivated successfully. Jan 29 12:17:10.677957 sshd[4611]: pam_unix(sshd:session): session closed for user core Jan 29 12:17:10.682121 systemd[1]: sshd@11-10.0.0.145:22-10.0.0.1:56866.service: Deactivated successfully. Jan 29 12:17:10.685117 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:17:10.685328 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:17:10.687561 systemd-logind[1510]: Removed session 12. Jan 29 12:17:10.995086 containerd[1532]: time="2025-01-29T12:17:10.995034984Z" level=info msg="StopPodSandbox for \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\"" Jan 29 12:17:11.049515 containerd[1532]: time="2025-01-29T12:17:11.048940740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:11.052227 containerd[1532]: time="2025-01-29T12:17:11.052186302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 12:17:11.055647 containerd[1532]: time="2025-01-29T12:17:11.055604184Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:11.061341 containerd[1532]: time="2025-01-29T12:17:11.059566507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:11.061434 containerd[1532]: time="2025-01-29T12:17:11.060304267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.331135793s" Jan 29 12:17:11.061464 containerd[1532]: time="2025-01-29T12:17:11.061442068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 12:17:11.065724 containerd[1532]: time="2025-01-29T12:17:11.065684951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:17:11.068052 containerd[1532]: time="2025-01-29T12:17:11.067932712Z" level=info msg="CreateContainer within sandbox \"9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.041 [INFO][4686] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.042 [INFO][4686] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" iface="eth0" netns="/var/run/netns/cni-3b92345b-4bda-a9b8-db6f-b62bbd842eb2" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.043 [INFO][4686] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" iface="eth0" netns="/var/run/netns/cni-3b92345b-4bda-a9b8-db6f-b62bbd842eb2" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.043 [INFO][4686] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" iface="eth0" netns="/var/run/netns/cni-3b92345b-4bda-a9b8-db6f-b62bbd842eb2" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.043 [INFO][4686] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.043 [INFO][4686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.071 [INFO][4693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" HandleID="k8s-pod-network.8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.071 [INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.071 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.080 [WARNING][4693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" HandleID="k8s-pod-network.8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.080 [INFO][4693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" HandleID="k8s-pod-network.8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.081 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:17:11.085216 containerd[1532]: 2025-01-29 12:17:11.083 [INFO][4686] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Jan 29 12:17:11.089257 containerd[1532]: time="2025-01-29T12:17:11.087388205Z" level=info msg="TearDown network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\" successfully" Jan 29 12:17:11.089257 containerd[1532]: time="2025-01-29T12:17:11.087424565Z" level=info msg="StopPodSandbox for \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\" returns successfully" Jan 29 12:17:11.090001 systemd[1]: run-netns-cni\x2d3b92345b\x2d4bda\x2da9b8\x2ddb6f\x2db62bbd842eb2.mount: Deactivated successfully. 
Jan 29 12:17:11.092113 containerd[1532]: time="2025-01-29T12:17:11.090476927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84bd7d8685-njpf7,Uid:ae084314-8f16-437b-b454-2e1d43ea7c97,Namespace:calico-system,Attempt:1,}" Jan 29 12:17:11.093463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103034930.mount: Deactivated successfully. Jan 29 12:17:11.121441 containerd[1532]: time="2025-01-29T12:17:11.121226348Z" level=info msg="CreateContainer within sandbox \"9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"034b7c11f2f86501a79ac76d614f95f60bd614dc6a28d19d5f6a71f76eb78cb0\"" Jan 29 12:17:11.121767 containerd[1532]: time="2025-01-29T12:17:11.121731668Z" level=info msg="StartContainer for \"034b7c11f2f86501a79ac76d614f95f60bd614dc6a28d19d5f6a71f76eb78cb0\"" Jan 29 12:17:11.176580 containerd[1532]: time="2025-01-29T12:17:11.175492024Z" level=info msg="StartContainer for \"034b7c11f2f86501a79ac76d614f95f60bd614dc6a28d19d5f6a71f76eb78cb0\" returns successfully" Jan 29 12:17:11.192815 kubelet[2700]: E0129 12:17:11.192760 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:11.202973 kubelet[2700]: I0129 12:17:11.202669 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tpz9c" podStartSLOduration=20.63504741 podStartE2EDuration="23.202651682s" podCreationTimestamp="2025-01-29 12:16:48 +0000 UTC" firstStartedPulling="2025-01-29 12:17:08.497759719 +0000 UTC m=+40.574971125" lastFinishedPulling="2025-01-29 12:17:11.065363991 +0000 UTC m=+43.142575397" observedRunningTime="2025-01-29 12:17:11.202323602 +0000 UTC m=+43.279535128" watchObservedRunningTime="2025-01-29 12:17:11.202651682 +0000 UTC m=+43.279863128" Jan 29 12:17:11.225147 kubelet[2700]: I0129 12:17:11.220006 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b4d6f" podStartSLOduration=28.219987293 podStartE2EDuration="28.219987293s" podCreationTimestamp="2025-01-29 12:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:17:11.219795333 +0000 UTC m=+43.297006779" watchObservedRunningTime="2025-01-29 12:17:11.219987293 +0000 UTC m=+43.297198739" Jan 29 12:17:11.306941 systemd-networkd[1229]: cali35e9493b223: Link UP Jan 29 12:17:11.307170 systemd-networkd[1229]: cali35e9493b223: Gained carrier Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.217 [INFO][4736] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0 calico-kube-controllers-84bd7d8685- calico-system ae084314-8f16-437b-b454-2e1d43ea7c97 885 0 2025-01-29 12:16:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84bd7d8685 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84bd7d8685-njpf7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali35e9493b223 [] []}} 
ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Namespace="calico-system" Pod="calico-kube-controllers-84bd7d8685-njpf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.217 [INFO][4736] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Namespace="calico-system" Pod="calico-kube-controllers-84bd7d8685-njpf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.257 [INFO][4751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" HandleID="k8s-pod-network.64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.267 [INFO][4751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" HandleID="k8s-pod-network.64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000433540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84bd7d8685-njpf7", "timestamp":"2025-01-29 12:17:11.257087998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.267 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.267 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.267 [INFO][4751] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.269 [INFO][4751] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" host="localhost" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.273 [INFO][4751] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.277 [INFO][4751] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.278 [INFO][4751] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.281 [INFO][4751] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.282 [INFO][4751] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" host="localhost" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.283 [INFO][4751] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2 Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.287 [INFO][4751] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" host="localhost" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.299 [INFO][4751] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" host="localhost" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.300 [INFO][4751] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" host="localhost" Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.300 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:17:11.319983 containerd[1532]: 2025-01-29 12:17:11.300 [INFO][4751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" HandleID="k8s-pod-network.64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.320750 containerd[1532]: 2025-01-29 12:17:11.302 [INFO][4736] cni-plugin/k8s.go 386: Populated endpoint ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Namespace="calico-system" Pod="calico-kube-controllers-84bd7d8685-njpf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0", GenerateName:"calico-kube-controllers-84bd7d8685-", Namespace:"calico-system", SelfLink:"", UID:"ae084314-8f16-437b-b454-2e1d43ea7c97", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84bd7d8685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84bd7d8685-njpf7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali35e9493b223", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:11.320750 containerd[1532]: 2025-01-29 12:17:11.302 [INFO][4736] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Namespace="calico-system" Pod="calico-kube-controllers-84bd7d8685-njpf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.320750 containerd[1532]: 2025-01-29 12:17:11.302 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35e9493b223 ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Namespace="calico-system" Pod="calico-kube-controllers-84bd7d8685-njpf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.320750 containerd[1532]: 2025-01-29 12:17:11.306 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Namespace="calico-system" Pod="calico-kube-controllers-84bd7d8685-njpf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.320750 containerd[1532]: 2025-01-29 12:17:11.306 [INFO][4736] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Namespace="calico-system" Pod="calico-kube-controllers-84bd7d8685-njpf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0", GenerateName:"calico-kube-controllers-84bd7d8685-", Namespace:"calico-system", SelfLink:"", UID:"ae084314-8f16-437b-b454-2e1d43ea7c97", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84bd7d8685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2", Pod:"calico-kube-controllers-84bd7d8685-njpf7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali35e9493b223", MAC:"de:e0:a6:07:1f:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:11.320750 containerd[1532]: 2025-01-29 12:17:11.317 [INFO][4736] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2" Namespace="calico-system" Pod="calico-kube-controllers-84bd7d8685-njpf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0" Jan 29 12:17:11.343282 containerd[1532]: time="2025-01-29T12:17:11.342742215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:17:11.343469 containerd[1532]: time="2025-01-29T12:17:11.343432815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:17:11.343541 containerd[1532]: time="2025-01-29T12:17:11.343459655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:11.343732 containerd[1532]: time="2025-01-29T12:17:11.343683335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:11.368194 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:17:11.385953 containerd[1532]: time="2025-01-29T12:17:11.385909843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84bd7d8685-njpf7,Uid:ae084314-8f16-437b-b454-2e1d43ea7c97,Namespace:calico-system,Attempt:1,} returns sandbox id \"64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2\"" Jan 29 12:17:11.437920 systemd-networkd[1229]: cali5765cb2417c: Gained IPv6LL Jan 29 12:17:11.886029 systemd-networkd[1229]: cali6b80e296b3c: Gained IPv6LL Jan 29 12:17:11.994763 containerd[1532]: time="2025-01-29T12:17:11.994721488Z" level=info msg="StopPodSandbox for \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\"" Jan 29 12:17:11.995176 containerd[1532]: time="2025-01-29T12:17:11.995088528Z" level=info msg="StopPodSandbox for \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\"" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.058 [INFO][4852] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.058 [INFO][4852] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" iface="eth0" netns="/var/run/netns/cni-5c012c48-7c7e-9290-46c0-e1f6ba80da50" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.059 [INFO][4852] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" iface="eth0" netns="/var/run/netns/cni-5c012c48-7c7e-9290-46c0-e1f6ba80da50" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.059 [INFO][4852] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" iface="eth0" netns="/var/run/netns/cni-5c012c48-7c7e-9290-46c0-e1f6ba80da50" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.059 [INFO][4852] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.059 [INFO][4852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.100 [INFO][4869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" HandleID="k8s-pod-network.3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.101 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.102 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.116 [WARNING][4869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" HandleID="k8s-pod-network.3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.116 [INFO][4869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" HandleID="k8s-pod-network.3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.119 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:17:12.124632 containerd[1532]: 2025-01-29 12:17:12.121 [INFO][4852] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:12.125235 containerd[1532]: time="2025-01-29T12:17:12.124734969Z" level=info msg="TearDown network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\" successfully" Jan 29 12:17:12.125235 containerd[1532]: time="2025-01-29T12:17:12.124762609Z" level=info msg="StopPodSandbox for \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\" returns successfully" Jan 29 12:17:12.126032 kubelet[2700]: E0129 12:17:12.125557 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:12.126335 containerd[1532]: time="2025-01-29T12:17:12.126309730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-td5sx,Uid:fb1eaa67-48a3-4aae-837a-31a50fc03ba9,Namespace:kube-system,Attempt:1,}" Jan 29 12:17:12.129748 kubelet[2700]: I0129 12:17:12.129706 2700 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:17:12.134647 systemd[1]: run-netns-cni\x2d5c012c48\x2d7c7e\x2d9290\x2d46c0\x2de1f6ba80da50.mount: Deactivated successfully. Jan 29 12:17:12.139714 kubelet[2700]: I0129 12:17:12.139470 2700 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.060 [INFO][4853] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.060 [INFO][4853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" iface="eth0" netns="/var/run/netns/cni-f4097aad-545f-c77e-57c9-cc8e0b9c4872" Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.060 [INFO][4853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" iface="eth0" netns="/var/run/netns/cni-f4097aad-545f-c77e-57c9-cc8e0b9c4872" Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.061 [INFO][4853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" iface="eth0" netns="/var/run/netns/cni-f4097aad-545f-c77e-57c9-cc8e0b9c4872" Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.061 [INFO][4853] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.061 [INFO][4853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.149 [INFO][4868] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" HandleID="k8s-pod-network.78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.149 [INFO][4868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.149 [INFO][4868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.171 [WARNING][4868] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" HandleID="k8s-pod-network.78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.171 [INFO][4868] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" HandleID="k8s-pod-network.78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.176 [INFO][4868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:17:12.183908 containerd[1532]: 2025-01-29 12:17:12.180 [INFO][4853] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Jan 29 12:17:12.184528 containerd[1532]: time="2025-01-29T12:17:12.184019406Z" level=info msg="TearDown network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\" successfully" Jan 29 12:17:12.184528 containerd[1532]: time="2025-01-29T12:17:12.184051006Z" level=info msg="StopPodSandbox for \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\" returns successfully" Jan 29 12:17:12.186858 containerd[1532]: time="2025-01-29T12:17:12.184652566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6bb945f-smcsx,Uid:b351bd63-5769-4d05-9e8c-30ffedd0fa67,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:17:12.186690 systemd[1]: run-netns-cni\x2df4097aad\x2d545f\x2dc77e\x2d57c9\x2dcc8e0b9c4872.mount: Deactivated successfully. 
Jan 29 12:17:12.198575 kubelet[2700]: E0129 12:17:12.198427 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:12.313511 systemd-networkd[1229]: calia633430396a: Link UP Jan 29 12:17:12.315100 systemd-networkd[1229]: calia633430396a: Gained carrier Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.220 [INFO][4885] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0 coredns-7db6d8ff4d- kube-system fb1eaa67-48a3-4aae-837a-31a50fc03ba9 919 0 2025-01-29 12:16:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-td5sx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia633430396a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Namespace="kube-system" Pod="coredns-7db6d8ff4d-td5sx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--td5sx-" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.220 [INFO][4885] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Namespace="kube-system" Pod="coredns-7db6d8ff4d-td5sx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.252 [INFO][4912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" HandleID="k8s-pod-network.d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.266 [INFO][4912] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" HandleID="k8s-pod-network.d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000373720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-td5sx", "timestamp":"2025-01-29 12:17:12.252743248 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.266 [INFO][4912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.266 [INFO][4912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.266 [INFO][4912] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.269 [INFO][4912] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" host="localhost" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.273 [INFO][4912] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.279 [INFO][4912] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.282 [INFO][4912] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.285 [INFO][4912] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.285 [INFO][4912] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" host="localhost" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.289 [INFO][4912] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639 Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.295 [INFO][4912] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" host="localhost" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.302 [INFO][4912] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" host="localhost" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.303 [INFO][4912] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" host="localhost" Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.303 [INFO][4912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:17:12.328906 containerd[1532]: 2025-01-29 12:17:12.303 [INFO][4912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" HandleID="k8s-pod-network.d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.330410 containerd[1532]: 2025-01-29 12:17:12.307 [INFO][4885] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Namespace="kube-system" Pod="coredns-7db6d8ff4d-td5sx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb1eaa67-48a3-4aae-837a-31a50fc03ba9", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-td5sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia633430396a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:12.330410 containerd[1532]: 2025-01-29 12:17:12.307 [INFO][4885] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Namespace="kube-system" Pod="coredns-7db6d8ff4d-td5sx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.330410 containerd[1532]: 2025-01-29 12:17:12.307 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia633430396a ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Namespace="kube-system" Pod="coredns-7db6d8ff4d-td5sx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.330410 containerd[1532]: 2025-01-29 12:17:12.311 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Namespace="kube-system" Pod="coredns-7db6d8ff4d-td5sx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.330410 containerd[1532]: 2025-01-29 12:17:12.312 
[INFO][4885] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Namespace="kube-system" Pod="coredns-7db6d8ff4d-td5sx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb1eaa67-48a3-4aae-837a-31a50fc03ba9", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639", Pod:"coredns-7db6d8ff4d-td5sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia633430396a", MAC:"aa:df:71:01:58:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:12.330410 containerd[1532]: 2025-01-29 12:17:12.326 [INFO][4885] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639" Namespace="kube-system" Pod="coredns-7db6d8ff4d-td5sx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:12.365713 systemd-networkd[1229]: cali00ce617bff6: Link UP Jan 29 12:17:12.366401 systemd-networkd[1229]: cali00ce617bff6: Gained carrier Jan 29 12:17:12.381946 containerd[1532]: time="2025-01-29T12:17:12.381850089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:17:12.381946 containerd[1532]: time="2025-01-29T12:17:12.381907089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:17:12.382318 containerd[1532]: time="2025-01-29T12:17:12.381927689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:12.382318 containerd[1532]: time="2025-01-29T12:17:12.382237169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:12.419712 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.238 [INFO][4896] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0 calico-apiserver-fc6bb945f- calico-apiserver b351bd63-5769-4d05-9e8c-30ffedd0fa67 920 0 2025-01-29 12:16:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc6bb945f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fc6bb945f-smcsx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali00ce617bff6 [] []}} ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-smcsx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.239 [INFO][4896] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-smcsx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.281 [INFO][4918] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" HandleID="k8s-pod-network.5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.294 [INFO][4918] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" HandleID="k8s-pod-network.5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027bbd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-fc6bb945f-smcsx", "timestamp":"2025-01-29 12:17:12.281224026 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.294 [INFO][4918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.303 [INFO][4918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.303 [INFO][4918] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.311 [INFO][4918] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" host="localhost" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.323 [INFO][4918] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.335 [INFO][4918] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.340 [INFO][4918] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.343 [INFO][4918] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.343 [INFO][4918] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" host="localhost" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.344 [INFO][4918] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8 Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.354 [INFO][4918] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" host="localhost" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.361 [INFO][4918] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" host="localhost" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.361 [INFO][4918] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" host="localhost" Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.361 [INFO][4918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:17:12.420256 containerd[1532]: 2025-01-29 12:17:12.361 [INFO][4918] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" HandleID="k8s-pod-network.5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.421168 containerd[1532]: 2025-01-29 12:17:12.363 [INFO][4896] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-smcsx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0", GenerateName:"calico-apiserver-fc6bb945f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b351bd63-5769-4d05-9e8c-30ffedd0fa67", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6bb945f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fc6bb945f-smcsx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00ce617bff6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:12.421168 containerd[1532]: 2025-01-29 12:17:12.363 [INFO][4896] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-smcsx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.421168 containerd[1532]: 2025-01-29 12:17:12.363 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00ce617bff6 ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-smcsx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.421168 containerd[1532]: 2025-01-29 12:17:12.366 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-smcsx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.421168 containerd[1532]: 2025-01-29 12:17:12.366 [INFO][4896] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" 
Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-smcsx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0", GenerateName:"calico-apiserver-fc6bb945f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b351bd63-5769-4d05-9e8c-30ffedd0fa67", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6bb945f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8", Pod:"calico-apiserver-fc6bb945f-smcsx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00ce617bff6", MAC:"0a:2e:1f:55:69:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:12.421168 containerd[1532]: 2025-01-29 12:17:12.415 [INFO][4896] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8" Namespace="calico-apiserver" Pod="calico-apiserver-fc6bb945f-smcsx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0" Jan 29 12:17:12.443617 containerd[1532]: time="2025-01-29T12:17:12.443549287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-td5sx,Uid:fb1eaa67-48a3-4aae-837a-31a50fc03ba9,Namespace:kube-system,Attempt:1,} returns sandbox id \"d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639\"" Jan 29 12:17:12.445062 kubelet[2700]: E0129 12:17:12.445028 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:12.447724 containerd[1532]: time="2025-01-29T12:17:12.447697450Z" level=info msg="CreateContainer within sandbox \"d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:17:12.462895 containerd[1532]: time="2025-01-29T12:17:12.462767219Z" level=info msg="CreateContainer within sandbox \"d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"28b52728035c5d25d2b0b30ed1c645358a2ff53a595948e20636cbb2f311bd22\"" Jan 29 12:17:12.463212 containerd[1532]: time="2025-01-29T12:17:12.461511298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:17:12.463212 containerd[1532]: time="2025-01-29T12:17:12.463169499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:17:12.463212 containerd[1532]: time="2025-01-29T12:17:12.463181419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:12.463428 containerd[1532]: time="2025-01-29T12:17:12.463262099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:17:12.463488 containerd[1532]: time="2025-01-29T12:17:12.463444580Z" level=info msg="StartContainer for \"28b52728035c5d25d2b0b30ed1c645358a2ff53a595948e20636cbb2f311bd22\"" Jan 29 12:17:12.492557 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:17:12.515203 containerd[1532]: time="2025-01-29T12:17:12.515159932Z" level=info msg="StartContainer for \"28b52728035c5d25d2b0b30ed1c645358a2ff53a595948e20636cbb2f311bd22\" returns successfully" Jan 29 12:17:12.523067 containerd[1532]: time="2025-01-29T12:17:12.522954017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6bb945f-smcsx,Uid:b351bd63-5769-4d05-9e8c-30ffedd0fa67,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8\"" Jan 29 12:17:12.589918 systemd-networkd[1229]: cali35e9493b223: Gained IPv6LL Jan 29 12:17:12.873368 containerd[1532]: time="2025-01-29T12:17:12.873248875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:12.875232 containerd[1532]: time="2025-01-29T12:17:12.874516635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 12:17:12.875806 containerd[1532]: time="2025-01-29T12:17:12.875417436Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:12.878778 containerd[1532]: time="2025-01-29T12:17:12.878098638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:12.879020 containerd[1532]: time="2025-01-29T12:17:12.878905438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.813181607s" Jan 29 12:17:12.879020 containerd[1532]: time="2025-01-29T12:17:12.878944758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 12:17:12.880874 containerd[1532]: time="2025-01-29T12:17:12.880824199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:17:12.885886 containerd[1532]: time="2025-01-29T12:17:12.885825122Z" level=info msg="CreateContainer within sandbox \"ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:17:12.898091 containerd[1532]: 
time="2025-01-29T12:17:12.898047490Z" level=info msg="CreateContainer within sandbox \"ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"57e1efabb6ae0d12b1ce72edc77b3a406305d3f6cfe6acfdfc2f0ecded3dfc0b\"" Jan 29 12:17:12.898914 containerd[1532]: time="2025-01-29T12:17:12.898681730Z" level=info msg="StartContainer for \"57e1efabb6ae0d12b1ce72edc77b3a406305d3f6cfe6acfdfc2f0ecded3dfc0b\"" Jan 29 12:17:13.043500 containerd[1532]: time="2025-01-29T12:17:13.041912858Z" level=info msg="StartContainer for \"57e1efabb6ae0d12b1ce72edc77b3a406305d3f6cfe6acfdfc2f0ecded3dfc0b\" returns successfully" Jan 29 12:17:13.207414 kubelet[2700]: E0129 12:17:13.206180 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:13.207414 kubelet[2700]: E0129 12:17:13.206765 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:13.232891 kubelet[2700]: I0129 12:17:13.232834 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fc6bb945f-c7sf8" podStartSLOduration=22.798362365 podStartE2EDuration="25.232818169s" podCreationTimestamp="2025-01-29 12:16:48 +0000 UTC" firstStartedPulling="2025-01-29 12:17:10.446044195 +0000 UTC m=+42.523255641" lastFinishedPulling="2025-01-29 12:17:12.880499999 +0000 UTC m=+44.957711445" observedRunningTime="2025-01-29 12:17:13.231553049 +0000 UTC m=+45.308764495" watchObservedRunningTime="2025-01-29 12:17:13.232818169 +0000 UTC m=+45.310029575" Jan 29 12:17:13.250812 kubelet[2700]: I0129 12:17:13.249232 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-td5sx" podStartSLOduration=30.249212139 podStartE2EDuration="30.249212139s" podCreationTimestamp="2025-01-29 12:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:17:13.248058218 +0000 UTC m=+45.325269664" watchObservedRunningTime="2025-01-29 12:17:13.249212139 +0000 UTC m=+45.326423545" Jan 29 12:17:13.421910 systemd-networkd[1229]: calia633430396a: Gained IPv6LL Jan 29 12:17:14.209533 kubelet[2700]: E0129 12:17:14.209505 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:14.317941 systemd-networkd[1229]: cali00ce617bff6: Gained IPv6LL Jan 29 12:17:14.457608 containerd[1532]: time="2025-01-29T12:17:14.457562587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:14.458455 containerd[1532]: time="2025-01-29T12:17:14.458084148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 12:17:14.459933 containerd[1532]: time="2025-01-29T12:17:14.459633749Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:14.463807 containerd[1532]: time="2025-01-29T12:17:14.463661271Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.582799832s" Jan 29 12:17:14.463807 containerd[1532]: time="2025-01-29T12:17:14.463704151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 12:17:14.464517 containerd[1532]: time="2025-01-29T12:17:14.464487391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:14.465107 containerd[1532]: time="2025-01-29T12:17:14.465072631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:17:14.475799 containerd[1532]: time="2025-01-29T12:17:14.474439877Z" level=info msg="CreateContainer within sandbox \"64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:17:14.485287 containerd[1532]: time="2025-01-29T12:17:14.485248203Z" level=info msg="CreateContainer within sandbox \"64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f09373416227aa9798b3287d288ed4df3e1dcd6fdf4dfd46e16a9a5a486dcb20\"" Jan 29 12:17:14.485783 containerd[1532]: time="2025-01-29T12:17:14.485746483Z" level=info msg="StartContainer for \"f09373416227aa9798b3287d288ed4df3e1dcd6fdf4dfd46e16a9a5a486dcb20\"" Jan 29 12:17:14.540555 containerd[1532]: time="2025-01-29T12:17:14.540511113Z" level=info msg="StartContainer for \"f09373416227aa9798b3287d288ed4df3e1dcd6fdf4dfd46e16a9a5a486dcb20\" returns successfully" Jan 29 12:17:14.712402 containerd[1532]: time="2025-01-29T12:17:14.712295167Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:17:14.713031 containerd[1532]: time="2025-01-29T12:17:14.712907927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:17:14.715104 containerd[1532]: time="2025-01-29T12:17:14.715065088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 249.954096ms" Jan 29 12:17:14.715104 containerd[1532]: time="2025-01-29T12:17:14.715101688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 12:17:14.717388 containerd[1532]: time="2025-01-29T12:17:14.717353769Z" level=info msg="CreateContainer within sandbox \"5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:17:14.738840 containerd[1532]: time="2025-01-29T12:17:14.738796341Z" level=info 
msg="CreateContainer within sandbox \"5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"65fed1d13f1254fdf7310c049d9e74c6e63d53e937b10447b1dace27a8d60311\"" Jan 29 12:17:14.739249 containerd[1532]: time="2025-01-29T12:17:14.739219901Z" level=info msg="StartContainer for \"65fed1d13f1254fdf7310c049d9e74c6e63d53e937b10447b1dace27a8d60311\"" Jan 29 12:17:14.789665 containerd[1532]: time="2025-01-29T12:17:14.789628369Z" level=info msg="StartContainer for \"65fed1d13f1254fdf7310c049d9e74c6e63d53e937b10447b1dace27a8d60311\" returns successfully" Jan 29 12:17:15.217171 kubelet[2700]: E0129 12:17:15.217142 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:15.255264 kubelet[2700]: I0129 12:17:15.255099 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fc6bb945f-smcsx" podStartSLOduration=25.063919384 podStartE2EDuration="27.255080855s" podCreationTimestamp="2025-01-29 12:16:48 +0000 UTC" firstStartedPulling="2025-01-29 12:17:12.524538818 +0000 UTC m=+44.601750264" lastFinishedPulling="2025-01-29 12:17:14.715700289 +0000 UTC m=+46.792911735" observedRunningTime="2025-01-29 12:17:15.253062494 +0000 UTC m=+47.330273940" watchObservedRunningTime="2025-01-29 12:17:15.255080855 +0000 UTC m=+47.332292261" Jan 29 12:17:15.306781 kubelet[2700]: I0129 12:17:15.306713 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84bd7d8685-njpf7" podStartSLOduration=24.229075894 podStartE2EDuration="27.306696481s" podCreationTimestamp="2025-01-29 12:16:48 +0000 UTC" firstStartedPulling="2025-01-29 12:17:11.387137484 +0000 UTC m=+43.464348890" lastFinishedPulling="2025-01-29 12:17:14.464758031 +0000 UTC m=+46.541969477" observedRunningTime="2025-01-29 12:17:15.271602103 +0000 UTC m=+47.348813549" watchObservedRunningTime="2025-01-29 12:17:15.306696481 +0000 UTC m=+47.383907927" Jan 29 12:17:15.689023 systemd[1]: Started sshd@12-10.0.0.145:22-10.0.0.1:48688.service - OpenSSH per-connection server daemon (10.0.0.1:48688). Jan 29 12:17:15.734736 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 48688 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:17:15.736138 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:17:15.749235 systemd-logind[1510]: New session 13 of user core. Jan 29 12:17:15.756096 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:17:15.954408 sshd[5242]: pam_unix(sshd:session): session closed for user core Jan 29 12:17:15.958352 systemd[1]: sshd@12-10.0.0.145:22-10.0.0.1:48688.service: Deactivated successfully. Jan 29 12:17:15.960457 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:17:15.961346 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:17:15.964514 systemd-logind[1510]: Removed session 13. Jan 29 12:17:20.970077 systemd[1]: Started sshd@13-10.0.0.145:22-10.0.0.1:48694.service - OpenSSH per-connection server daemon (10.0.0.1:48694). 
Jan 29 12:17:21.009941 sshd[5264]: Accepted publickey for core from 10.0.0.1 port 48694 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:17:21.011427 sshd[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:17:21.015765 systemd-logind[1510]: New session 14 of user core. Jan 29 12:17:21.026153 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:17:21.173586 sshd[5264]: pam_unix(sshd:session): session closed for user core Jan 29 12:17:21.182821 systemd[1]: sshd@13-10.0.0.145:22-10.0.0.1:48694.service: Deactivated successfully. Jan 29 12:17:21.184816 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:17:21.184882 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:17:21.185952 systemd-logind[1510]: Removed session 14. Jan 29 12:17:21.347152 kubelet[2700]: E0129 12:17:21.347036 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:17:26.195058 systemd[1]: Started sshd@14-10.0.0.145:22-10.0.0.1:44076.service - OpenSSH per-connection server daemon (10.0.0.1:44076). Jan 29 12:17:26.236815 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 44076 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:17:26.238353 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:17:26.242856 systemd-logind[1510]: New session 15 of user core. Jan 29 12:17:26.249051 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:17:26.395682 sshd[5329]: pam_unix(sshd:session): session closed for user core Jan 29 12:17:26.398812 systemd[1]: sshd@14-10.0.0.145:22-10.0.0.1:44076.service: Deactivated successfully. Jan 29 12:17:26.400690 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:17:26.400761 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:17:26.401529 systemd-logind[1510]: Removed session 15. Jan 29 12:17:27.980891 containerd[1532]: time="2025-01-29T12:17:27.980858539Z" level=info msg="StopPodSandbox for \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\"" Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.016 [WARNING][5360] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0", GenerateName:"calico-apiserver-fc6bb945f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a416aba-2cd9-4f5e-bf85-751955865be7", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6bb945f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b", Pod:"calico-apiserver-fc6bb945f-c7sf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b80e296b3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.016 [INFO][5360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.016 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" iface="eth0" netns="" Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.016 [INFO][5360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.016 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.036 [INFO][5370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" HandleID="k8s-pod-network.d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.036 [INFO][5370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.036 [INFO][5370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.044 [WARNING][5370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" HandleID="k8s-pod-network.d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.044 [INFO][5370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" HandleID="k8s-pod-network.d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.045 [INFO][5370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:17:28.050535 containerd[1532]: 2025-01-29 12:17:28.048 [INFO][5360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:28.050535 containerd[1532]: time="2025-01-29T12:17:28.050399835Z" level=info msg="TearDown network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\" successfully" Jan 29 12:17:28.050535 containerd[1532]: time="2025-01-29T12:17:28.050432475Z" level=info msg="StopPodSandbox for \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\" returns successfully" Jan 29 12:17:28.051010 containerd[1532]: time="2025-01-29T12:17:28.050954355Z" level=info msg="RemovePodSandbox for \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\"" Jan 29 12:17:28.055904 containerd[1532]: time="2025-01-29T12:17:28.055862356Z" level=info msg="Forcibly stopping sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\"" Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.093 [WARNING][5393] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0", GenerateName:"calico-apiserver-fc6bb945f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a416aba-2cd9-4f5e-bf85-751955865be7", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6bb945f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddb3dd2d7dfac00c2c980e0ce1808d37f9afe2928d2e1a1152c2ecbe88cd215b", Pod:"calico-apiserver-fc6bb945f-c7sf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b80e296b3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.094 [INFO][5393] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.094 [INFO][5393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" iface="eth0" netns="" Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.094 [INFO][5393] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.094 [INFO][5393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.113 [INFO][5400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" HandleID="k8s-pod-network.d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.113 [INFO][5400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.113 [INFO][5400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.121 [WARNING][5400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" HandleID="k8s-pod-network.d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.121 [INFO][5400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" HandleID="k8s-pod-network.d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Workload="localhost-k8s-calico--apiserver--fc6bb945f--c7sf8-eth0" Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.122 [INFO][5400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:17:28.125859 containerd[1532]: 2025-01-29 12:17:28.124 [INFO][5393] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e" Jan 29 12:17:28.126280 containerd[1532]: time="2025-01-29T12:17:28.125889972Z" level=info msg="TearDown network for sandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\" successfully" Jan 29 12:17:28.137370 containerd[1532]: time="2025-01-29T12:17:28.137315774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:17:28.137469 containerd[1532]: time="2025-01-29T12:17:28.137432574Z" level=info msg="RemovePodSandbox \"d4dc045f3f141d6a9955aaf6a1aa110b10511787f54675f99759b446ce73515e\" returns successfully" Jan 29 12:17:28.138060 containerd[1532]: time="2025-01-29T12:17:28.138027974Z" level=info msg="StopPodSandbox for \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\"" Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.172 [WARNING][5423] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb1eaa67-48a3-4aae-837a-31a50fc03ba9", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639", Pod:"coredns-7db6d8ff4d-td5sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia633430396a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.172 [INFO][5423] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.172 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" iface="eth0" netns="" Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.172 [INFO][5423] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.172 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.195 [INFO][5430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" HandleID="k8s-pod-network.3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.196 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.196 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.206 [WARNING][5430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" HandleID="k8s-pod-network.3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.206 [INFO][5430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" HandleID="k8s-pod-network.3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.207 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:17:28.216245 containerd[1532]: 2025-01-29 12:17:28.210 [INFO][5423] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:28.217567 containerd[1532]: time="2025-01-29T12:17:28.216387272Z" level=info msg="TearDown network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\" successfully" Jan 29 12:17:28.217567 containerd[1532]: time="2025-01-29T12:17:28.216907592Z" level=info msg="StopPodSandbox for \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\" returns successfully" Jan 29 12:17:28.217567 containerd[1532]: time="2025-01-29T12:17:28.217301552Z" level=info msg="RemovePodSandbox for \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\"" Jan 29 12:17:28.217567 containerd[1532]: time="2025-01-29T12:17:28.217328392Z" level=info msg="Forcibly stopping sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\"" Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.255 [WARNING][5453] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb1eaa67-48a3-4aae-837a-31a50fc03ba9", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1a733517753fd8060a74f1863975c5da30de779db716c7fea4707b532589639", Pod:"coredns-7db6d8ff4d-td5sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia633430396a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.255 [INFO][5453] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.255 [INFO][5453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" iface="eth0" netns="" Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.255 [INFO][5453] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.255 [INFO][5453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.274 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" HandleID="k8s-pod-network.3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0" Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.274 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.274 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.283 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" HandleID="k8s-pod-network.3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0"
Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.283 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" HandleID="k8s-pod-network.3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67" Workload="localhost-k8s-coredns--7db6d8ff4d--td5sx-eth0"
Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.284 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:17:28.288420 containerd[1532]: 2025-01-29 12:17:28.286 [INFO][5453] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67"
Jan 29 12:17:28.288420 containerd[1532]: time="2025-01-29T12:17:28.288051728Z" level=info msg="TearDown network for sandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\" successfully"
Jan 29 12:17:28.294801 containerd[1532]: time="2025-01-29T12:17:28.294623129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:17:28.294801 containerd[1532]: time="2025-01-29T12:17:28.294693569Z" level=info msg="RemovePodSandbox \"3be58aea7f4d5ed067fa4cb0cf0791420b7e10955732b60854a5e6c5c94e0a67\" returns successfully"
Jan 29 12:17:28.295163 containerd[1532]: time="2025-01-29T12:17:28.295138249Z" level=info msg="StopPodSandbox for \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\""
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.329 [WARNING][5483] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tpz9c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a277c46-b25e-4ca8-b105-0086d4736c88", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4", Pod:"csi-node-driver-tpz9c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali801fc5ad47a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.329 [INFO][5483] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325"
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.329 [INFO][5483] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" iface="eth0" netns=""
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.329 [INFO][5483] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325"
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.329 [INFO][5483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325"
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.348 [INFO][5491] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" HandleID="k8s-pod-network.a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0"
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.348 [INFO][5491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.348 [INFO][5491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.356 [WARNING][5491] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" HandleID="k8s-pod-network.a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0"
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.356 [INFO][5491] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" HandleID="k8s-pod-network.a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0"
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.357 [INFO][5491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:17:28.361352 containerd[1532]: 2025-01-29 12:17:28.359 [INFO][5483] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325"
Jan 29 12:17:28.361352 containerd[1532]: time="2025-01-29T12:17:28.361234264Z" level=info msg="TearDown network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\" successfully"
Jan 29 12:17:28.361352 containerd[1532]: time="2025-01-29T12:17:28.361259504Z" level=info msg="StopPodSandbox for \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\" returns successfully"
Jan 29 12:17:28.362194 containerd[1532]: time="2025-01-29T12:17:28.362149464Z" level=info msg="RemovePodSandbox for \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\""
Jan 29 12:17:28.362248 containerd[1532]: time="2025-01-29T12:17:28.362198544Z" level=info msg="Forcibly stopping sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\""
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.398 [WARNING][5514] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tpz9c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a277c46-b25e-4ca8-b105-0086d4736c88", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9072ac46374f5e1f94056bdfbdf6890fe49185ba1ddf7c8b717c540a7ecb17d4", Pod:"csi-node-driver-tpz9c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali801fc5ad47a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.399 [INFO][5514] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325"
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.399 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" iface="eth0" netns=""
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.399 [INFO][5514] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325"
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.399 [INFO][5514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325"
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.418 [INFO][5522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" HandleID="k8s-pod-network.a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0"
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.418 [INFO][5522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.418 [INFO][5522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.426 [WARNING][5522] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" HandleID="k8s-pod-network.a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0"
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.426 [INFO][5522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" HandleID="k8s-pod-network.a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325" Workload="localhost-k8s-csi--node--driver--tpz9c-eth0"
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.428 [INFO][5522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:17:28.432836 containerd[1532]: 2025-01-29 12:17:28.430 [INFO][5514] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325"
Jan 29 12:17:28.433315 containerd[1532]: time="2025-01-29T12:17:28.432867920Z" level=info msg="TearDown network for sandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\" successfully"
Jan 29 12:17:28.436059 containerd[1532]: time="2025-01-29T12:17:28.436016960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:17:28.436135 containerd[1532]: time="2025-01-29T12:17:28.436094600Z" level=info msg="RemovePodSandbox \"a63c57c9638d531e0c1db30bb7d30bc40ecef4e4854ff466c96651afdaca3325\" returns successfully"
Jan 29 12:17:28.436866 containerd[1532]: time="2025-01-29T12:17:28.436565121Z" level=info msg="StopPodSandbox for \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\""
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.471 [WARNING][5546] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ae1f8eed-5d91-4993-93c9-eecda1e1b81f", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796", Pod:"coredns-7db6d8ff4d-b4d6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5765cb2417c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.471 [INFO][5546] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a"
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.471 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" iface="eth0" netns=""
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.471 [INFO][5546] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a"
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.471 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a"
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.489 [INFO][5553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" HandleID="k8s-pod-network.afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0"
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.490 [INFO][5553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.490 [INFO][5553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.497 [WARNING][5553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" HandleID="k8s-pod-network.afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0"
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.497 [INFO][5553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" HandleID="k8s-pod-network.afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0"
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.499 [INFO][5553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:17:28.502568 containerd[1532]: 2025-01-29 12:17:28.500 [INFO][5546] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a"
Jan 29 12:17:28.503184 containerd[1532]: time="2025-01-29T12:17:28.503059535Z" level=info msg="TearDown network for sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\" successfully"
Jan 29 12:17:28.503184 containerd[1532]: time="2025-01-29T12:17:28.503090015Z" level=info msg="StopPodSandbox for \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\" returns successfully"
Jan 29 12:17:28.503675 containerd[1532]: time="2025-01-29T12:17:28.503648295Z" level=info msg="RemovePodSandbox for \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\""
Jan 29 12:17:28.503731 containerd[1532]: time="2025-01-29T12:17:28.503685215Z" level=info msg="Forcibly stopping sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\""
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.540 [WARNING][5575] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ae1f8eed-5d91-4993-93c9-eecda1e1b81f", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b323015bca216cfac0f037bb6a430771f033a75618d8d1de07be19b216124796", Pod:"coredns-7db6d8ff4d-b4d6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5765cb2417c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.540 [INFO][5575] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a"
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.540 [INFO][5575] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" iface="eth0" netns=""
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.540 [INFO][5575] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a"
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.540 [INFO][5575] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a"
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.559 [INFO][5582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" HandleID="k8s-pod-network.afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0"
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.559 [INFO][5582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.559 [INFO][5582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.568 [WARNING][5582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" HandleID="k8s-pod-network.afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0"
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.568 [INFO][5582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" HandleID="k8s-pod-network.afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a" Workload="localhost-k8s-coredns--7db6d8ff4d--b4d6f-eth0"
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.570 [INFO][5582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:17:28.574210 containerd[1532]: 2025-01-29 12:17:28.572 [INFO][5575] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a"
Jan 29 12:17:28.574210 containerd[1532]: time="2025-01-29T12:17:28.574183671Z" level=info msg="TearDown network for sandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\" successfully"
Jan 29 12:17:28.577436 containerd[1532]: time="2025-01-29T12:17:28.577385352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:17:28.577522 containerd[1532]: time="2025-01-29T12:17:28.577450112Z" level=info msg="RemovePodSandbox \"afe493f052788e03066675c39be4e950dbb8836d0d2b289e19677f28ef06941a\" returns successfully"
Jan 29 12:17:28.577957 containerd[1532]: time="2025-01-29T12:17:28.577932192Z" level=info msg="StopPodSandbox for \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\""
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.613 [WARNING][5605] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0", GenerateName:"calico-kube-controllers-84bd7d8685-", Namespace:"calico-system", SelfLink:"", UID:"ae084314-8f16-437b-b454-2e1d43ea7c97", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84bd7d8685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2", Pod:"calico-kube-controllers-84bd7d8685-njpf7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali35e9493b223", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.614 [INFO][5605] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b"
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.614 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" iface="eth0" netns=""
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.614 [INFO][5605] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b"
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.614 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b"
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.632 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" HandleID="k8s-pod-network.8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0"
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.632 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.632 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.640 [WARNING][5613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" HandleID="k8s-pod-network.8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0"
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.640 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" HandleID="k8s-pod-network.8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0"
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.641 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:17:28.645055 containerd[1532]: 2025-01-29 12:17:28.643 [INFO][5605] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b"
Jan 29 12:17:28.645845 containerd[1532]: time="2025-01-29T12:17:28.645516327Z" level=info msg="TearDown network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\" successfully"
Jan 29 12:17:28.645845 containerd[1532]: time="2025-01-29T12:17:28.645548607Z" level=info msg="StopPodSandbox for \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\" returns successfully"
Jan 29 12:17:28.646180 containerd[1532]: time="2025-01-29T12:17:28.646152487Z" level=info msg="RemovePodSandbox for \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\""
Jan 29 12:17:28.646236 containerd[1532]: time="2025-01-29T12:17:28.646190967Z" level=info msg="Forcibly stopping sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\""
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.679 [WARNING][5636] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0", GenerateName:"calico-kube-controllers-84bd7d8685-", Namespace:"calico-system", SelfLink:"", UID:"ae084314-8f16-437b-b454-2e1d43ea7c97", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84bd7d8685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64987a6a6f0d6f0233bc1198aaee7b1a1abde6af2a118e08254eda58a5fd77c2", Pod:"calico-kube-controllers-84bd7d8685-njpf7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali35e9493b223", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.679 [INFO][5636] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b"
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.679 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" iface="eth0" netns=""
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.679 [INFO][5636] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b"
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.679 [INFO][5636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b"
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.699 [INFO][5644] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" HandleID="k8s-pod-network.8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0"
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.699 [INFO][5644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.699 [INFO][5644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.707 [WARNING][5644] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" HandleID="k8s-pod-network.8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0"
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.707 [INFO][5644] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" HandleID="k8s-pod-network.8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b" Workload="localhost-k8s-calico--kube--controllers--84bd7d8685--njpf7-eth0"
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.708 [INFO][5644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:17:28.712433 containerd[1532]: 2025-01-29 12:17:28.710 [INFO][5636] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b"
Jan 29 12:17:28.712433 containerd[1532]: time="2025-01-29T12:17:28.712364342Z" level=info msg="TearDown network for sandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\" successfully"
Jan 29 12:17:28.714972 containerd[1532]: time="2025-01-29T12:17:28.714923262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:17:28.715037 containerd[1532]: time="2025-01-29T12:17:28.714983542Z" level=info msg="RemovePodSandbox \"8c7d6f14657a3babe3bfe3f06b443113c2c2eefeb73d2f5515ba979492bdc15b\" returns successfully"
Jan 29 12:17:28.715395 containerd[1532]: time="2025-01-29T12:17:28.715375822Z" level=info msg="StopPodSandbox for \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\""
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.750 [WARNING][5666] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0", GenerateName:"calico-apiserver-fc6bb945f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b351bd63-5769-4d05-9e8c-30ffedd0fa67", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6bb945f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8", Pod:"calico-apiserver-fc6bb945f-smcsx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00ce617bff6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.751 [INFO][5666] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9"
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.751 [INFO][5666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" iface="eth0" netns=""
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.751 [INFO][5666] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9"
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.751 [INFO][5666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9"
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.771 [INFO][5674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" HandleID="k8s-pod-network.78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0"
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.772 [INFO][5674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.772 [INFO][5674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.780 [WARNING][5674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" HandleID="k8s-pod-network.78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0"
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.780 [INFO][5674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" HandleID="k8s-pod-network.78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0"
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.781 [INFO][5674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:17:28.784954 containerd[1532]: 2025-01-29 12:17:28.783 [INFO][5666] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9"
Jan 29 12:17:28.785372 containerd[1532]: time="2025-01-29T12:17:28.784988678Z" level=info msg="TearDown network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\" successfully"
Jan 29 12:17:28.785372 containerd[1532]: time="2025-01-29T12:17:28.785012678Z" level=info msg="StopPodSandbox for \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\" returns successfully"
Jan 29 12:17:28.785908 containerd[1532]: time="2025-01-29T12:17:28.785861918Z" level=info msg="RemovePodSandbox for \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\""
Jan 29 12:17:28.785972 containerd[1532]: time="2025-01-29T12:17:28.785914638Z" level=info msg="Forcibly stopping sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\""
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.820 [WARNING][5696] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0", GenerateName:"calico-apiserver-fc6bb945f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b351bd63-5769-4d05-9e8c-30ffedd0fa67", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 16, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6bb945f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d98f18205ab7e5e4e3fb7c0477f1cff0de7ea416ed08453aeb24e1e6b1baea8", Pod:"calico-apiserver-fc6bb945f-smcsx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00ce617bff6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.820 [INFO][5696] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9"
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.820 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" iface="eth0" netns=""
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.820 [INFO][5696] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9"
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.820 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9"
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.840 [INFO][5703] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" HandleID="k8s-pod-network.78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0"
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.840 [INFO][5703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.840 [INFO][5703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.847 [WARNING][5703] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" HandleID="k8s-pod-network.78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0"
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.847 [INFO][5703] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" HandleID="k8s-pod-network.78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9" Workload="localhost-k8s-calico--apiserver--fc6bb945f--smcsx-eth0"
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.849 [INFO][5703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:17:28.853859 containerd[1532]: 2025-01-29 12:17:28.850 [INFO][5696] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9"
Jan 29 12:17:28.853859 containerd[1532]: time="2025-01-29T12:17:28.852730653Z" level=info msg="TearDown network for sandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\" successfully"
Jan 29 12:17:28.857751 containerd[1532]: time="2025-01-29T12:17:28.857713294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:17:28.857895 containerd[1532]: time="2025-01-29T12:17:28.857863214Z" level=info msg="RemovePodSandbox \"78ba7ae506e717ec8e99737015a946635543f2c7fba969d0c7d64c7bdefb6ce9\" returns successfully"
Jan 29 12:17:31.412128 systemd[1]: Started sshd@15-10.0.0.145:22-10.0.0.1:44080.service - OpenSSH per-connection server daemon (10.0.0.1:44080).
Jan 29 12:17:31.452707 sshd[5710]: Accepted publickey for core from 10.0.0.1 port 44080 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:17:31.454297 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:17:31.458171 systemd-logind[1510]: New session 16 of user core.
Jan 29 12:17:31.470032 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 12:17:31.623971 sshd[5710]: pam_unix(sshd:session): session closed for user core
Jan 29 12:17:31.634054 systemd[1]: Started sshd@16-10.0.0.145:22-10.0.0.1:44090.service - OpenSSH per-connection server daemon (10.0.0.1:44090).
Jan 29 12:17:31.634429 systemd[1]: sshd@15-10.0.0.145:22-10.0.0.1:44080.service: Deactivated successfully.
Jan 29 12:17:31.637181 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 12:17:31.637808 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit.
Jan 29 12:17:31.639066 systemd-logind[1510]: Removed session 16.
Jan 29 12:17:31.668413 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 44090 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:17:31.670282 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:17:31.675410 systemd-logind[1510]: New session 17 of user core.
Jan 29 12:17:31.684120 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 12:17:31.885901 sshd[5723]: pam_unix(sshd:session): session closed for user core
Jan 29 12:17:31.892015 systemd[1]: Started sshd@17-10.0.0.145:22-10.0.0.1:44102.service - OpenSSH per-connection server daemon (10.0.0.1:44102).
Jan 29 12:17:31.892398 systemd[1]: sshd@16-10.0.0.145:22-10.0.0.1:44090.service: Deactivated successfully.
Jan 29 12:17:31.895639 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit.
Jan 29 12:17:31.896169 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 12:17:31.898216 systemd-logind[1510]: Removed session 17.
Jan 29 12:17:31.929642 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 44102 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:17:31.931259 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:17:31.935139 systemd-logind[1510]: New session 18 of user core.
Jan 29 12:17:31.946079 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 12:17:33.490434 sshd[5737]: pam_unix(sshd:session): session closed for user core
Jan 29 12:17:33.498063 systemd[1]: Started sshd@18-10.0.0.145:22-10.0.0.1:53064.service - OpenSSH per-connection server daemon (10.0.0.1:53064).
Jan 29 12:17:33.498425 systemd[1]: sshd@17-10.0.0.145:22-10.0.0.1:44102.service: Deactivated successfully.
Jan 29 12:17:33.504536 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 12:17:33.512037 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit.
Jan 29 12:17:33.518629 systemd-logind[1510]: Removed session 18.
Jan 29 12:17:33.547912 sshd[5759]: Accepted publickey for core from 10.0.0.1 port 53064 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:17:33.549229 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:17:33.554935 systemd-logind[1510]: New session 19 of user core.
Jan 29 12:17:33.567038 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 12:17:33.844494 sshd[5759]: pam_unix(sshd:session): session closed for user core
Jan 29 12:17:33.859971 systemd[1]: Started sshd@19-10.0.0.145:22-10.0.0.1:53072.service - OpenSSH per-connection server daemon (10.0.0.1:53072).
Jan 29 12:17:33.860442 systemd[1]: sshd@18-10.0.0.145:22-10.0.0.1:53064.service: Deactivated successfully.
Jan 29 12:17:33.862127 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 12:17:33.865626 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit.
Jan 29 12:17:33.870602 systemd-logind[1510]: Removed session 19.
Jan 29 12:17:33.901257 sshd[5777]: Accepted publickey for core from 10.0.0.1 port 53072 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:17:33.903028 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:17:33.912877 systemd-logind[1510]: New session 20 of user core.
Jan 29 12:17:33.920122 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 12:17:34.087936 sshd[5777]: pam_unix(sshd:session): session closed for user core
Jan 29 12:17:34.092389 systemd[1]: sshd@19-10.0.0.145:22-10.0.0.1:53072.service: Deactivated successfully.
Jan 29 12:17:34.093652 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit.
Jan 29 12:17:34.094204 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 12:17:34.099007 systemd-logind[1510]: Removed session 20.
Jan 29 12:17:39.097058 systemd[1]: Started sshd@20-10.0.0.145:22-10.0.0.1:53086.service - OpenSSH per-connection server daemon (10.0.0.1:53086).
Jan 29 12:17:39.134730 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 53086 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:17:39.136198 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:17:39.139878 systemd-logind[1510]: New session 21 of user core.
Jan 29 12:17:39.149046 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 12:17:39.292020 sshd[5815]: pam_unix(sshd:session): session closed for user core
Jan 29 12:17:39.295246 systemd[1]: sshd@20-10.0.0.145:22-10.0.0.1:53086.service: Deactivated successfully.
Jan 29 12:17:39.297212 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit.
Jan 29 12:17:39.297288 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 12:17:39.298237 systemd-logind[1510]: Removed session 21.
Jan 29 12:17:44.303014 systemd[1]: Started sshd@21-10.0.0.145:22-10.0.0.1:40778.service - OpenSSH per-connection server daemon (10.0.0.1:40778).
Jan 29 12:17:44.337388 sshd[5838]: Accepted publickey for core from 10.0.0.1 port 40778 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:17:44.338584 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:17:44.342769 systemd-logind[1510]: New session 22 of user core.
Jan 29 12:17:44.356173 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 12:17:44.480188 sshd[5838]: pam_unix(sshd:session): session closed for user core
Jan 29 12:17:44.483404 systemd[1]: sshd@21-10.0.0.145:22-10.0.0.1:40778.service: Deactivated successfully.
Jan 29 12:17:44.485368 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit.
Jan 29 12:17:44.485428 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 12:17:44.486843 systemd-logind[1510]: Removed session 22.
Jan 29 12:17:49.493022 systemd[1]: Started sshd@22-10.0.0.145:22-10.0.0.1:40782.service - OpenSSH per-connection server daemon (10.0.0.1:40782).
Jan 29 12:17:49.526658 sshd[5854]: Accepted publickey for core from 10.0.0.1 port 40782 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:17:49.527959 sshd[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:17:49.531432 systemd-logind[1510]: New session 23 of user core.
Jan 29 12:17:49.538027 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 12:17:49.672062 sshd[5854]: pam_unix(sshd:session): session closed for user core
Jan 29 12:17:49.675227 systemd[1]: sshd@22-10.0.0.145:22-10.0.0.1:40782.service: Deactivated successfully.
Jan 29 12:17:49.677123 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit.
Jan 29 12:17:49.677206 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 12:17:49.678205 systemd-logind[1510]: Removed session 23.