Jan 29 11:58:47.899091 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:58:47.899111 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 29 11:58:47.899121 kernel: KASLR enabled
Jan 29 11:58:47.899127 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:58:47.899133 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 29 11:58:47.899138 kernel: random: crng init done
Jan 29 11:58:47.899146 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:58:47.899152 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 29 11:58:47.899170 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:58:47.899178 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:58:47.899185 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:58:47.899191 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:58:47.899197 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:58:47.899203 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:58:47.899211 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:58:47.899218 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:58:47.899225 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:58:47.899231 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:58:47.899238 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:58:47.899244 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:58:47.899251 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:58:47.899258 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 11:58:47.899264 kernel: Zone ranges:
Jan 29 11:58:47.899270 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:58:47.899276 kernel: DMA32 empty
Jan 29 11:58:47.899284 kernel: Normal empty
Jan 29 11:58:47.899298 kernel: Movable zone start for each node
Jan 29 11:58:47.899305 kernel: Early memory node ranges
Jan 29 11:58:47.899311 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 11:58:47.899318 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:58:47.899324 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:58:47.899331 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:58:47.899337 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:58:47.899343 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:58:47.899349 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:58:47.899356 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:58:47.899362 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:58:47.899370 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:58:47.899377 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:58:47.899383 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:58:47.899392 kernel: psci: Trusted OS migration not required
Jan 29 11:58:47.899399 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:58:47.899406 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:58:47.899414 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:58:47.899420 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:58:47.899430 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:58:47.899437 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:58:47.899444 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:58:47.899450 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:58:47.899457 kernel: CPU features: detected: Spectre-v4
Jan 29 11:58:47.899463 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:58:47.899470 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:58:47.899477 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:58:47.899486 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:58:47.899492 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:58:47.899499 kernel: alternatives: applying boot alternatives
Jan 29 11:58:47.899507 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 11:58:47.899514 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:58:47.899521 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:58:47.899528 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:58:47.899534 kernel: Fallback order for Node 0: 0
Jan 29 11:58:47.899541 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:58:47.899548 kernel: Policy zone: DMA
Jan 29 11:58:47.899618 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:58:47.899630 kernel: software IO TLB: area num 4.
Jan 29 11:58:47.899637 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:58:47.899645 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 29 11:58:47.899651 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:58:47.899658 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:58:47.899666 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:58:47.899672 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:58:47.899679 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:58:47.899686 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:58:47.899693 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:58:47.899700 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:58:47.899706 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:58:47.899715 kernel: GICv3: 256 SPIs implemented
Jan 29 11:58:47.899721 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:58:47.899728 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:58:47.899735 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:58:47.899742 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:58:47.899748 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:58:47.899755 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:58:47.899762 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:58:47.899769 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:58:47.899776 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:58:47.899782 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:58:47.899790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:58:47.899797 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:58:47.899804 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:58:47.899811 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:58:47.899818 kernel: arm-pv: using stolen time PV
Jan 29 11:58:47.899825 kernel: Console: colour dummy device 80x25
Jan 29 11:58:47.899832 kernel: ACPI: Core revision 20230628
Jan 29 11:58:47.899839 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:58:47.899846 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:58:47.899853 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:58:47.899861 kernel: landlock: Up and running.
Jan 29 11:58:47.899868 kernel: SELinux: Initializing.
Jan 29 11:58:47.899875 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:58:47.899882 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:58:47.899889 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:58:47.899896 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:58:47.899902 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:58:47.899910 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:58:47.899916 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:58:47.899925 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:58:47.899931 kernel: Remapping and enabling EFI services.
Jan 29 11:58:47.899938 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:58:47.899945 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:58:47.899952 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:58:47.899959 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:58:47.899966 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:58:47.899973 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:58:47.899980 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:58:47.899987 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:58:47.899995 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:58:47.900003 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:58:47.900014 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:58:47.900022 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:58:47.900029 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:58:47.900037 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:58:47.900044 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:58:47.900051 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:58:47.900059 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:58:47.900067 kernel: SMP: Total of 4 processors activated.
Jan 29 11:58:47.900075 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:58:47.900082 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:58:47.900089 kernel: CPU features: detected: Common not Private translations
Jan 29 11:58:47.900097 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:58:47.900104 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:58:47.900111 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:58:47.900118 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:58:47.900126 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:58:47.900134 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:58:47.900141 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:58:47.900148 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:58:47.900155 kernel: alternatives: applying system-wide alternatives
Jan 29 11:58:47.900162 kernel: devtmpfs: initialized
Jan 29 11:58:47.900170 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:58:47.900177 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:58:47.900184 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:58:47.900193 kernel: SMBIOS 3.0.0 present.
Jan 29 11:58:47.900200 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 29 11:58:47.900207 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:58:47.900214 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:58:47.900222 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:58:47.900229 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:58:47.900236 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:58:47.900244 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jan 29 11:58:47.900251 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:58:47.900259 kernel: cpuidle: using governor menu
Jan 29 11:58:47.900267 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:58:47.900274 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:58:47.900281 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:58:47.900292 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:58:47.900300 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:58:47.900307 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:58:47.900315 kernel: Modules: 509040 pages in range for PLT usage
Jan 29 11:58:47.900322 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:58:47.900331 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:58:47.900338 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:58:47.900345 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:58:47.900353 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:58:47.900360 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:58:47.900367 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:58:47.900374 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:58:47.900381 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:58:47.900389 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:58:47.900397 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:58:47.900404 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:58:47.900411 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:58:47.900418 kernel: ACPI: Interpreter enabled
Jan 29 11:58:47.900426 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:58:47.900433 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:58:47.900440 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:58:47.900447 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:58:47.900455 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:58:47.900599 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:58:47.900679 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:58:47.900746 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:58:47.900810 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:58:47.900873 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:58:47.900883 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:58:47.900890 kernel: PCI host bridge to bus 0000:00
Jan 29 11:58:47.900960 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:58:47.901019 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:58:47.901076 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:58:47.901138 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:58:47.901216 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:58:47.901300 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:58:47.901374 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:58:47.901443 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:58:47.901511 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:58:47.901615 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:58:47.901685 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:58:47.901750 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:58:47.901810 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:58:47.901869 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:58:47.901933 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:58:47.901943 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:58:47.901950 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:58:47.901958 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:58:47.901965 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:58:47.901972 kernel: iommu: Default domain type: Translated
Jan 29 11:58:47.901979 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:58:47.901987 kernel: efivars: Registered efivars operations
Jan 29 11:58:47.901996 kernel: vgaarb: loaded
Jan 29 11:58:47.902003 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:58:47.902010 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:58:47.902017 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:58:47.902025 kernel: pnp: PnP ACPI init
Jan 29 11:58:47.902096 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:58:47.902106 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:58:47.902113 kernel: NET: Registered PF_INET protocol family
Jan 29 11:58:47.902122 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:58:47.902130 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:58:47.902137 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:58:47.902145 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:58:47.902152 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:58:47.902159 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:58:47.902167 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:58:47.902174 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:58:47.902181 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:58:47.902190 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:58:47.902197 kernel: kvm [1]: HYP mode not available
Jan 29 11:58:47.902204 kernel: Initialise system trusted keyrings
Jan 29 11:58:47.902211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:58:47.902218 kernel: Key type asymmetric registered
Jan 29 11:58:47.902225 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:58:47.902233 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:58:47.902240 kernel: io scheduler mq-deadline registered
Jan 29 11:58:47.902247 kernel: io scheduler kyber registered
Jan 29 11:58:47.902256 kernel: io scheduler bfq registered
Jan 29 11:58:47.902263 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:58:47.902270 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:58:47.902278 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:58:47.902352 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:58:47.902363 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:58:47.902370 kernel: thunder_xcv, ver 1.0
Jan 29 11:58:47.902377 kernel: thunder_bgx, ver 1.0
Jan 29 11:58:47.902384 kernel: nicpf, ver 1.0
Jan 29 11:58:47.902394 kernel: nicvf, ver 1.0
Jan 29 11:58:47.902470 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:58:47.902532 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:58:47 UTC (1738151927)
Jan 29 11:58:47.902542 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:58:47.902560 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:58:47.902569 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:58:47.902579 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:58:47.902587 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:58:47.902599 kernel: Segment Routing with IPv6
Jan 29 11:58:47.902606 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:58:47.902613 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:58:47.902620 kernel: Key type dns_resolver registered
Jan 29 11:58:47.902627 kernel: registered taskstats version 1
Jan 29 11:58:47.902635 kernel: Loading compiled-in X.509 certificates
Jan 29 11:58:47.902642 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 29 11:58:47.902649 kernel: Key type .fscrypt registered
Jan 29 11:58:47.902656 kernel: Key type fscrypt-provisioning registered
Jan 29 11:58:47.902665 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:58:47.902673 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:58:47.902680 kernel: ima: No architecture policies found
Jan 29 11:58:47.902687 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:58:47.902694 kernel: clk: Disabling unused clocks
Jan 29 11:58:47.902701 kernel: Freeing unused kernel memory: 39360K
Jan 29 11:58:47.902709 kernel: Run /init as init process
Jan 29 11:58:47.902716 kernel: with arguments:
Jan 29 11:58:47.902723 kernel: /init
Jan 29 11:58:47.902731 kernel: with environment:
Jan 29 11:58:47.902738 kernel: HOME=/
Jan 29 11:58:47.902745 kernel: TERM=linux
Jan 29 11:58:47.902752 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:58:47.902761 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:58:47.902770 systemd[1]: Detected virtualization kvm.
Jan 29 11:58:47.902778 systemd[1]: Detected architecture arm64.
Jan 29 11:58:47.902786 systemd[1]: Running in initrd.
Jan 29 11:58:47.902794 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:58:47.902802 systemd[1]: Hostname set to .
Jan 29 11:58:47.902810 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:58:47.902818 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:58:47.902826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:58:47.902834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:58:47.902842 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:58:47.902850 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:58:47.902859 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:58:47.902867 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:58:47.902876 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:58:47.902884 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:58:47.902892 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:58:47.902900 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:58:47.902908 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:58:47.902917 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:58:47.902924 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:58:47.902932 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:58:47.902940 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:58:47.902948 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:58:47.902956 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:58:47.902964 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:58:47.902971 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:58:47.902980 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:58:47.902988 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:58:47.902996 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:58:47.903004 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:58:47.903011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:58:47.903019 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:58:47.903027 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:58:47.903035 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:58:47.903042 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:58:47.903052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:58:47.903059 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:58:47.903067 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:58:47.903075 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:58:47.903100 systemd-journald[237]: Collecting audit messages is disabled.
Jan 29 11:58:47.903120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:58:47.903129 systemd-journald[237]: Journal started
Jan 29 11:58:47.903148 systemd-journald[237]: Runtime Journal (/run/log/journal/c51021e3670d4dd797dde8b0ae1d9e2c) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:58:47.894434 systemd-modules-load[239]: Inserted module 'overlay'
Jan 29 11:58:47.905780 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:58:47.906638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:58:47.910669 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:58:47.911196 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:58:47.914986 kernel: Bridge firewalling registered
Jan 29 11:58:47.913902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:58:47.914002 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 29 11:58:47.915238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:58:47.919472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:58:47.921204 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:58:47.935710 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:58:47.937402 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:58:47.938955 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:58:47.946891 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:58:47.957678 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:58:47.958769 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:58:47.961342 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:58:47.973739 dracut-cmdline[280]: dracut-dracut-053
Jan 29 11:58:47.976154 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 11:58:47.987650 systemd-resolved[276]: Positive Trust Anchors:
Jan 29 11:58:47.987667 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:58:47.987699 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:58:47.992298 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jan 29 11:58:47.993191 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:58:47.996533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:58:48.040581 kernel: SCSI subsystem initialized
Jan 29 11:58:48.044578 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:58:48.052588 kernel: iscsi: registered transport (tcp)
Jan 29 11:58:48.065579 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:58:48.065601 kernel: QLogic iSCSI HBA Driver
Jan 29 11:58:48.106963 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:58:48.116665 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:58:48.135805 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:58:48.137472 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:58:48.137498 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:58:48.182592 kernel: raid6: neonx8 gen() 15754 MB/s
Jan 29 11:58:48.199591 kernel: raid6: neonx4 gen() 15609 MB/s
Jan 29 11:58:48.216597 kernel: raid6: neonx2 gen() 13255 MB/s
Jan 29 11:58:48.233586 kernel: raid6: neonx1 gen() 10450 MB/s
Jan 29 11:58:48.250586 kernel: raid6: int64x8 gen() 6940 MB/s
Jan 29 11:58:48.267586 kernel: raid6: int64x4 gen() 7333 MB/s
Jan 29 11:58:48.284586 kernel: raid6: int64x2 gen() 6114 MB/s
Jan 29 11:58:48.301784 kernel: raid6: int64x1 gen() 5040 MB/s
Jan 29 11:58:48.301799 kernel: raid6: using algorithm neonx8 gen() 15754 MB/s
Jan 29 11:58:48.319639 kernel: raid6: .... xor() 11901 MB/s, rmw enabled
Jan 29 11:58:48.319671 kernel: raid6: using neon recovery algorithm
Jan 29 11:58:48.324949 kernel: xor: measuring software checksum speed
Jan 29 11:58:48.324970 kernel: 8regs : 19030 MB/sec
Jan 29 11:58:48.325620 kernel: 32regs : 19631 MB/sec
Jan 29 11:58:48.326847 kernel: arm64_neon : 26874 MB/sec
Jan 29 11:58:48.326879 kernel: xor: using function: arm64_neon (26874 MB/sec)
Jan 29 11:58:48.376591 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:58:48.386547 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:58:48.401694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:58:48.412849 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 29 11:58:48.415965 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:58:48.418565 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:58:48.433584 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jan 29 11:58:48.458531 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:58:48.472697 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:58:48.509804 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:58:48.516726 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:58:48.527628 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:58:48.529339 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:58:48.531622 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:58:48.534212 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:58:48.541860 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:58:48.553751 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:58:48.557571 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:58:48.566987 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:58:48.567091 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:58:48.567102 kernel: GPT:9289727 != 19775487
Jan 29 11:58:48.567111 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:58:48.567120 kernel: GPT:9289727 != 19775487
Jan 29 11:58:48.567128 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:58:48.567137 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:58:48.575027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:58:48.575140 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:58:48.581005 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (505)
Jan 29 11:58:48.578702 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:58:48.584620 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (510)
Jan 29 11:58:48.584727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:58:48.584882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:58:48.587065 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:58:48.594830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:58:48.602195 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:58:48.607580 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:58:48.615624 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:58:48.620213 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:58:48.624029 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:58:48.625221 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:58:48.638683 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:58:48.640367 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:58:48.645460 disk-uuid[550]: Primary Header is updated.
Jan 29 11:58:48.645460 disk-uuid[550]: Secondary Entries is updated.
Jan 29 11:58:48.645460 disk-uuid[550]: Secondary Header is updated.
Jan 29 11:58:48.649575 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:58:48.661680 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:58:49.660581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:58:49.660636 disk-uuid[553]: The operation has completed successfully.
Jan 29 11:58:49.679992 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:58:49.680088 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:58:49.701716 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:58:49.704425 sh[575]: Success
Jan 29 11:58:49.719597 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:58:49.746069 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:58:49.756843 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:58:49.758254 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:58:49.768766 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 29 11:58:49.768803 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:58:49.768813 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:58:49.770638 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:58:49.770653 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:58:49.774507 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:58:49.775846 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:58:49.788704 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:58:49.790340 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:58:49.798220 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:49.798275 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:58:49.798287 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:58:49.801573 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:58:49.808488 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:58:49.810329 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:49.815798 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:58:49.822807 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:58:49.883813 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:58:49.893730 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:58:49.915229 systemd-networkd[766]: lo: Link UP
Jan 29 11:58:49.915249 systemd-networkd[766]: lo: Gained carrier
Jan 29 11:58:49.916182 systemd-networkd[766]: Enumeration completed
Jan 29 11:58:49.916284 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:58:49.916771 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:58:49.916774 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:58:49.917678 systemd[1]: Reached target network.target - Network.
Jan 29 11:58:49.923053 ignition[668]: Ignition 2.19.0
Jan 29 11:58:49.917972 systemd-networkd[766]: eth0: Link UP
Jan 29 11:58:49.923059 ignition[668]: Stage: fetch-offline
Jan 29 11:58:49.917976 systemd-networkd[766]: eth0: Gained carrier
Jan 29 11:58:49.923092 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:49.917982 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:58:49.923101 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:58:49.923271 ignition[668]: parsed url from cmdline: ""
Jan 29 11:58:49.923274 ignition[668]: no config URL provided
Jan 29 11:58:49.923279 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:58:49.923286 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:58:49.923308 ignition[668]: op(1): [started] loading QEMU firmware config module
Jan 29 11:58:49.923313 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:58:49.930212 ignition[668]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:58:49.941609 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:58:49.971968 ignition[668]: parsing config with SHA512: af37974cecfe84098b3ea565b47783937c171aeaab60da4a62413d192962030b3696da83b16b1548f2c169ed2b1aae8d57bd08ee72cd286f064509338b1ec962
Jan 29 11:58:49.975931 unknown[668]: fetched base config from "system"
Jan 29 11:58:49.975941 unknown[668]: fetched user config from "qemu"
Jan 29 11:58:49.977472 ignition[668]: fetch-offline: fetch-offline passed
Jan 29 11:58:49.977577 ignition[668]: Ignition finished successfully
Jan 29 11:58:49.978918 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:58:49.980431 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:58:49.987697 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:58:49.997709 ignition[773]: Ignition 2.19.0
Jan 29 11:58:49.997718 ignition[773]: Stage: kargs
Jan 29 11:58:49.997882 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:49.997891 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:58:50.001038 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:58:49.998733 ignition[773]: kargs: kargs passed
Jan 29 11:58:49.998774 ignition[773]: Ignition finished successfully
Jan 29 11:58:50.004725 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:58:50.016270 ignition[781]: Ignition 2.19.0
Jan 29 11:58:50.016281 ignition[781]: Stage: disks
Jan 29 11:58:50.016440 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:50.016454 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:58:50.018875 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:58:50.017322 ignition[781]: disks: disks passed
Jan 29 11:58:50.017362 ignition[781]: Ignition finished successfully
Jan 29 11:58:50.021544 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:58:50.022922 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:58:50.024645 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:58:50.026422 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:58:50.028368 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:58:50.040686 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:58:50.049678 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:58:50.053337 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:58:50.055343 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:58:50.103565 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 29 11:58:50.104213 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:58:50.105425 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:58:50.120630 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:58:50.123529 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:58:50.124607 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:58:50.124647 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:58:50.131876 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Jan 29 11:58:50.131898 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:50.124669 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:58:50.136420 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:58:50.136442 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:58:50.128981 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:58:50.130623 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:58:50.139384 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:58:50.140607 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:58:50.179795 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:58:50.183852 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:58:50.187563 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:58:50.191546 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:58:50.256185 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:58:50.272673 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:58:50.274747 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:58:50.278585 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:50.295087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:58:50.299218 ignition[914]: INFO : Ignition 2.19.0
Jan 29 11:58:50.299218 ignition[914]: INFO : Stage: mount
Jan 29 11:58:50.300741 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:50.300741 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:58:50.300741 ignition[914]: INFO : mount: mount passed
Jan 29 11:58:50.300741 ignition[914]: INFO : Ignition finished successfully
Jan 29 11:58:50.301950 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:58:50.314666 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:58:50.767781 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:58:50.776761 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:58:50.783493 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Jan 29 11:58:50.783521 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:50.783532 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:58:50.785220 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:58:50.787567 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:58:50.788363 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:58:50.805303 ignition[944]: INFO : Ignition 2.19.0
Jan 29 11:58:50.805303 ignition[944]: INFO : Stage: files
Jan 29 11:58:50.805303 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:50.805303 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:58:50.805303 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:58:50.810212 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:58:50.810212 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:58:50.810212 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:58:50.810212 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:58:50.810212 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:58:50.810212 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:58:50.810212 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 11:58:50.808892 unknown[944]: wrote ssh authorized keys file for user: core
Jan 29 11:58:50.858237 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:58:51.402754 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:58:51.402754 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:58:51.406453 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 29 11:58:51.757763 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 11:58:51.817873 systemd-networkd[766]: eth0: Gained IPv6LL
Jan 29 11:58:52.279517 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:58:52.279517 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 11:58:52.283230 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:58:52.283230 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:58:52.283230 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 11:58:52.283230 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 29 11:58:52.283230 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:58:52.283230 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:58:52.283230 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 29 11:58:52.283230 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:58:52.303235 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:58:52.306821 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:58:52.309321 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:58:52.309321 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:58:52.309321 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:58:52.309321 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:58:52.309321 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:58:52.309321 ignition[944]: INFO : files: files passed
Jan 29 11:58:52.309321 ignition[944]: INFO : Ignition finished successfully
Jan 29 11:58:52.310969 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:58:52.319699 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:58:52.321357 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:58:52.322820 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:58:52.322904 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:58:52.328757 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:58:52.330772 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:58:52.330772 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:58:52.333799 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:58:52.333409 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:58:52.335850 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:58:52.342693 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:58:52.360462 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:58:52.360593 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:58:52.362863 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:58:52.364696 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:58:52.366460 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:58:52.367235 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:58:52.382523 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:58:52.384878 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:58:52.395868 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:58:52.397108 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:58:52.399134 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:58:52.400885 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:58:52.400997 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:58:52.403516 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:58:52.405646 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:58:52.407363 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:58:52.409045 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:58:52.410956 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:58:52.412869 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:58:52.414664 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:58:52.416632 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:58:52.418625 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:58:52.420385 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:58:52.421900 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:58:52.422023 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:58:52.424326 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:58:52.425492 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:52.427411 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:58:52.428277 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:52.429508 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:58:52.429633 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:58:52.432292 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:58:52.432444 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:58:52.434843 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:58:52.436275 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:58:52.437032 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:52.438359 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:58:52.439983 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:58:52.441692 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:58:52.441817 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:58:52.443278 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:58:52.443399 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:58:52.445060 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:58:52.445224 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:58:52.447391 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:58:52.447531 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:58:52.454760 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:58:52.456234 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:58:52.456413 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:52.461817 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:58:52.462931 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:58:52.463113 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:52.468631 ignition[998]: INFO : Ignition 2.19.0 Jan 29 11:58:52.468631 ignition[998]: INFO : Stage: umount Jan 29 11:58:52.468631 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:52.468631 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:58:52.468631 ignition[998]: INFO : umount: umount passed Jan 29 11:58:52.468631 ignition[998]: INFO : Ignition finished successfully Jan 29 11:58:52.465993 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:58:52.466144 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:58:52.471110 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:58:52.472574 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:58:52.475875 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:58:52.476384 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:58:52.476463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 29 11:58:52.480014 systemd[1]: Stopped target network.target - Network. Jan 29 11:58:52.481406 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:58:52.481467 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:58:52.483357 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:58:52.483405 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:58:52.485741 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:58:52.485783 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:58:52.487606 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:58:52.487651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:58:52.489492 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:58:52.491196 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:58:52.498487 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:58:52.498610 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:58:52.501609 systemd-networkd[766]: eth0: DHCPv6 lease lost Jan 29 11:58:52.502715 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:58:52.502763 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:52.505079 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:58:52.505184 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:58:52.507010 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:58:52.507067 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:52.526665 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:58:52.527540 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:58:52.527629 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:58:52.529658 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:58:52.529706 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:52.531682 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:58:52.531729 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:52.533953 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:52.537508 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:58:52.537602 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:58:52.542329 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:58:52.542372 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:58:52.546147 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:58:52.546303 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:52.548641 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:58:52.548733 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:58:52.550141 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:58:52.550210 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 29 11:58:52.551755 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:58:52.551789 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:52.553442 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:58:52.553484 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:58:52.556143 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:58:52.556195 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:58:52.558853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:58:52.558897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:52.571690 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:58:52.572726 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:58:52.572787 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:52.574894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:58:52.574939 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:52.577074 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:58:52.577148 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:58:52.579248 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:58:52.581319 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:58:52.590106 systemd[1]: Switching root. Jan 29 11:58:52.620518 systemd-journald[237]: Journal stopped Jan 29 11:58:53.319034 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 29 11:58:53.319086 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:58:53.319099 kernel: SELinux: policy capability open_perms=1 Jan 29 11:58:53.319112 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:58:53.319121 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:58:53.319130 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:58:53.319141 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:58:53.319160 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:58:53.319171 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:58:53.319181 kernel: audit: type=1403 audit(1738151932.757:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:58:53.319192 systemd[1]: Successfully loaded SELinux policy in 31.272ms. Jan 29 11:58:53.319213 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.270ms. Jan 29 11:58:53.319226 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:58:53.319237 systemd[1]: Detected virtualization kvm. Jan 29 11:58:53.319248 systemd[1]: Detected architecture arm64. Jan 29 11:58:53.319258 systemd[1]: Detected first boot. Jan 29 11:58:53.319269 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:58:53.319280 zram_generator::config[1044]: No configuration found. Jan 29 11:58:53.319292 systemd[1]: Populated /etc with preset unit settings. 
Jan 29 11:58:53.319302 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:58:53.319314 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:58:53.319327 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:58:53.319338 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:58:53.319348 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:58:53.319359 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:58:53.319369 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:58:53.319380 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:58:53.319390 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:58:53.319401 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:58:53.319413 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:58:53.319424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:53.319435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:53.319446 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:58:53.319456 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:58:53.319467 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:58:53.319478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:58:53.319488 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:58:53.319500 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:53.319511 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:58:53.319521 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:58:53.319532 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:58:53.319543 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:58:53.319629 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:53.319642 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:58:53.319653 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:58:53.319666 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:58:53.319676 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:58:53.319687 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:58:53.319697 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:53.319708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:58:53.319718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:53.319729 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:58:53.319739 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 29 11:58:53.319749 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:58:53.319761 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:58:53.319772 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:58:53.319830 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:58:53.319844 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:58:53.319856 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:58:53.319866 systemd[1]: Reached target machines.target - Containers. Jan 29 11:58:53.319877 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:58:53.319887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:53.319898 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:58:53.319914 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:58:53.319925 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:53.319935 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:58:53.319946 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:53.319956 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:58:53.319967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:53.319978 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:58:53.319988 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:58:53.320000 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:58:53.320011 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:58:53.320022 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:58:53.320032 kernel: loop: module loaded Jan 29 11:58:53.320042 kernel: fuse: init (API version 7.39) Jan 29 11:58:53.320052 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:58:53.320062 kernel: ACPI: bus type drm_connector registered Jan 29 11:58:53.320072 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:58:53.320082 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:58:53.320094 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:58:53.320105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:58:53.320139 systemd-journald[1118]: Collecting audit messages is disabled. Jan 29 11:58:53.320171 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:58:53.320185 systemd[1]: Stopped verity-setup.service. Jan 29 11:58:53.320196 systemd-journald[1118]: Journal started Jan 29 11:58:53.320219 systemd-journald[1118]: Runtime Journal (/run/log/journal/c51021e3670d4dd797dde8b0ae1d9e2c) is 5.9M, max 47.3M, 41.4M free. Jan 29 11:58:53.110259 systemd[1]: Queued start job for default target multi-user.target. 
Jan 29 11:58:53.134733 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:58:53.135080 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:58:53.324623 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:58:53.325243 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:58:53.326407 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:58:53.327654 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:58:53.328717 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:58:53.329916 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:58:53.331146 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:58:53.333591 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:58:53.335005 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:53.336481 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:58:53.336636 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:58:53.339895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:53.340049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:53.341457 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:58:53.341628 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:58:53.343034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:53.343192 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:53.344673 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:58:53.344811 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:58:53.346116 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:53.346268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:53.347845 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:53.349222 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:58:53.350749 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:58:53.363142 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:58:53.377659 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:58:53.379871 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:58:53.381020 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:58:53.381061 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:58:53.383032 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:58:53.385298 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:58:53.387444 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:58:53.388637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 11:58:53.389927 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:58:53.394746 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:58:53.395957 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:53.396844 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:58:53.398324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:58:53.399415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:58:53.401704 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:58:53.410312 systemd-journald[1118]: Time spent on flushing to /var/log/journal/c51021e3670d4dd797dde8b0ae1d9e2c is 26.672ms for 853 entries. Jan 29 11:58:53.410312 systemd-journald[1118]: System Journal (/var/log/journal/c51021e3670d4dd797dde8b0ae1d9e2c) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:58:53.446253 systemd-journald[1118]: Received client request to flush runtime journal. Jan 29 11:58:53.446405 kernel: loop0: detected capacity change from 0 to 114432 Jan 29 11:58:53.446562 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:58:53.411704 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:58:53.414654 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:53.416233 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:58:53.417818 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:58:53.420069 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:58:53.423950 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:58:53.429075 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:58:53.437798 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:58:53.445254 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:58:53.450473 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:58:53.452455 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:53.459903 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:58:53.466858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:58:53.468843 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:58:53.470187 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:58:53.472579 kernel: loop1: detected capacity change from 0 to 189592 Jan 29 11:58:53.473496 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:58:53.490353 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 29 11:58:53.490373 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 29 11:58:53.496172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 11:58:53.501620 kernel: loop2: detected capacity change from 0 to 114328 Jan 29 11:58:53.537894 kernel: loop3: detected capacity change from 0 to 114432 Jan 29 11:58:53.541940 kernel: loop4: detected capacity change from 0 to 189592 Jan 29 11:58:53.547581 kernel: loop5: detected capacity change from 0 to 114328 Jan 29 11:58:53.550755 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:58:53.551115 (sd-merge)[1180]: Merged extensions into '/usr'. Jan 29 11:58:53.554103 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:58:53.554117 systemd[1]: Reloading... Jan 29 11:58:53.600099 zram_generator::config[1206]: No configuration found. Jan 29 11:58:53.666900 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:58:53.691988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:53.727258 systemd[1]: Reloading finished in 172 ms. Jan 29 11:58:53.762714 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:58:53.764301 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:58:53.782744 systemd[1]: Starting ensure-sysext.service... Jan 29 11:58:53.784757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:58:53.794755 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:58:53.794777 systemd[1]: Reloading... Jan 29 11:58:53.808112 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:58:53.808381 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:58:53.809028 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:58:53.809254 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 29 11:58:53.809307 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 29 11:58:53.811703 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:58:53.811716 systemd-tmpfiles[1241]: Skipping /boot Jan 29 11:58:53.821670 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:58:53.821686 systemd-tmpfiles[1241]: Skipping /boot Jan 29 11:58:53.837583 zram_generator::config[1271]: No configuration found. Jan 29 11:58:53.920739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:53.956215 systemd[1]: Reloading finished in 161 ms. Jan 29 11:58:53.971598 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:58:53.983906 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:53.991508 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:58:53.993843 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
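The "(sd-merge)" entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr; the kubernetes image is the one Ignition linked into /etc/extensions earlier in this boot. The sketch below only illustrates the discovery side of that step; the search directories are recalled from the systemd-sysext documentation rather than taken from this log, so the list is an assumption.

# Illustrative only: enumerate directories systemd-sysext is documented to
# scan for extension images and report any .raw images found, mirroring the
# "(sd-merge)" lines above. Paths on a given host may differ.
from pathlib import Path

SEARCH_DIRS = [
    "/etc/extensions",       # where Ignition linked kubernetes.raw above
    "/run/extensions",
    "/var/lib/extensions",
    "/usr/lib/extensions",
]

def candidate_extensions():
    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for img in sorted(p.glob("*.raw")):
            yield img.stem, img.resolve()

if __name__ == "__main__":
    for name, path in candidate_extensions():
        print(f"extension {name!r} -> {path}")

The reload requested by systemd-sysext right after the merge lets the service manager see anything the merged images contribute under /usr.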
Jan 29 11:58:53.996072 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:58:54.000840 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:58:54.003852 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:54.005934 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:58:54.012248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:54.013220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:54.016483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:54.022832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:54.026776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:54.030295 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:58:54.032293 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:58:54.034120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:54.034270 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:54.035878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:54.035991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:54.044861 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:54.045670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:54.049366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:54.050800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:54.056582 augenrules[1332]: No rules Jan 29 11:58:54.054754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:54.056127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:54.057289 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:58:54.060374 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:58:54.062922 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Jan 29 11:58:54.068307 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:58:54.071485 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:58:54.073303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:54.073429 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:54.075123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:54.075261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:54.080834 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:58:54.082395 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:54.084208 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 29 11:58:54.096957 systemd[1]: Finished ensure-sysext.service. Jan 29 11:58:54.103321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:54.104403 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:54.108081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:58:54.110814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:54.116717 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:54.117933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:54.123578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1350) Jan 29 11:58:54.123860 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:58:54.130275 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:58:54.132711 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:58:54.133180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:54.133308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:54.134920 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:58:54.136689 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:58:54.138088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:54.138235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:54.141099 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:54.141271 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:54.161623 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 11:58:54.162425 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:54.162479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:58:54.166641 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:58:54.177705 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:58:54.188659 systemd-resolved[1308]: Positive Trust Anchors: Jan 29 11:58:54.188678 systemd-resolved[1308]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:58:54.188710 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:58:54.206690 systemd-resolved[1308]: Defaulting to hostname 'linux'. Jan 29 11:58:54.207851 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:58:54.209391 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:58:54.211956 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:54.213603 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:58:54.215101 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:58:54.222172 systemd-networkd[1379]: lo: Link UP Jan 29 11:58:54.222396 systemd-networkd[1379]: lo: Gained carrier Jan 29 11:58:54.223151 systemd-networkd[1379]: Enumeration completed Jan 29 11:58:54.223283 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:58:54.223967 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:54.224045 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:58:54.224497 systemd[1]: Reached target network.target - Network. Jan 29 11:58:54.224833 systemd-networkd[1379]: eth0: Link UP Jan 29 11:58:54.224891 systemd-networkd[1379]: eth0: Gained carrier Jan 29 11:58:54.224950 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:54.230754 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:58:54.240621 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:58:54.241536 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Jan 29 11:58:54.242668 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:58:54.242723 systemd-timesyncd[1380]: Initial clock synchronization to Wed 2025-01-29 11:58:53.980878 UTC. Jan 29 11:58:54.266776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:54.276895 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:58:54.279346 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:58:54.303653 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:54.311163 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:58:54.343641 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:58:54.345039 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
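The DHCPv4 lease logged above (address 10.0.0.67/16, gateway 10.0.0.1, both obtained from 10.0.0.1) can be sanity-checked with the standard-library ipaddress module. This is only a quick verification of the on-link relationship, not anything the boot itself runs.

# Check that the logged gateway sits inside the leased /16.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.67/16")
gw = ipaddress.ip_address("10.0.0.1")

print(iface.network)           # 10.0.0.0/16
print(iface.network.netmask)   # 255.255.0.0
print(gw in iface.network)     # True: gateway is on-link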
Jan 29 11:58:54.346171 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:58:54.347306 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:58:54.348536 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:58:54.349905 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:58:54.351079 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:58:54.352309 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:58:54.353519 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:58:54.353567 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:58:54.354422 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:58:54.356029 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:58:54.358294 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:58:54.372452 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:58:54.374522 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:58:54.376058 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:58:54.377212 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:58:54.378273 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:58:54.379229 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:58:54.379262 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:58:54.380069 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:58:54.381829 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:58:54.382137 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:58:54.384650 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:58:54.386586 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:58:54.387949 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:58:54.389788 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:58:54.394292 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:58:54.396698 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:58:54.402764 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:58:54.407744 jq[1411]: false Jan 29 11:58:54.407832 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 29 11:58:54.409859 extend-filesystems[1412]: Found loop3 Jan 29 11:58:54.412710 extend-filesystems[1412]: Found loop4 Jan 29 11:58:54.412710 extend-filesystems[1412]: Found loop5 Jan 29 11:58:54.412710 extend-filesystems[1412]: Found vda Jan 29 11:58:54.412710 extend-filesystems[1412]: Found vda1 Jan 29 11:58:54.412710 extend-filesystems[1412]: Found vda2 Jan 29 11:58:54.412710 extend-filesystems[1412]: Found vda3 Jan 29 11:58:54.412710 extend-filesystems[1412]: Found usr Jan 29 11:58:54.412710 extend-filesystems[1412]: Found vda4 Jan 29 11:58:54.412710 extend-filesystems[1412]: Found vda6 Jan 29 11:58:54.412710 extend-filesystems[1412]: Found vda7 Jan 29 11:58:54.412710 extend-filesystems[1412]: Found vda9 Jan 29 11:58:54.412710 extend-filesystems[1412]: Checking size of /dev/vda9 Jan 29 11:58:54.429959 dbus-daemon[1410]: [system] SELinux support is enabled Jan 29 11:58:54.437161 extend-filesystems[1412]: Resized partition /dev/vda9 Jan 29 11:58:54.417339 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:58:54.417758 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:58:54.443880 jq[1429]: true Jan 29 11:58:54.418414 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:58:54.423688 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:58:54.427981 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:58:54.430781 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:58:54.438360 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:58:54.438500 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:58:54.438847 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:58:54.439062 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:58:54.444607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1366) Jan 29 11:58:54.444698 extend-filesystems[1434]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:58:54.452877 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:58:54.453073 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:58:54.459118 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:58:54.474914 tar[1436]: linux-arm64/helm Jan 29 11:58:54.476172 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:58:54.476202 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:58:54.477602 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:58:54.477621 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
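The ext4 resize recorded above grows /dev/vda9 from 553472 to 1864699 blocks; at the 4 KiB block size shown in the resize summary that follows ("(4k) blocks"), that is roughly a 2.1 GiB to 7.1 GiB expansion. A two-line arithmetic check:

# Convert the logged block counts to GiB, assuming the 4 KiB block size
# reported by the resize output.
BLOCK_SIZE = 4096
for label, blocks in (("before", 553_472), ("after", 1_864_699)):
    print(f"{label}: {blocks * BLOCK_SIZE / 2**30:.2f} GiB")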
Jan 29 11:58:54.477885 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:58:54.479956 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:58:54.480531 systemd-logind[1420]: New seat seat0. Jan 29 11:58:54.482290 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:58:54.487174 jq[1438]: true Jan 29 11:58:54.491587 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:58:54.492681 update_engine[1427]: I20250129 11:58:54.492462 1427 main.cc:92] Flatcar Update Engine starting Jan 29 11:58:54.497702 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:58:54.500972 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:58:54.502373 update_engine[1427]: I20250129 11:58:54.497874 1427 update_check_scheduler.cc:74] Next update check in 5m33s Jan 29 11:58:54.504576 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:58:54.504576 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:58:54.504576 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:58:54.503962 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:58:54.514773 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Jan 29 11:58:54.504640 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:58:54.559523 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:58:54.563590 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:58:54.565455 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:58:54.588726 locksmithd[1452]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:58:54.680859 containerd[1439]: time="2025-01-29T11:58:54.680665240Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:58:54.709609 containerd[1439]: time="2025-01-29T11:58:54.708658680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:54.710323 containerd[1439]: time="2025-01-29T11:58:54.710281080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:54.710323 containerd[1439]: time="2025-01-29T11:58:54.710317600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:58:54.710384 containerd[1439]: time="2025-01-29T11:58:54.710332840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:58:54.710493 containerd[1439]: time="2025-01-29T11:58:54.710472960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:58:54.710584 containerd[1439]: time="2025-01-29T11:58:54.710548400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:58:54.710653 containerd[1439]: time="2025-01-29T11:58:54.710633600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:54.710678 containerd[1439]: time="2025-01-29T11:58:54.710651840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:54.710892 containerd[1439]: time="2025-01-29T11:58:54.710866560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:54.710932 containerd[1439]: time="2025-01-29T11:58:54.710891600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:54.710932 containerd[1439]: time="2025-01-29T11:58:54.710905640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:54.710932 containerd[1439]: time="2025-01-29T11:58:54.710914880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:54.711010 containerd[1439]: time="2025-01-29T11:58:54.710990960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:54.711356 containerd[1439]: time="2025-01-29T11:58:54.711275880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:54.711475 containerd[1439]: time="2025-01-29T11:58:54.711453480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:54.711509 containerd[1439]: time="2025-01-29T11:58:54.711473680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:58:54.711647 containerd[1439]: time="2025-01-29T11:58:54.711625240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:58:54.711703 containerd[1439]: time="2025-01-29T11:58:54.711686920Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:58:54.715316 containerd[1439]: time="2025-01-29T11:58:54.715278600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:58:54.715371 containerd[1439]: time="2025-01-29T11:58:54.715339240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:58:54.715371 containerd[1439]: time="2025-01-29T11:58:54.715356160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:58:54.715371 containerd[1439]: time="2025-01-29T11:58:54.715369920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:58:54.715439 containerd[1439]: time="2025-01-29T11:58:54.715383200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 29 11:58:54.715607 containerd[1439]: time="2025-01-29T11:58:54.715584280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:58:54.715999 containerd[1439]: time="2025-01-29T11:58:54.715928120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716092480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716125600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716141600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716156040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716178320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716192840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716206320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716223760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716235880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716247320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716259040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716277680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716290080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716476 containerd[1439]: time="2025-01-29T11:58:54.716304360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716315840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716328400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716341280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716353200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716365960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716381400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716397080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716408680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716421120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716432440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716449880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716473240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716485000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.716763 containerd[1439]: time="2025-01-29T11:58:54.716497200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:58:54.717799 containerd[1439]: time="2025-01-29T11:58:54.717542680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:58:54.717835 containerd[1439]: time="2025-01-29T11:58:54.717810560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:58:54.717835 containerd[1439]: time="2025-01-29T11:58:54.717825040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:58:54.717871 containerd[1439]: time="2025-01-29T11:58:54.717836120Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:58:54.717871 containerd[1439]: time="2025-01-29T11:58:54.717848560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:58:54.717871 containerd[1439]: time="2025-01-29T11:58:54.717864360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:58:54.717933 containerd[1439]: time="2025-01-29T11:58:54.717874400Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:58:54.717933 containerd[1439]: time="2025-01-29T11:58:54.717885960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:58:54.718496 containerd[1439]: time="2025-01-29T11:58:54.718311200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:58:54.718496 containerd[1439]: time="2025-01-29T11:58:54.718379680Z" level=info msg="Connect containerd service" Jan 29 11:58:54.718496 containerd[1439]: time="2025-01-29T11:58:54.718406000Z" level=info msg="using legacy CRI server" Jan 29 11:58:54.718496 containerd[1439]: time="2025-01-29T11:58:54.718412200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:58:54.718496 containerd[1439]: time="2025-01-29T11:58:54.718490160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:58:54.719173 containerd[1439]: time="2025-01-29T11:58:54.719111400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:58:54.719459 
containerd[1439]: time="2025-01-29T11:58:54.719310160Z" level=info msg="Start subscribing containerd event" Jan 29 11:58:54.719459 containerd[1439]: time="2025-01-29T11:58:54.719349040Z" level=info msg="Start recovering state" Jan 29 11:58:54.719459 containerd[1439]: time="2025-01-29T11:58:54.719402680Z" level=info msg="Start event monitor" Jan 29 11:58:54.719459 containerd[1439]: time="2025-01-29T11:58:54.719412760Z" level=info msg="Start snapshots syncer" Jan 29 11:58:54.719459 containerd[1439]: time="2025-01-29T11:58:54.719420960Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:58:54.719459 containerd[1439]: time="2025-01-29T11:58:54.719427520Z" level=info msg="Start streaming server" Jan 29 11:58:54.720105 containerd[1439]: time="2025-01-29T11:58:54.720013080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:58:54.720105 containerd[1439]: time="2025-01-29T11:58:54.720060240Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:58:54.720105 containerd[1439]: time="2025-01-29T11:58:54.720103800Z" level=info msg="containerd successfully booted in 0.040793s" Jan 29 11:58:54.720175 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:58:54.836909 tar[1436]: linux-arm64/LICENSE Jan 29 11:58:54.836909 tar[1436]: linux-arm64/README.md Jan 29 11:58:54.849840 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:58:55.179304 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:58:55.197009 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:58:55.206807 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:58:55.211533 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:58:55.211721 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:58:55.216084 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:58:55.225869 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:58:55.228322 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:58:55.231630 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:58:55.232830 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:58:56.041712 systemd-networkd[1379]: eth0: Gained IPv6LL Jan 29 11:58:56.044300 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:58:56.046362 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:58:56.057811 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:58:56.060014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:58:56.061923 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:58:56.075856 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:58:56.076796 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:58:56.078310 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:58:56.079469 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:58:56.548146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:58:56.549662 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 29 11:58:56.551249 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:58:56.554613 systemd[1]: Startup finished in 557ms (kernel) + 5.049s (initrd) + 3.834s (userspace) = 9.441s. Jan 29 11:58:56.979453 kubelet[1524]: E0129 11:58:56.979344 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:58:56.981776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:58:56.981917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:00.249227 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:59:00.250317 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:36068.service - OpenSSH per-connection server daemon (10.0.0.1:36068). Jan 29 11:59:00.300572 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 36068 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:59:00.302256 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:00.309272 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:59:00.318788 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:59:00.320491 systemd-logind[1420]: New session 1 of user core. Jan 29 11:59:00.328586 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:59:00.330628 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:59:00.336685 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:59:00.404262 systemd[1541]: Queued start job for default target default.target. Jan 29 11:59:00.422485 systemd[1541]: Created slice app.slice - User Application Slice. Jan 29 11:59:00.422528 systemd[1541]: Reached target paths.target - Paths. Jan 29 11:59:00.422547 systemd[1541]: Reached target timers.target - Timers. Jan 29 11:59:00.423746 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:59:00.433121 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:59:00.433179 systemd[1541]: Reached target sockets.target - Sockets. Jan 29 11:59:00.433191 systemd[1541]: Reached target basic.target - Basic System. Jan 29 11:59:00.433225 systemd[1541]: Reached target default.target - Main User Target. Jan 29 11:59:00.433249 systemd[1541]: Startup finished in 91ms. Jan 29 11:59:00.433475 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:59:00.434896 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:59:00.493891 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:36076.service - OpenSSH per-connection server daemon (10.0.0.1:36076). Jan 29 11:59:00.530776 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 36076 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:59:00.532062 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:00.536152 systemd-logind[1420]: New session 2 of user core. Jan 29 11:59:00.545699 systemd[1]: Started session-2.scope - Session 2 of User core. 
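The kubelet exit recorded just above is the first pass of a pattern that repeats further down in this log: the unit starts, run.go aborts because /var/lib/kubelet/config.yaml has not been written yet, systemd marks the unit failed, and later restart jobs are scheduled (the counter reaches 2 below). A minimal, illustrative Python sketch for counting those iterations from a saved journal excerpt follows; the file name journal.txt and the two matched substrings are assumptions taken from the messages in this log, not part of any standard tooling.

```python
import re
import sys

# Markers copied from the journal messages in this log (assumption: these two
# substrings are enough to identify the failure and the restart bookkeeping).
CONFIG_ERROR = "failed to load kubelet config file"
RESTART_JOB = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")

def summarize(path: str) -> None:
    """Count kubelet config-load failures and report the highest restart counter seen."""
    failures = 0
    max_counter = 0
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if CONFIG_ERROR in line:
                failures += 1
            m = RESTART_JOB.search(line)
            if m:
                max_counter = max(max_counter, int(m.group(1)))
    print(f"kubelet config-load failures: {failures}")
    print(f"highest restart counter seen: {max_counter}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "journal.txt")
```

Applied to this excerpt it would report three failures and a restart counter of 2, matching the kubelet errors logged at 11:58:56, 11:59:07 and 11:59:17.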
Jan 29 11:59:00.595930 sshd[1552]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:00.606738 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:36076.service: Deactivated successfully. Jan 29 11:59:00.608073 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:59:00.609243 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:59:00.610264 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:36090.service - OpenSSH per-connection server daemon (10.0.0.1:36090). Jan 29 11:59:00.610942 systemd-logind[1420]: Removed session 2. Jan 29 11:59:00.644307 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 36090 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:59:00.645361 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:00.648819 systemd-logind[1420]: New session 3 of user core. Jan 29 11:59:00.660751 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:59:00.708511 sshd[1559]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:00.716728 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:36090.service: Deactivated successfully. Jan 29 11:59:00.718925 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:59:00.720102 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:59:00.721707 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:36098.service - OpenSSH per-connection server daemon (10.0.0.1:36098). Jan 29 11:59:00.722385 systemd-logind[1420]: Removed session 3. Jan 29 11:59:00.756121 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 36098 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:59:00.757411 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:00.761178 systemd-logind[1420]: New session 4 of user core. Jan 29 11:59:00.769676 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:59:00.819037 sshd[1566]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:00.832726 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:36098.service: Deactivated successfully. Jan 29 11:59:00.834085 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:59:00.835253 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:59:00.836272 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:36112.service - OpenSSH per-connection server daemon (10.0.0.1:36112). Jan 29 11:59:00.837038 systemd-logind[1420]: Removed session 4. Jan 29 11:59:00.871019 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 36112 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:59:00.872145 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:00.875691 systemd-logind[1420]: New session 5 of user core. Jan 29 11:59:00.887664 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:59:00.949853 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:59:00.950131 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:59:00.963177 sudo[1576]: pam_unix(sudo:session): session closed for user root Jan 29 11:59:00.966470 sshd[1573]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:00.975726 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:36112.service: Deactivated successfully. 
Jan 29 11:59:00.977103 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:59:00.978313 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:59:00.979495 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:36114.service - OpenSSH per-connection server daemon (10.0.0.1:36114). Jan 29 11:59:00.980134 systemd-logind[1420]: Removed session 5. Jan 29 11:59:01.014314 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 36114 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:59:01.015504 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:01.018756 systemd-logind[1420]: New session 6 of user core. Jan 29 11:59:01.025673 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:59:01.075171 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:59:01.075443 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:59:01.078451 sudo[1585]: pam_unix(sudo:session): session closed for user root Jan 29 11:59:01.082844 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:59:01.083109 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:59:01.103055 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:59:01.103928 auditctl[1588]: No rules Jan 29 11:59:01.104239 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:59:01.105572 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 11:59:01.107519 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:59:01.129011 augenrules[1606]: No rules Jan 29 11:59:01.129713 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:59:01.130584 sudo[1584]: pam_unix(sudo:session): session closed for user root Jan 29 11:59:01.132752 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:01.140794 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:36114.service: Deactivated successfully. Jan 29 11:59:01.142049 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:59:01.143276 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:59:01.144267 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:36122.service - OpenSSH per-connection server daemon (10.0.0.1:36122). Jan 29 11:59:01.144931 systemd-logind[1420]: Removed session 6. Jan 29 11:59:01.178530 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 36122 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:59:01.179649 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:01.182895 systemd-logind[1420]: New session 7 of user core. Jan 29 11:59:01.194680 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:59:01.243204 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:59:01.243800 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:59:01.552791 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 29 11:59:01.552904 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:59:01.812719 dockerd[1635]: time="2025-01-29T11:59:01.812596615Z" level=info msg="Starting up" Jan 29 11:59:01.944031 dockerd[1635]: time="2025-01-29T11:59:01.943816808Z" level=info msg="Loading containers: start." Jan 29 11:59:02.032939 kernel: Initializing XFRM netlink socket Jan 29 11:59:02.094232 systemd-networkd[1379]: docker0: Link UP Jan 29 11:59:02.108802 dockerd[1635]: time="2025-01-29T11:59:02.108749091Z" level=info msg="Loading containers: done." Jan 29 11:59:02.123097 dockerd[1635]: time="2025-01-29T11:59:02.123047067Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:59:02.123248 dockerd[1635]: time="2025-01-29T11:59:02.123149929Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:59:02.123276 dockerd[1635]: time="2025-01-29T11:59:02.123251290Z" level=info msg="Daemon has completed initialization" Jan 29 11:59:02.148563 dockerd[1635]: time="2025-01-29T11:59:02.148428521Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:59:02.148657 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:59:02.791896 containerd[1439]: time="2025-01-29T11:59:02.791857617Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:59:03.587486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270930286.mount: Deactivated successfully. 
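dockerd's own timestamps bracket its start-up: "Starting up" at 2025-01-29T11:59:01.812596615Z and "Daemon has completed initialization" at 2025-01-29T11:59:02.123251290Z, roughly 0.31 s apart. A short illustrative sketch for turning these nanosecond-precision timestamps into an elapsed time; the two literals are copied from the lines above, while the helper itself is only an assumption about how one might post-process such logs.

```python
from datetime import datetime, timezone

def parse_ts(ts: str) -> float:
    """Convert a containerd/dockerd timestamp such as '2025-01-29T11:59:01.812596615Z'
    to POSIX seconds. datetime only stores microseconds, so the nanosecond
    fraction is added back in separately."""
    base, frac = ts.rstrip("Z").split(".")
    whole = datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    return whole.timestamp() + int(frac) / 10 ** len(frac)

started = parse_ts("2025-01-29T11:59:01.812596615Z")   # "Starting up"
ready = parse_ts("2025-01-29T11:59:02.123251290Z")     # "Daemon has completed initialization"
print(f"dockerd initialization took ~{ready - started:.3f}s")  # prints ~0.311s
```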
Jan 29 11:59:04.971280 containerd[1439]: time="2025-01-29T11:59:04.971234109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:04.972291 containerd[1439]: time="2025-01-29T11:59:04.971823298Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618072" Jan 29 11:59:04.973074 containerd[1439]: time="2025-01-29T11:59:04.973029004Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:04.976634 containerd[1439]: time="2025-01-29T11:59:04.976597961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:04.977564 containerd[1439]: time="2025-01-29T11:59:04.977441108Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.1855422s" Jan 29 11:59:04.977564 containerd[1439]: time="2025-01-29T11:59:04.977477348Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 29 11:59:04.978145 containerd[1439]: time="2025-01-29T11:59:04.978120363Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:59:06.490777 containerd[1439]: time="2025-01-29T11:59:06.490718575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:06.491476 containerd[1439]: time="2025-01-29T11:59:06.491400800Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469469" Jan 29 11:59:06.492080 containerd[1439]: time="2025-01-29T11:59:06.492033481Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:06.495285 containerd[1439]: time="2025-01-29T11:59:06.495252661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:06.496373 containerd[1439]: time="2025-01-29T11:59:06.496320124Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.518166627s" Jan 29 11:59:06.496373 containerd[1439]: time="2025-01-29T11:59:06.496368397Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 29 11:59:06.497004 
containerd[1439]: time="2025-01-29T11:59:06.496881227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:59:07.232188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:59:07.242711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:07.329085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:07.332447 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:07.366534 kubelet[1849]: E0129 11:59:07.366483 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:07.369001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:07.369142 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:08.097317 containerd[1439]: time="2025-01-29T11:59:08.097255306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:08.097734 containerd[1439]: time="2025-01-29T11:59:08.097700843Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024219" Jan 29 11:59:08.098507 containerd[1439]: time="2025-01-29T11:59:08.098478415Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:08.101267 containerd[1439]: time="2025-01-29T11:59:08.101229148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:08.103443 containerd[1439]: time="2025-01-29T11:59:08.103411853Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.606479476s" Jan 29 11:59:08.103497 containerd[1439]: time="2025-01-29T11:59:08.103447368Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 29 11:59:08.103883 containerd[1439]: time="2025-01-29T11:59:08.103859180Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:59:09.410438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992578255.mount: Deactivated successfully. 
Jan 29 11:59:09.611524 containerd[1439]: time="2025-01-29T11:59:09.611474407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:09.611962 containerd[1439]: time="2025-01-29T11:59:09.611938733Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772119" Jan 29 11:59:09.612588 containerd[1439]: time="2025-01-29T11:59:09.612564561Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:09.614385 containerd[1439]: time="2025-01-29T11:59:09.614332797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:09.615138 containerd[1439]: time="2025-01-29T11:59:09.615054300Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.511168308s" Jan 29 11:59:09.615138 containerd[1439]: time="2025-01-29T11:59:09.615089760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 29 11:59:09.615798 containerd[1439]: time="2025-01-29T11:59:09.615595715Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:59:10.356602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165733541.mount: Deactivated successfully. 
Jan 29 11:59:11.280053 containerd[1439]: time="2025-01-29T11:59:11.279995682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:11.280480 containerd[1439]: time="2025-01-29T11:59:11.280442589Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 29 11:59:11.281369 containerd[1439]: time="2025-01-29T11:59:11.281336883Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:11.284580 containerd[1439]: time="2025-01-29T11:59:11.284519027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:11.285734 containerd[1439]: time="2025-01-29T11:59:11.285703276Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.67007993s" Jan 29 11:59:11.285772 containerd[1439]: time="2025-01-29T11:59:11.285734157Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 11:59:11.286360 containerd[1439]: time="2025-01-29T11:59:11.286202143Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:59:11.879140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034077142.mount: Deactivated successfully. 
Jan 29 11:59:11.883130 containerd[1439]: time="2025-01-29T11:59:11.882964867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:11.883673 containerd[1439]: time="2025-01-29T11:59:11.883438471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 29 11:59:11.884357 containerd[1439]: time="2025-01-29T11:59:11.884303517Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:11.886511 containerd[1439]: time="2025-01-29T11:59:11.886450953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:11.887201 containerd[1439]: time="2025-01-29T11:59:11.887131276Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 600.900723ms" Jan 29 11:59:11.887201 containerd[1439]: time="2025-01-29T11:59:11.887161479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 11:59:11.887551 containerd[1439]: time="2025-01-29T11:59:11.887529691Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:59:12.572900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212543906.mount: Deactivated successfully. Jan 29 11:59:14.627973 containerd[1439]: time="2025-01-29T11:59:14.627930089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:14.628959 containerd[1439]: time="2025-01-29T11:59:14.628719282Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 29 11:59:14.629723 containerd[1439]: time="2025-01-29T11:59:14.629688369Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:14.633057 containerd[1439]: time="2025-01-29T11:59:14.633024556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:14.635455 containerd[1439]: time="2025-01-29T11:59:14.635416871Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.747847244s" Jan 29 11:59:14.635501 containerd[1439]: time="2025-01-29T11:59:14.635459281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 29 11:59:17.577665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
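Each successful pull logs the image size in bytes alongside the wall-clock duration, so a rough throughput figure falls out directly: the etcd image just above, 66,535,646 bytes in 2.747847244 s, comes to roughly 24 MB/s. Below is a small illustrative Python sketch for extracting these figures from journal text; the regular expression and the abbreviated sample line are assumptions modelled on the messages in this log rather than an official schema.

```python
import re

# Pattern modelled on the containerd "Pulled image ..." messages in this log
# (assumption: it tolerates both escaped \" and plain " around the fields).
PULL = re.compile(
    r'Pulled image \\?"(?P<ref>[^"\\]+)\\?".*?'
    r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def pull_stats(journal_text: str):
    """Yield (image ref, bytes, seconds, MB/s) for every pull completion message."""
    for m in PULL.finditer(journal_text):
        size = int(m.group("size"))
        seconds = float(m.group("dur"))
        if m.group("unit") == "ms":
            seconds /= 1000.0
        yield m.group("ref"), size, seconds, size / seconds / 1e6

if __name__ == "__main__":
    # Abbreviated sample built from the etcd pull message above.
    sample = ('Pulled image "registry.k8s.io/etcd:3.5.15-0" with image id "sha256:27e3830e1402...", '
              'repo tag "registry.k8s.io/etcd:3.5.15-0", size "66535646" in 2.747847244s')
    for ref, size, seconds, rate in pull_stats(sample):
        print(f"{ref}: {size} bytes in {seconds:.3f}s ≈ {rate:.1f} MB/s")
```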
Jan 29 11:59:17.591751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:17.724930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:17.728192 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:17.771808 kubelet[2001]: E0129 11:59:17.771756 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:17.774373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:17.774504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:19.909151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:19.923780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:19.942343 systemd[1]: Reloading requested from client PID 2017 ('systemctl') (unit session-7.scope)... Jan 29 11:59:19.942362 systemd[1]: Reloading... Jan 29 11:59:20.003575 zram_generator::config[2057]: No configuration found. Jan 29 11:59:20.205027 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:59:20.255892 systemd[1]: Reloading finished in 313 ms. Jan 29 11:59:20.292721 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:20.296204 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:59:20.296376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:20.298808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:20.393499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:20.397114 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:59:20.433710 kubelet[2103]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:59:20.433710 kubelet[2103]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:59:20.433710 kubelet[2103]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:59:20.433996 kubelet[2103]: I0129 11:59:20.433821 2103 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:59:21.101656 kubelet[2103]: I0129 11:59:21.101621 2103 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:59:21.101656 kubelet[2103]: I0129 11:59:21.101648 2103 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:59:21.102150 kubelet[2103]: I0129 11:59:21.101961 2103 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:59:21.134962 kubelet[2103]: E0129 11:59:21.134925 2103 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:59:21.136710 kubelet[2103]: I0129 11:59:21.136678 2103 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:59:21.142622 kubelet[2103]: E0129 11:59:21.142512 2103 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:59:21.142622 kubelet[2103]: I0129 11:59:21.142622 2103 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:59:21.145836 kubelet[2103]: I0129 11:59:21.145804 2103 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:59:21.146639 kubelet[2103]: I0129 11:59:21.146613 2103 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:59:21.146784 kubelet[2103]: I0129 11:59:21.146749 2103 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:59:21.146930 kubelet[2103]: I0129 11:59:21.146775 2103 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:59:21.147010 kubelet[2103]: I0129 11:59:21.146993 2103 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:59:21.147010 kubelet[2103]: I0129 11:59:21.147002 2103 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:59:21.147185 kubelet[2103]: I0129 11:59:21.147162 2103 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:59:21.148820 kubelet[2103]: I0129 11:59:21.148798 2103 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:59:21.148820 kubelet[2103]: I0129 11:59:21.148824 2103 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:59:21.151204 kubelet[2103]: I0129 11:59:21.148917 2103 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:59:21.151204 kubelet[2103]: I0129 11:59:21.148930 2103 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:59:21.152692 kubelet[2103]: I0129 11:59:21.152489 2103 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:59:21.154042 kubelet[2103]: W0129 11:59:21.153987 2103 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 29 11:59:21.154118 kubelet[2103]: E0129 11:59:21.154050 2103 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:59:21.154787 kubelet[2103]: I0129 11:59:21.154710 2103 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:59:21.155433 kubelet[2103]: W0129 11:59:21.155257 2103 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 29 11:59:21.155433 kubelet[2103]: E0129 11:59:21.155295 2103 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:59:21.155659 kubelet[2103]: W0129 11:59:21.155623 2103 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:59:21.156379 kubelet[2103]: I0129 11:59:21.156361 2103 server.go:1269] "Started kubelet" Jan 29 11:59:21.158650 kubelet[2103]: I0129 11:59:21.157799 2103 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:59:21.162881 kubelet[2103]: I0129 11:59:21.162844 2103 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:59:21.165397 kubelet[2103]: I0129 11:59:21.165364 2103 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:59:21.165870 kubelet[2103]: I0129 11:59:21.165806 2103 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:59:21.166039 kubelet[2103]: I0129 11:59:21.166018 2103 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:59:21.166080 kubelet[2103]: I0129 11:59:21.166060 2103 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:59:21.166250 kubelet[2103]: I0129 11:59:21.166228 2103 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:59:21.166302 kubelet[2103]: I0129 11:59:21.166293 2103 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:59:21.166335 kubelet[2103]: I0129 11:59:21.166315 2103 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:59:21.168094 kubelet[2103]: E0129 11:59:21.168034 2103 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:59:21.168450 kubelet[2103]: E0129 11:59:21.163720 2103 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f27ffe25f4138 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:59:21.156337976 +0000 UTC m=+0.756452665,LastTimestamp:2025-01-29 11:59:21.156337976 +0000 UTC m=+0.756452665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:59:21.169258 kubelet[2103]: E0129 11:59:21.169206 2103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" Jan 29 11:59:21.169778 kubelet[2103]: I0129 11:59:21.169756 2103 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:59:21.170091 kubelet[2103]: I0129 11:59:21.170071 2103 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:59:21.171687 kubelet[2103]: W0129 11:59:21.171639 2103 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 29 11:59:21.171764 kubelet[2103]: E0129 11:59:21.171687 2103 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:59:21.173055 kubelet[2103]: E0129 11:59:21.172959 2103 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:59:21.173289 kubelet[2103]: I0129 11:59:21.173181 2103 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:59:21.179401 kubelet[2103]: I0129 11:59:21.179351 2103 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:59:21.180350 kubelet[2103]: I0129 11:59:21.180319 2103 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:59:21.180350 kubelet[2103]: I0129 11:59:21.180338 2103 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:59:21.180350 kubelet[2103]: I0129 11:59:21.180351 2103 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:59:21.180489 kubelet[2103]: E0129 11:59:21.180390 2103 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:59:21.185897 kubelet[2103]: I0129 11:59:21.185834 2103 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:59:21.185897 kubelet[2103]: I0129 11:59:21.185864 2103 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:59:21.185897 kubelet[2103]: I0129 11:59:21.185880 2103 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:59:21.186043 kubelet[2103]: W0129 11:59:21.185853 2103 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 29 11:59:21.186188 kubelet[2103]: E0129 11:59:21.186169 2103 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:59:21.252307 kubelet[2103]: I0129 11:59:21.252275 2103 policy_none.go:49] "None policy: Start" Jan 29 11:59:21.253284 kubelet[2103]: I0129 11:59:21.253210 2103 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:59:21.253284 kubelet[2103]: I0129 11:59:21.253239 2103 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:59:21.259402 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:59:21.268680 kubelet[2103]: E0129 11:59:21.268656 2103 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:59:21.269122 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:59:21.271706 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 11:59:21.280686 kubelet[2103]: E0129 11:59:21.280648 2103 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:59:21.282267 kubelet[2103]: I0129 11:59:21.282240 2103 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:59:21.282615 kubelet[2103]: I0129 11:59:21.282601 2103 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:59:21.282677 kubelet[2103]: I0129 11:59:21.282615 2103 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:59:21.282970 kubelet[2103]: I0129 11:59:21.282836 2103 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:59:21.283965 kubelet[2103]: E0129 11:59:21.283946 2103 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:59:21.370369 kubelet[2103]: E0129 11:59:21.370264 2103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" Jan 29 11:59:21.384350 kubelet[2103]: I0129 11:59:21.384304 2103 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:59:21.384689 kubelet[2103]: E0129 11:59:21.384657 2103 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 29 11:59:21.487642 systemd[1]: Created slice kubepods-burstable-podb380497567ec568b158be0ed43ecf74f.slice - libcontainer container kubepods-burstable-podb380497567ec568b158be0ed43ecf74f.slice. Jan 29 11:59:21.498895 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 29 11:59:21.514854 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. 
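The slices just created follow the systemd cgroup driver's kubepods-<qos-class>-pod<pod-uid>.slice layout, and the UIDs embedded in them are the same ones that show up in the volume-reconciler and RunPodSandbox entries that follow. A tiny illustrative sketch for splitting such names back into QoS class and pod UID; the pattern is an assumption that only covers the burstable and besteffort pod sub-slices seen in this log.

```python
import re

# Naming pattern observed in the entries above (assumption: only the
# burstable/besteffort pod slices used by the systemd cgroup driver).
SLICE = re.compile(r"kubepods-(?P<qos>burstable|besteffort)-pod(?P<uid>[0-9a-f_]+)\.slice")

# Slice names copied verbatim from the log above.
names = [
    "kubepods-burstable-podb380497567ec568b158be0ed43ecf74f.slice",
    "kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice",
    "kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice",
]

for name in names:
    m = SLICE.match(name)
    if m:
        print(f"qos={m.group('qos')} pod_uid={m.group('uid')}")
```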
Jan 29 11:59:21.585989 kubelet[2103]: I0129 11:59:21.585972 2103 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:59:21.586253 kubelet[2103]: E0129 11:59:21.586221 2103 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 29 11:59:21.667492 kubelet[2103]: I0129 11:59:21.667420 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:21.667492 kubelet[2103]: I0129 11:59:21.667453 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:59:21.667492 kubelet[2103]: I0129 11:59:21.667474 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b380497567ec568b158be0ed43ecf74f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b380497567ec568b158be0ed43ecf74f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:59:21.667492 kubelet[2103]: I0129 11:59:21.667489 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:21.667626 kubelet[2103]: I0129 11:59:21.667513 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:21.667626 kubelet[2103]: I0129 11:59:21.667537 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:21.667626 kubelet[2103]: I0129 11:59:21.667567 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b380497567ec568b158be0ed43ecf74f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b380497567ec568b158be0ed43ecf74f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:59:21.667626 kubelet[2103]: I0129 11:59:21.667584 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b380497567ec568b158be0ed43ecf74f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b380497567ec568b158be0ed43ecf74f\") " 
pod="kube-system/kube-apiserver-localhost" Jan 29 11:59:21.667626 kubelet[2103]: I0129 11:59:21.667603 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:21.771182 kubelet[2103]: E0129 11:59:21.771126 2103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Jan 29 11:59:21.798535 kubelet[2103]: E0129 11:59:21.798512 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:21.799193 containerd[1439]: time="2025-01-29T11:59:21.799152822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b380497567ec568b158be0ed43ecf74f,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:21.801411 kubelet[2103]: E0129 11:59:21.801363 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:21.801792 containerd[1439]: time="2025-01-29T11:59:21.801722170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:21.817161 kubelet[2103]: E0129 11:59:21.817118 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:21.817504 containerd[1439]: time="2025-01-29T11:59:21.817477029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:21.987509 kubelet[2103]: I0129 11:59:21.987305 2103 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:59:21.987815 kubelet[2103]: E0129 11:59:21.987676 2103 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 29 11:59:22.125519 kubelet[2103]: W0129 11:59:22.125475 2103 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 29 11:59:22.125608 kubelet[2103]: E0129 11:59:22.125530 2103 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:59:22.157787 kubelet[2103]: W0129 11:59:22.157762 2103 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: 
connect: connection refused Jan 29 11:59:22.157877 kubelet[2103]: E0129 11:59:22.157796 2103 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:59:22.206708 kubelet[2103]: W0129 11:59:22.206661 2103 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 29 11:59:22.206766 kubelet[2103]: E0129 11:59:22.206712 2103 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:59:22.400333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482170955.mount: Deactivated successfully. Jan 29 11:59:22.405259 containerd[1439]: time="2025-01-29T11:59:22.404946237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:59:22.406255 containerd[1439]: time="2025-01-29T11:59:22.406222102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:59:22.406739 containerd[1439]: time="2025-01-29T11:59:22.406700796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:59:22.407567 containerd[1439]: time="2025-01-29T11:59:22.407500605Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:59:22.408696 containerd[1439]: time="2025-01-29T11:59:22.408666727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:59:22.408963 containerd[1439]: time="2025-01-29T11:59:22.408921820Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:59:22.409198 containerd[1439]: time="2025-01-29T11:59:22.409171278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 11:59:22.411297 containerd[1439]: time="2025-01-29T11:59:22.411267653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:59:22.413269 containerd[1439]: time="2025-01-29T11:59:22.413243815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 613.995609ms" Jan 29 11:59:22.414153 containerd[1439]: time="2025-01-29T11:59:22.413897713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 612.112606ms" Jan 29 11:59:22.415707 containerd[1439]: time="2025-01-29T11:59:22.415677890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 598.147314ms" Jan 29 11:59:22.573757 kubelet[2103]: E0129 11:59:22.572343 2103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="1.6s" Jan 29 11:59:22.603821 containerd[1439]: time="2025-01-29T11:59:22.603070692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:22.603821 containerd[1439]: time="2025-01-29T11:59:22.603143348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:22.603821 containerd[1439]: time="2025-01-29T11:59:22.603158534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:22.603821 containerd[1439]: time="2025-01-29T11:59:22.603345048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:22.603821 containerd[1439]: time="2025-01-29T11:59:22.603455670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:22.603821 containerd[1439]: time="2025-01-29T11:59:22.603506704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:22.604223 containerd[1439]: time="2025-01-29T11:59:22.604159963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:22.604715 containerd[1439]: time="2025-01-29T11:59:22.604250722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:22.605071 containerd[1439]: time="2025-01-29T11:59:22.604751717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:22.605122 containerd[1439]: time="2025-01-29T11:59:22.605096810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:22.605171 containerd[1439]: time="2025-01-29T11:59:22.605129581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:22.605259 containerd[1439]: time="2025-01-29T11:59:22.605219740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:22.625714 systemd[1]: Started cri-containerd-dc3bc1171bb421d96e3cf48d43f562e7311e0bf8f4ab5df5e85c4ebddd28f6fa.scope - libcontainer container dc3bc1171bb421d96e3cf48d43f562e7311e0bf8f4ab5df5e85c4ebddd28f6fa. Jan 29 11:59:22.630157 systemd[1]: Started cri-containerd-5f1f7b55bde0a7879ae38a3e986ebc9a9440788afd08e25c6c8831f3e4ff043e.scope - libcontainer container 5f1f7b55bde0a7879ae38a3e986ebc9a9440788afd08e25c6c8831f3e4ff043e. Jan 29 11:59:22.631445 systemd[1]: Started cri-containerd-667beb086c1a94f18cc4d61b37cdf102020235e6e4de530d01443b9ab8a6b4bd.scope - libcontainer container 667beb086c1a94f18cc4d61b37cdf102020235e6e4de530d01443b9ab8a6b4bd. Jan 29 11:59:22.659331 containerd[1439]: time="2025-01-29T11:59:22.659171301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc3bc1171bb421d96e3cf48d43f562e7311e0bf8f4ab5df5e85c4ebddd28f6fa\"" Jan 29 11:59:22.660066 containerd[1439]: time="2025-01-29T11:59:22.660029458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b380497567ec568b158be0ed43ecf74f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f1f7b55bde0a7879ae38a3e986ebc9a9440788afd08e25c6c8831f3e4ff043e\"" Jan 29 11:59:22.662615 kubelet[2103]: E0129 11:59:22.662436 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:22.662885 kubelet[2103]: E0129 11:59:22.662632 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:22.667737 containerd[1439]: time="2025-01-29T11:59:22.667507085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"667beb086c1a94f18cc4d61b37cdf102020235e6e4de530d01443b9ab8a6b4bd\"" Jan 29 11:59:22.668107 containerd[1439]: time="2025-01-29T11:59:22.667992853Z" level=info msg="CreateContainer within sandbox \"5f1f7b55bde0a7879ae38a3e986ebc9a9440788afd08e25c6c8831f3e4ff043e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:59:22.668735 kubelet[2103]: E0129 11:59:22.668654 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:22.668791 containerd[1439]: time="2025-01-29T11:59:22.668686636Z" level=info msg="CreateContainer within sandbox \"dc3bc1171bb421d96e3cf48d43f562e7311e0bf8f4ab5df5e85c4ebddd28f6fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:59:22.671082 containerd[1439]: time="2025-01-29T11:59:22.671046936Z" level=info msg="CreateContainer within sandbox \"667beb086c1a94f18cc4d61b37cdf102020235e6e4de530d01443b9ab8a6b4bd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:59:22.687030 containerd[1439]: time="2025-01-29T11:59:22.686996586Z" level=info msg="CreateContainer within sandbox 
\"667beb086c1a94f18cc4d61b37cdf102020235e6e4de530d01443b9ab8a6b4bd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9a67053ca2ebf76b4f1246fdb3e00285ed60c36e58326c5390089bdba9cb1bd4\"" Jan 29 11:59:22.687444 containerd[1439]: time="2025-01-29T11:59:22.687423566Z" level=info msg="StartContainer for \"9a67053ca2ebf76b4f1246fdb3e00285ed60c36e58326c5390089bdba9cb1bd4\"" Jan 29 11:59:22.689643 containerd[1439]: time="2025-01-29T11:59:22.689605905Z" level=info msg="CreateContainer within sandbox \"5f1f7b55bde0a7879ae38a3e986ebc9a9440788afd08e25c6c8831f3e4ff043e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0d869260edee9949d343db5bf2b9f8b00fc628cfdda14c93589e9fbebfe55ba8\"" Jan 29 11:59:22.690353 containerd[1439]: time="2025-01-29T11:59:22.690084759Z" level=info msg="StartContainer for \"0d869260edee9949d343db5bf2b9f8b00fc628cfdda14c93589e9fbebfe55ba8\"" Jan 29 11:59:22.691482 containerd[1439]: time="2025-01-29T11:59:22.691400708Z" level=info msg="CreateContainer within sandbox \"dc3bc1171bb421d96e3cf48d43f562e7311e0bf8f4ab5df5e85c4ebddd28f6fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a8ec44c449f040e786157666aeab39358c730fc88c7d6a21c32dedc9f994c9f3\"" Jan 29 11:59:22.691768 containerd[1439]: time="2025-01-29T11:59:22.691751436Z" level=info msg="StartContainer for \"a8ec44c449f040e786157666aeab39358c730fc88c7d6a21c32dedc9f994c9f3\"" Jan 29 11:59:22.712810 kubelet[2103]: W0129 11:59:22.712710 2103 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 29 11:59:22.712810 kubelet[2103]: E0129 11:59:22.712775 2103 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:59:22.714701 systemd[1]: Started cri-containerd-0d869260edee9949d343db5bf2b9f8b00fc628cfdda14c93589e9fbebfe55ba8.scope - libcontainer container 0d869260edee9949d343db5bf2b9f8b00fc628cfdda14c93589e9fbebfe55ba8. Jan 29 11:59:22.715758 systemd[1]: Started cri-containerd-9a67053ca2ebf76b4f1246fdb3e00285ed60c36e58326c5390089bdba9cb1bd4.scope - libcontainer container 9a67053ca2ebf76b4f1246fdb3e00285ed60c36e58326c5390089bdba9cb1bd4. Jan 29 11:59:22.716596 systemd[1]: Started cri-containerd-a8ec44c449f040e786157666aeab39358c730fc88c7d6a21c32dedc9f994c9f3.scope - libcontainer container a8ec44c449f040e786157666aeab39358c730fc88c7d6a21c32dedc9f994c9f3. 
Jan 29 11:59:22.756189 containerd[1439]: time="2025-01-29T11:59:22.754273132Z" level=info msg="StartContainer for \"0d869260edee9949d343db5bf2b9f8b00fc628cfdda14c93589e9fbebfe55ba8\" returns successfully" Jan 29 11:59:22.756189 containerd[1439]: time="2025-01-29T11:59:22.754367049Z" level=info msg="StartContainer for \"9a67053ca2ebf76b4f1246fdb3e00285ed60c36e58326c5390089bdba9cb1bd4\" returns successfully" Jan 29 11:59:22.768623 containerd[1439]: time="2025-01-29T11:59:22.763303059Z" level=info msg="StartContainer for \"a8ec44c449f040e786157666aeab39358c730fc88c7d6a21c32dedc9f994c9f3\" returns successfully" Jan 29 11:59:22.795862 kubelet[2103]: I0129 11:59:22.793338 2103 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:59:22.804566 kubelet[2103]: E0129 11:59:22.800096 2103 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 29 11:59:23.195114 kubelet[2103]: E0129 11:59:23.195046 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:23.196662 kubelet[2103]: E0129 11:59:23.196641 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:23.198954 kubelet[2103]: E0129 11:59:23.198933 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:24.200718 kubelet[2103]: E0129 11:59:24.200660 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:24.401321 kubelet[2103]: I0129 11:59:24.401284 2103 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:59:24.415499 kubelet[2103]: E0129 11:59:24.415449 2103 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:59:24.459495 kubelet[2103]: E0129 11:59:24.459104 2103 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f27ffe25f4138 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:59:21.156337976 +0000 UTC m=+0.756452665,LastTimestamp:2025-01-29 11:59:21.156337976 +0000 UTC m=+0.756452665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:59:24.514031 kubelet[2103]: E0129 11:59:24.513799 2103 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f27ffe35cb9c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:59:21.172949444 +0000 UTC m=+0.773064093,LastTimestamp:2025-01-29 11:59:21.172949444 +0000 UTC m=+0.773064093,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:59:24.515974 kubelet[2103]: I0129 11:59:24.515723 2103 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:59:24.567622 kubelet[2103]: E0129 11:59:24.567517 2103 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f27ffe4183326 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:59:21.18523575 +0000 UTC m=+0.785350399,LastTimestamp:2025-01-29 11:59:21.18523575 +0000 UTC m=+0.785350399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:59:24.620416 kubelet[2103]: E0129 11:59:24.620282 2103 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f27ffe418429a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:59:21.185239706 +0000 UTC m=+0.785354395,LastTimestamp:2025-01-29 11:59:21.185239706 +0000 UTC m=+0.785354395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:59:24.674082 kubelet[2103]: E0129 11:59:24.673809 2103 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f27ffe4184c97 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:59:21.185242263 +0000 UTC m=+0.785356952,LastTimestamp:2025-01-29 11:59:21.185242263 +0000 UTC m=+0.785356952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:59:25.151595 kubelet[2103]: I0129 11:59:25.151557 2103 apiserver.go:52] "Watching apiserver" Jan 29 11:59:25.167232 kubelet[2103]: I0129 11:59:25.167158 2103 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:59:25.933210 kubelet[2103]: E0129 11:59:25.933161 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:26.202170 kubelet[2103]: E0129 11:59:26.201795 2103 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:26.355243 systemd[1]: Reloading requested from client PID 2382 ('systemctl') (unit session-7.scope)... Jan 29 11:59:26.355256 systemd[1]: Reloading... Jan 29 11:59:26.421677 zram_generator::config[2424]: No configuration found. Jan 29 11:59:26.502582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:59:26.564754 systemd[1]: Reloading finished in 209 ms. Jan 29 11:59:26.597502 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:26.609430 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:59:26.609682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:26.609730 systemd[1]: kubelet.service: Consumed 1.134s CPU time, 121.8M memory peak, 0B memory swap peak. Jan 29 11:59:26.616800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:26.703488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:26.707293 (kubelet)[2463]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:59:26.746323 kubelet[2463]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:59:26.746323 kubelet[2463]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:59:26.746323 kubelet[2463]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:59:26.746676 kubelet[2463]: I0129 11:59:26.746403 2463 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:59:26.753166 kubelet[2463]: I0129 11:59:26.752809 2463 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:59:26.753166 kubelet[2463]: I0129 11:59:26.752941 2463 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:59:26.753166 kubelet[2463]: I0129 11:59:26.753137 2463 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:59:26.754988 kubelet[2463]: I0129 11:59:26.754966 2463 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 11:59:26.757261 kubelet[2463]: I0129 11:59:26.757234 2463 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:59:26.759965 kubelet[2463]: E0129 11:59:26.759934 2463 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:59:26.759965 kubelet[2463]: I0129 11:59:26.759967 2463 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:59:26.762805 kubelet[2463]: I0129 11:59:26.762742 2463 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:59:26.762911 kubelet[2463]: I0129 11:59:26.762837 2463 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:59:26.762964 kubelet[2463]: I0129 11:59:26.762923 2463 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:59:26.763100 kubelet[2463]: I0129 11:59:26.762939 2463 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:59:26.763185 kubelet[2463]: I0129 11:59:26.763109 2463 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:59:26.763185 kubelet[2463]: I0129 11:59:26.763118 2463 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:59:26.763185 kubelet[2463]: I0129 11:59:26.763144 2463 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:59:26.763262 kubelet[2463]: I0129 11:59:26.763240 2463 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:59:26.763262 kubelet[2463]: I0129 11:59:26.763251 2463 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:59:26.763578 kubelet[2463]: I0129 11:59:26.763269 
2463 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:59:26.763578 kubelet[2463]: I0129 11:59:26.763278 2463 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:59:26.767644 kubelet[2463]: I0129 11:59:26.767493 2463 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:59:26.768122 kubelet[2463]: I0129 11:59:26.768095 2463 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:59:26.768582 kubelet[2463]: I0129 11:59:26.768469 2463 server.go:1269] "Started kubelet" Jan 29 11:59:26.768836 kubelet[2463]: I0129 11:59:26.768798 2463 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:59:26.770157 kubelet[2463]: I0129 11:59:26.769838 2463 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:59:26.770810 kubelet[2463]: I0129 11:59:26.770367 2463 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:59:26.770810 kubelet[2463]: I0129 11:59:26.770690 2463 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:59:26.773859 kubelet[2463]: I0129 11:59:26.773836 2463 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:59:26.774329 kubelet[2463]: I0129 11:59:26.774294 2463 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:59:26.775334 kubelet[2463]: I0129 11:59:26.775315 2463 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:59:26.775437 kubelet[2463]: E0129 11:59:26.775424 2463 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:59:26.776071 kubelet[2463]: I0129 11:59:26.775776 2463 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:59:26.776071 kubelet[2463]: I0129 11:59:26.775909 2463 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:59:26.777890 kubelet[2463]: I0129 11:59:26.776789 2463 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:59:26.777890 kubelet[2463]: I0129 11:59:26.776879 2463 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:59:26.787508 kubelet[2463]: I0129 11:59:26.784818 2463 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:59:26.787894 kubelet[2463]: I0129 11:59:26.787857 2463 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:59:26.790770 kubelet[2463]: I0129 11:59:26.790733 2463 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:59:26.790770 kubelet[2463]: I0129 11:59:26.790758 2463 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:59:26.790770 kubelet[2463]: I0129 11:59:26.790775 2463 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:59:26.790883 kubelet[2463]: E0129 11:59:26.790811 2463 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:59:26.822115 kubelet[2463]: I0129 11:59:26.822075 2463 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:59:26.822115 kubelet[2463]: I0129 11:59:26.822097 2463 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:59:26.822115 kubelet[2463]: I0129 11:59:26.822116 2463 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:59:26.822258 kubelet[2463]: I0129 11:59:26.822243 2463 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:59:26.822282 kubelet[2463]: I0129 11:59:26.822255 2463 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:59:26.822282 kubelet[2463]: I0129 11:59:26.822272 2463 policy_none.go:49] "None policy: Start" Jan 29 11:59:26.823437 kubelet[2463]: I0129 11:59:26.822938 2463 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:59:26.823437 kubelet[2463]: I0129 11:59:26.822963 2463 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:59:26.823437 kubelet[2463]: I0129 11:59:26.823122 2463 state_mem.go:75] "Updated machine memory state" Jan 29 11:59:26.826972 kubelet[2463]: I0129 11:59:26.826949 2463 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:59:26.828051 kubelet[2463]: I0129 11:59:26.827602 2463 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:59:26.828051 kubelet[2463]: I0129 11:59:26.827621 2463 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:59:26.828051 kubelet[2463]: I0129 11:59:26.827954 2463 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:59:26.897379 kubelet[2463]: E0129 11:59:26.897323 2463 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 11:59:26.932464 kubelet[2463]: I0129 11:59:26.932431 2463 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:59:26.937656 kubelet[2463]: I0129 11:59:26.937631 2463 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:59:26.937721 kubelet[2463]: I0129 11:59:26.937710 2463 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:59:27.076939 kubelet[2463]: I0129 11:59:27.076790 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b380497567ec568b158be0ed43ecf74f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b380497567ec568b158be0ed43ecf74f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:59:27.076939 kubelet[2463]: I0129 11:59:27.076824 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b380497567ec568b158be0ed43ecf74f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"b380497567ec568b158be0ed43ecf74f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:59:27.076939 kubelet[2463]: I0129 11:59:27.076846 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:27.076939 kubelet[2463]: I0129 11:59:27.076864 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:27.077221 kubelet[2463]: I0129 11:59:27.077104 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:59:27.077221 kubelet[2463]: I0129 11:59:27.077134 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b380497567ec568b158be0ed43ecf74f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b380497567ec568b158be0ed43ecf74f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:59:27.077221 kubelet[2463]: I0129 11:59:27.077150 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:27.077221 kubelet[2463]: I0129 11:59:27.077166 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:27.077221 kubelet[2463]: I0129 11:59:27.077183 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:59:27.198336 kubelet[2463]: E0129 11:59:27.198219 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:27.198336 kubelet[2463]: E0129 11:59:27.198219 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:27.198336 kubelet[2463]: E0129 11:59:27.198311 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:27.764099 kubelet[2463]: I0129 11:59:27.764050 2463 apiserver.go:52] "Watching apiserver" Jan 29 11:59:27.776614 kubelet[2463]: I0129 11:59:27.776574 2463 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:59:27.807483 kubelet[2463]: E0129 11:59:27.807152 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:27.807483 kubelet[2463]: E0129 11:59:27.807399 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:27.808207 kubelet[2463]: E0129 11:59:27.808053 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:27.827943 kubelet[2463]: I0129 11:59:27.827833 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.827819335 podStartE2EDuration="2.827819335s" podCreationTimestamp="2025-01-29 11:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:59:27.826417008 +0000 UTC m=+1.114386317" watchObservedRunningTime="2025-01-29 11:59:27.827819335 +0000 UTC m=+1.115788644" Jan 29 11:59:27.835208 kubelet[2463]: I0129 11:59:27.834957 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.834945107 podStartE2EDuration="1.834945107s" podCreationTimestamp="2025-01-29 11:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:59:27.834793849 +0000 UTC m=+1.122763158" watchObservedRunningTime="2025-01-29 11:59:27.834945107 +0000 UTC m=+1.122914376" Jan 29 11:59:27.864580 kubelet[2463]: I0129 11:59:27.863595 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.86357689 podStartE2EDuration="1.86357689s" podCreationTimestamp="2025-01-29 11:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:59:27.845186412 +0000 UTC m=+1.133155721" watchObservedRunningTime="2025-01-29 11:59:27.86357689 +0000 UTC m=+1.151546199" Jan 29 11:59:28.810106 kubelet[2463]: E0129 11:59:28.810069 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:29.812220 kubelet[2463]: E0129 11:59:29.812150 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:30.052492 kubelet[2463]: E0129 11:59:30.052443 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:30.155917 kubelet[2463]: E0129 11:59:30.155868 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:31.390635 sudo[1617]: pam_unix(sudo:session): session closed for user root Jan 29 11:59:31.392529 sshd[1614]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:31.395129 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:36122.service: Deactivated successfully. Jan 29 11:59:31.396642 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:59:31.396828 systemd[1]: session-7.scope: Consumed 7.180s CPU time, 155.8M memory peak, 0B memory swap peak. Jan 29 11:59:31.397953 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:59:31.399279 systemd-logind[1420]: Removed session 7. Jan 29 11:59:32.789001 kubelet[2463]: I0129 11:59:32.788970 2463 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:59:32.789683 containerd[1439]: time="2025-01-29T11:59:32.789650035Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:59:32.789970 kubelet[2463]: I0129 11:59:32.789835 2463 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:59:33.765568 systemd[1]: Created slice kubepods-besteffort-poda4d049df_1fd5_4537_8a64_fa5233069a55.slice - libcontainer container kubepods-besteffort-poda4d049df_1fd5_4537_8a64_fa5233069a55.slice. Jan 29 11:59:33.825757 kubelet[2463]: I0129 11:59:33.825709 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4d049df-1fd5-4537-8a64-fa5233069a55-lib-modules\") pod \"kube-proxy-ggqqd\" (UID: \"a4d049df-1fd5-4537-8a64-fa5233069a55\") " pod="kube-system/kube-proxy-ggqqd" Jan 29 11:59:33.825757 kubelet[2463]: I0129 11:59:33.825749 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4d049df-1fd5-4537-8a64-fa5233069a55-kube-proxy\") pod \"kube-proxy-ggqqd\" (UID: \"a4d049df-1fd5-4537-8a64-fa5233069a55\") " pod="kube-system/kube-proxy-ggqqd" Jan 29 11:59:33.825757 kubelet[2463]: I0129 11:59:33.825768 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4d049df-1fd5-4537-8a64-fa5233069a55-xtables-lock\") pod \"kube-proxy-ggqqd\" (UID: \"a4d049df-1fd5-4537-8a64-fa5233069a55\") " pod="kube-system/kube-proxy-ggqqd" Jan 29 11:59:33.826201 kubelet[2463]: I0129 11:59:33.825783 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wsj\" (UniqueName: \"kubernetes.io/projected/a4d049df-1fd5-4537-8a64-fa5233069a55-kube-api-access-r7wsj\") pod \"kube-proxy-ggqqd\" (UID: \"a4d049df-1fd5-4537-8a64-fa5233069a55\") " pod="kube-system/kube-proxy-ggqqd" Jan 29 11:59:34.078655 kubelet[2463]: E0129 11:59:34.078327 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:34.079528 systemd[1]: Created slice kubepods-besteffort-pod867656f2_ee30_4ee8_a757_b30c15b1d8de.slice - libcontainer container kubepods-besteffort-pod867656f2_ee30_4ee8_a757_b30c15b1d8de.slice. 
Jan 29 11:59:34.081110 containerd[1439]: time="2025-01-29T11:59:34.080863846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggqqd,Uid:a4d049df-1fd5-4537-8a64-fa5233069a55,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:34.098301 containerd[1439]: time="2025-01-29T11:59:34.098220610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:34.098301 containerd[1439]: time="2025-01-29T11:59:34.098274535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:34.098301 containerd[1439]: time="2025-01-29T11:59:34.098289856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:34.098537 containerd[1439]: time="2025-01-29T11:59:34.098374703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:34.117708 systemd[1]: Started cri-containerd-6a86e0f5d558ba9f427619a355c36f4216d4102bd8f4b1a48194ff520d22a204.scope - libcontainer container 6a86e0f5d558ba9f427619a355c36f4216d4102bd8f4b1a48194ff520d22a204. Jan 29 11:59:34.128023 kubelet[2463]: I0129 11:59:34.127924 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/867656f2-ee30-4ee8-a757-b30c15b1d8de-var-lib-calico\") pod \"tigera-operator-76c4976dd7-jvqxt\" (UID: \"867656f2-ee30-4ee8-a757-b30c15b1d8de\") " pod="tigera-operator/tigera-operator-76c4976dd7-jvqxt" Jan 29 11:59:34.128023 kubelet[2463]: I0129 11:59:34.127973 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjs99\" (UniqueName: \"kubernetes.io/projected/867656f2-ee30-4ee8-a757-b30c15b1d8de-kube-api-access-mjs99\") pod \"tigera-operator-76c4976dd7-jvqxt\" (UID: \"867656f2-ee30-4ee8-a757-b30c15b1d8de\") " pod="tigera-operator/tigera-operator-76c4976dd7-jvqxt" Jan 29 11:59:34.136450 containerd[1439]: time="2025-01-29T11:59:34.136407700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggqqd,Uid:a4d049df-1fd5-4537-8a64-fa5233069a55,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a86e0f5d558ba9f427619a355c36f4216d4102bd8f4b1a48194ff520d22a204\"" Jan 29 11:59:34.137162 kubelet[2463]: E0129 11:59:34.137120 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:34.139328 containerd[1439]: time="2025-01-29T11:59:34.139294214Z" level=info msg="CreateContainer within sandbox \"6a86e0f5d558ba9f427619a355c36f4216d4102bd8f4b1a48194ff520d22a204\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:59:34.152789 containerd[1439]: time="2025-01-29T11:59:34.152698299Z" level=info msg="CreateContainer within sandbox \"6a86e0f5d558ba9f427619a355c36f4216d4102bd8f4b1a48194ff520d22a204\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e29859fdda11c1408b7053c89ba24500ed64e47934c57d02596f65eb97a5415\"" Jan 29 11:59:34.153371 containerd[1439]: time="2025-01-29T11:59:34.153171817Z" level=info msg="StartContainer for \"7e29859fdda11c1408b7053c89ba24500ed64e47934c57d02596f65eb97a5415\"" Jan 29 11:59:34.175706 systemd[1]: Started 
cri-containerd-7e29859fdda11c1408b7053c89ba24500ed64e47934c57d02596f65eb97a5415.scope - libcontainer container 7e29859fdda11c1408b7053c89ba24500ed64e47934c57d02596f65eb97a5415. Jan 29 11:59:34.197407 containerd[1439]: time="2025-01-29T11:59:34.197373194Z" level=info msg="StartContainer for \"7e29859fdda11c1408b7053c89ba24500ed64e47934c57d02596f65eb97a5415\" returns successfully" Jan 29 11:59:34.382993 containerd[1439]: time="2025-01-29T11:59:34.382877845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-jvqxt,Uid:867656f2-ee30-4ee8-a757-b30c15b1d8de,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:59:34.405033 containerd[1439]: time="2025-01-29T11:59:34.404927990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:34.405033 containerd[1439]: time="2025-01-29T11:59:34.404988114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:34.405033 containerd[1439]: time="2025-01-29T11:59:34.404999675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:34.405287 containerd[1439]: time="2025-01-29T11:59:34.405079762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:34.428712 systemd[1]: Started cri-containerd-6ce0b94b63921bf83484497a49d67f3816f675ce5df397d8779bab3fb925e69e.scope - libcontainer container 6ce0b94b63921bf83484497a49d67f3816f675ce5df397d8779bab3fb925e69e. Jan 29 11:59:34.452137 containerd[1439]: time="2025-01-29T11:59:34.452097047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-jvqxt,Uid:867656f2-ee30-4ee8-a757-b30c15b1d8de,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6ce0b94b63921bf83484497a49d67f3816f675ce5df397d8779bab3fb925e69e\"" Jan 29 11:59:34.453877 containerd[1439]: time="2025-01-29T11:59:34.453851109Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:59:34.821613 kubelet[2463]: E0129 11:59:34.821471 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:34.832046 kubelet[2463]: I0129 11:59:34.831978 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ggqqd" podStartSLOduration=1.831964466 podStartE2EDuration="1.831964466s" podCreationTimestamp="2025-01-29 11:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:59:34.831371298 +0000 UTC m=+8.119340607" watchObservedRunningTime="2025-01-29 11:59:34.831964466 +0000 UTC m=+8.119933775" Jan 29 11:59:35.748279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988814416.mount: Deactivated successfully. 
Jan 29 11:59:35.968887 containerd[1439]: time="2025-01-29T11:59:35.968527832Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:35.969846 containerd[1439]: time="2025-01-29T11:59:35.969809811Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 11:59:35.970591 containerd[1439]: time="2025-01-29T11:59:35.970528506Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:35.972733 containerd[1439]: time="2025-01-29T11:59:35.972669950Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:35.973488 containerd[1439]: time="2025-01-29T11:59:35.973460091Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.519573619s" Jan 29 11:59:35.973488 containerd[1439]: time="2025-01-29T11:59:35.973528096Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 11:59:35.978706 containerd[1439]: time="2025-01-29T11:59:35.978678171Z" level=info msg="CreateContainer within sandbox \"6ce0b94b63921bf83484497a49d67f3816f675ce5df397d8779bab3fb925e69e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:59:35.988300 containerd[1439]: time="2025-01-29T11:59:35.988225743Z" level=info msg="CreateContainer within sandbox \"6ce0b94b63921bf83484497a49d67f3816f675ce5df397d8779bab3fb925e69e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b224d7ebf2d6067ef0ab169855858584281fd92bcb5a7ce79db772173ce4b0d3\"" Jan 29 11:59:35.988653 containerd[1439]: time="2025-01-29T11:59:35.988599972Z" level=info msg="StartContainer for \"b224d7ebf2d6067ef0ab169855858584281fd92bcb5a7ce79db772173ce4b0d3\"" Jan 29 11:59:36.020717 systemd[1]: Started cri-containerd-b224d7ebf2d6067ef0ab169855858584281fd92bcb5a7ce79db772173ce4b0d3.scope - libcontainer container b224d7ebf2d6067ef0ab169855858584281fd92bcb5a7ce79db772173ce4b0d3. 
Jan 29 11:59:36.048185 containerd[1439]: time="2025-01-29T11:59:36.046409268Z" level=info msg="StartContainer for \"b224d7ebf2d6067ef0ab169855858584281fd92bcb5a7ce79db772173ce4b0d3\" returns successfully" Jan 29 11:59:39.268683 kubelet[2463]: E0129 11:59:39.268646 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:39.279746 kubelet[2463]: I0129 11:59:39.279688 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-jvqxt" podStartSLOduration=3.755768722 podStartE2EDuration="5.279662555s" podCreationTimestamp="2025-01-29 11:59:34 +0000 UTC" firstStartedPulling="2025-01-29 11:59:34.453328026 +0000 UTC m=+7.741297335" lastFinishedPulling="2025-01-29 11:59:35.977221859 +0000 UTC m=+9.265191168" observedRunningTime="2025-01-29 11:59:36.83481417 +0000 UTC m=+10.122783519" watchObservedRunningTime="2025-01-29 11:59:39.279662555 +0000 UTC m=+12.567631864" Jan 29 11:59:39.830258 kubelet[2463]: E0129 11:59:39.830172 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:40.063009 kubelet[2463]: E0129 11:59:40.062103 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:40.114690 update_engine[1427]: I20250129 11:59:40.114202 1427 update_attempter.cc:509] Updating boot flags... Jan 29 11:59:40.155787 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2859) Jan 29 11:59:40.174896 kubelet[2463]: E0129 11:59:40.174641 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:40.223013 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2861) Jan 29 11:59:40.287469 systemd[1]: Created slice kubepods-besteffort-podafb8489e_728e_4846_bbac_a9a60ea63ce4.slice - libcontainer container kubepods-besteffort-podafb8489e_728e_4846_bbac_a9a60ea63ce4.slice. Jan 29 11:59:40.338822 systemd[1]: Created slice kubepods-besteffort-pod0a18d7af_e059_4392_b34d_01b16c571209.slice - libcontainer container kubepods-besteffort-pod0a18d7af_e059_4392_b34d_01b16c571209.slice. 
Jan 29 11:59:40.367178 kubelet[2463]: I0129 11:59:40.366770 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-bin-dir\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367178 kubelet[2463]: I0129 11:59:40.366811 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-log-dir\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367178 kubelet[2463]: I0129 11:59:40.366832 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-policysync\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367178 kubelet[2463]: I0129 11:59:40.366850 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-xtables-lock\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367178 kubelet[2463]: I0129 11:59:40.366864 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a18d7af-e059-4392-b34d-01b16c571209-tigera-ca-bundle\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367621 kubelet[2463]: I0129 11:59:40.366882 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-var-run-calico\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367621 kubelet[2463]: I0129 11:59:40.366898 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-net-dir\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367621 kubelet[2463]: I0129 11:59:40.366913 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n6n2\" (UniqueName: \"kubernetes.io/projected/0a18d7af-e059-4392-b34d-01b16c571209-kube-api-access-2n6n2\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367621 kubelet[2463]: I0129 11:59:40.366934 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/afb8489e-728e-4846-bbac-a9a60ea63ce4-typha-certs\") pod \"calico-typha-5477c76fd7-vbf96\" (UID: \"afb8489e-728e-4846-bbac-a9a60ea63ce4\") " pod="calico-system/calico-typha-5477c76fd7-vbf96" Jan 29 11:59:40.367621 kubelet[2463]: I0129 11:59:40.366949 2463 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-lib-modules\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367729 kubelet[2463]: I0129 11:59:40.366973 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-flexvol-driver-host\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367729 kubelet[2463]: I0129 11:59:40.366988 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xplng\" (UniqueName: \"kubernetes.io/projected/afb8489e-728e-4846-bbac-a9a60ea63ce4-kube-api-access-xplng\") pod \"calico-typha-5477c76fd7-vbf96\" (UID: \"afb8489e-728e-4846-bbac-a9a60ea63ce4\") " pod="calico-system/calico-typha-5477c76fd7-vbf96" Jan 29 11:59:40.367729 kubelet[2463]: I0129 11:59:40.367004 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0a18d7af-e059-4392-b34d-01b16c571209-node-certs\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367729 kubelet[2463]: I0129 11:59:40.367018 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-var-lib-calico\") pod \"calico-node-k4d7q\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " pod="calico-system/calico-node-k4d7q" Jan 29 11:59:40.367729 kubelet[2463]: I0129 11:59:40.367032 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afb8489e-728e-4846-bbac-a9a60ea63ce4-tigera-ca-bundle\") pod \"calico-typha-5477c76fd7-vbf96\" (UID: \"afb8489e-728e-4846-bbac-a9a60ea63ce4\") " pod="calico-system/calico-typha-5477c76fd7-vbf96" Jan 29 11:59:40.439472 kubelet[2463]: E0129 11:59:40.439409 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99dlc" podUID="9a967c1a-dfaf-44db-9a3a-468c81bc933d" Jan 29 11:59:40.467818 kubelet[2463]: I0129 11:59:40.467659 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9a967c1a-dfaf-44db-9a3a-468c81bc933d-varrun\") pod \"csi-node-driver-99dlc\" (UID: \"9a967c1a-dfaf-44db-9a3a-468c81bc933d\") " pod="calico-system/csi-node-driver-99dlc" Jan 29 11:59:40.467818 kubelet[2463]: I0129 11:59:40.467703 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a967c1a-dfaf-44db-9a3a-468c81bc933d-kubelet-dir\") pod \"csi-node-driver-99dlc\" (UID: \"9a967c1a-dfaf-44db-9a3a-468c81bc933d\") " pod="calico-system/csi-node-driver-99dlc" Jan 29 11:59:40.470750 kubelet[2463]: I0129 11:59:40.468174 2463 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a967c1a-dfaf-44db-9a3a-468c81bc933d-registration-dir\") pod \"csi-node-driver-99dlc\" (UID: \"9a967c1a-dfaf-44db-9a3a-468c81bc933d\") " pod="calico-system/csi-node-driver-99dlc" Jan 29 11:59:40.470750 kubelet[2463]: I0129 11:59:40.470688 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrp6\" (UniqueName: \"kubernetes.io/projected/9a967c1a-dfaf-44db-9a3a-468c81bc933d-kube-api-access-vfrp6\") pod \"csi-node-driver-99dlc\" (UID: \"9a967c1a-dfaf-44db-9a3a-468c81bc933d\") " pod="calico-system/csi-node-driver-99dlc" Jan 29 11:59:40.471172 kubelet[2463]: I0129 11:59:40.470730 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a967c1a-dfaf-44db-9a3a-468c81bc933d-socket-dir\") pod \"csi-node-driver-99dlc\" (UID: \"9a967c1a-dfaf-44db-9a3a-468c81bc933d\") " pod="calico-system/csi-node-driver-99dlc" Jan 29 11:59:40.492583 kubelet[2463]: E0129 11:59:40.490201 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.492583 kubelet[2463]: W0129 11:59:40.490229 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.492583 kubelet[2463]: E0129 11:59:40.490251 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.492583 kubelet[2463]: E0129 11:59:40.491712 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.492583 kubelet[2463]: W0129 11:59:40.491725 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.492583 kubelet[2463]: E0129 11:59:40.491737 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.493320 kubelet[2463]: E0129 11:59:40.493296 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.493320 kubelet[2463]: W0129 11:59:40.493315 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.493409 kubelet[2463]: E0129 11:59:40.493330 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:59:40.494336 kubelet[2463]: E0129 11:59:40.494305 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.494336 kubelet[2463]: W0129 11:59:40.494322 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.494336 kubelet[2463]: E0129 11:59:40.494335 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.571593 kubelet[2463]: E0129 11:59:40.571544 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.571593 kubelet[2463]: W0129 11:59:40.571580 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.571593 kubelet[2463]: E0129 11:59:40.571600 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.573586 kubelet[2463]: E0129 11:59:40.573520 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.573586 kubelet[2463]: W0129 11:59:40.573545 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.573586 kubelet[2463]: E0129 11:59:40.573579 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.576750 kubelet[2463]: E0129 11:59:40.576562 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.576750 kubelet[2463]: W0129 11:59:40.576580 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.576750 kubelet[2463]: E0129 11:59:40.576640 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.576972 kubelet[2463]: E0129 11:59:40.576954 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.576972 kubelet[2463]: W0129 11:59:40.576970 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.577117 kubelet[2463]: E0129 11:59:40.577033 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:59:40.577295 kubelet[2463]: E0129 11:59:40.577277 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.577295 kubelet[2463]: W0129 11:59:40.577291 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.577425 kubelet[2463]: E0129 11:59:40.577354 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.577697 kubelet[2463]: E0129 11:59:40.577678 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.577697 kubelet[2463]: W0129 11:59:40.577693 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.577778 kubelet[2463]: E0129 11:59:40.577714 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.577982 kubelet[2463]: E0129 11:59:40.577965 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.577982 kubelet[2463]: W0129 11:59:40.577979 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.578089 kubelet[2463]: E0129 11:59:40.578062 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.578247 kubelet[2463]: E0129 11:59:40.578232 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.578247 kubelet[2463]: W0129 11:59:40.578244 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.578403 kubelet[2463]: E0129 11:59:40.578327 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.578579 kubelet[2463]: E0129 11:59:40.578559 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.578579 kubelet[2463]: W0129 11:59:40.578574 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.578697 kubelet[2463]: E0129 11:59:40.578638 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:59:40.578870 kubelet[2463]: E0129 11:59:40.578852 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.578870 kubelet[2463]: W0129 11:59:40.578871 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.579000 kubelet[2463]: E0129 11:59:40.578927 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.581989 kubelet[2463]: E0129 11:59:40.581964 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.581989 kubelet[2463]: W0129 11:59:40.581983 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.582233 kubelet[2463]: E0129 11:59:40.582007 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.582289 kubelet[2463]: E0129 11:59:40.582249 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.582289 kubelet[2463]: W0129 11:59:40.582272 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.582391 kubelet[2463]: E0129 11:59:40.582362 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.582704 kubelet[2463]: E0129 11:59:40.582566 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.582704 kubelet[2463]: W0129 11:59:40.582576 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.582704 kubelet[2463]: E0129 11:59:40.582602 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.583203 kubelet[2463]: E0129 11:59:40.583184 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.583203 kubelet[2463]: W0129 11:59:40.583198 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.583329 kubelet[2463]: E0129 11:59:40.583286 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:59:40.584616 kubelet[2463]: E0129 11:59:40.583464 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.584616 kubelet[2463]: W0129 11:59:40.583475 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.584616 kubelet[2463]: E0129 11:59:40.583510 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.585368 kubelet[2463]: E0129 11:59:40.585323 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.585368 kubelet[2463]: W0129 11:59:40.585345 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.585605 kubelet[2463]: E0129 11:59:40.585445 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.585605 kubelet[2463]: E0129 11:59:40.585586 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.585605 kubelet[2463]: W0129 11:59:40.585596 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.585747 kubelet[2463]: E0129 11:59:40.585696 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.587282 kubelet[2463]: E0129 11:59:40.587248 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.587282 kubelet[2463]: W0129 11:59:40.587272 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.587464 kubelet[2463]: E0129 11:59:40.587382 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.587654 kubelet[2463]: E0129 11:59:40.587635 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.587654 kubelet[2463]: W0129 11:59:40.587649 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.587807 kubelet[2463]: E0129 11:59:40.587736 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:59:40.587993 kubelet[2463]: E0129 11:59:40.587975 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.588047 kubelet[2463]: W0129 11:59:40.587997 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.588813 kubelet[2463]: E0129 11:59:40.588783 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.591193 kubelet[2463]: E0129 11:59:40.590765 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.591193 kubelet[2463]: W0129 11:59:40.590796 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.591193 kubelet[2463]: E0129 11:59:40.590831 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.591193 kubelet[2463]: E0129 11:59:40.590928 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:40.594249 kubelet[2463]: E0129 11:59:40.594210 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.594249 kubelet[2463]: W0129 11:59:40.594238 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.594365 kubelet[2463]: E0129 11:59:40.594301 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.598731 kubelet[2463]: E0129 11:59:40.595698 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.598731 kubelet[2463]: W0129 11:59:40.595715 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.598731 kubelet[2463]: E0129 11:59:40.595863 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:59:40.598731 kubelet[2463]: E0129 11:59:40.596215 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.598731 kubelet[2463]: W0129 11:59:40.596226 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.598731 kubelet[2463]: E0129 11:59:40.596236 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.598731 kubelet[2463]: E0129 11:59:40.596528 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.598731 kubelet[2463]: W0129 11:59:40.596540 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.598731 kubelet[2463]: E0129 11:59:40.596561 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.603081 containerd[1439]: time="2025-01-29T11:59:40.603033874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5477c76fd7-vbf96,Uid:afb8489e-728e-4846-bbac-a9a60ea63ce4,Namespace:calico-system,Attempt:0,}" Jan 29 11:59:40.615544 kubelet[2463]: E0129 11:59:40.615511 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:59:40.615754 kubelet[2463]: W0129 11:59:40.615679 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:59:40.615754 kubelet[2463]: E0129 11:59:40.615705 2463 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:59:40.644194 kubelet[2463]: E0129 11:59:40.642430 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:40.646997 containerd[1439]: time="2025-01-29T11:59:40.643018121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k4d7q,Uid:0a18d7af-e059-4392-b34d-01b16c571209,Namespace:calico-system,Attempt:0,}" Jan 29 11:59:40.680375 containerd[1439]: time="2025-01-29T11:59:40.679818340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:40.680375 containerd[1439]: time="2025-01-29T11:59:40.679877184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:40.680375 containerd[1439]: time="2025-01-29T11:59:40.679888064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:40.680375 containerd[1439]: time="2025-01-29T11:59:40.679960629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:40.683208 containerd[1439]: time="2025-01-29T11:59:40.683089334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:40.683208 containerd[1439]: time="2025-01-29T11:59:40.683188660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:40.683338 containerd[1439]: time="2025-01-29T11:59:40.683204901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:40.684786 containerd[1439]: time="2025-01-29T11:59:40.684352569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:40.709743 systemd[1]: Started cri-containerd-dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14.scope - libcontainer container dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14. Jan 29 11:59:40.714940 systemd[1]: Started cri-containerd-05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3.scope - libcontainer container 05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3. Jan 29 11:59:40.740745 containerd[1439]: time="2025-01-29T11:59:40.740704745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k4d7q,Uid:0a18d7af-e059-4392-b34d-01b16c571209,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\"" Jan 29 11:59:40.755869 kubelet[2463]: E0129 11:59:40.755745 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:40.758942 containerd[1439]: time="2025-01-29T11:59:40.758902903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5477c76fd7-vbf96,Uid:afb8489e-728e-4846-bbac-a9a60ea63ce4,Namespace:calico-system,Attempt:0,} returns sandbox id \"05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3\"" Jan 29 11:59:40.759495 kubelet[2463]: E0129 11:59:40.759476 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:40.767941 containerd[1439]: time="2025-01-29T11:59:40.767820151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:59:41.757076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649198957.mount: Deactivated successfully. 
Jan 29 11:59:41.791672 kubelet[2463]: E0129 11:59:41.791625 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99dlc" podUID="9a967c1a-dfaf-44db-9a3a-468c81bc933d" Jan 29 11:59:41.823477 containerd[1439]: time="2025-01-29T11:59:41.823266799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:41.824100 containerd[1439]: time="2025-01-29T11:59:41.824033522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 29 11:59:41.825099 containerd[1439]: time="2025-01-29T11:59:41.825032498Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:41.831434 containerd[1439]: time="2025-01-29T11:59:41.829740924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:41.831434 containerd[1439]: time="2025-01-29T11:59:41.830405601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.062488205s" Jan 29 11:59:41.831434 containerd[1439]: time="2025-01-29T11:59:41.830439443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 11:59:41.831767 containerd[1439]: time="2025-01-29T11:59:41.831737236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:59:41.835955 containerd[1439]: time="2025-01-29T11:59:41.835901551Z" level=info msg="CreateContainer within sandbox \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:59:41.850506 containerd[1439]: time="2025-01-29T11:59:41.850462851Z" level=info msg="CreateContainer within sandbox \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\"" Jan 29 11:59:41.850919 containerd[1439]: time="2025-01-29T11:59:41.850851633Z" level=info msg="StartContainer for \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\"" Jan 29 11:59:41.887747 systemd[1]: Started cri-containerd-332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064.scope - libcontainer container 332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064. 
Jan 29 11:59:41.919867 containerd[1439]: time="2025-01-29T11:59:41.919817038Z" level=info msg="StartContainer for \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\" returns successfully" Jan 29 11:59:41.958821 systemd[1]: cri-containerd-332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064.scope: Deactivated successfully. Jan 29 11:59:42.006346 containerd[1439]: time="2025-01-29T11:59:41.996426354Z" level=info msg="shim disconnected" id=332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064 namespace=k8s.io Jan 29 11:59:42.006346 containerd[1439]: time="2025-01-29T11:59:42.006335377Z" level=warning msg="cleaning up after shim disconnected" id=332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064 namespace=k8s.io Jan 29 11:59:42.006346 containerd[1439]: time="2025-01-29T11:59:42.006348938Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:59:42.842521 kubelet[2463]: E0129 11:59:42.842476 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:43.259021 containerd[1439]: time="2025-01-29T11:59:43.258898800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:43.260204 containerd[1439]: time="2025-01-29T11:59:43.260175106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Jan 29 11:59:43.261328 containerd[1439]: time="2025-01-29T11:59:43.261282522Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:43.263665 containerd[1439]: time="2025-01-29T11:59:43.263628882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:43.265124 containerd[1439]: time="2025-01-29T11:59:43.265081076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.433310879s" Jan 29 11:59:43.265179 containerd[1439]: time="2025-01-29T11:59:43.265127359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 29 11:59:43.276570 containerd[1439]: time="2025-01-29T11:59:43.276462858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:59:43.284835 containerd[1439]: time="2025-01-29T11:59:43.284793124Z" level=info msg="CreateContainer within sandbox \"05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:59:43.295594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621214780.mount: Deactivated successfully. 
Jan 29 11:59:43.309378 containerd[1439]: time="2025-01-29T11:59:43.309217693Z" level=info msg="CreateContainer within sandbox \"05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\"" Jan 29 11:59:43.309922 containerd[1439]: time="2025-01-29T11:59:43.309818243Z" level=info msg="StartContainer for \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\"" Jan 29 11:59:43.342755 systemd[1]: Started cri-containerd-b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda.scope - libcontainer container b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda. Jan 29 11:59:43.394257 containerd[1439]: time="2025-01-29T11:59:43.394210598Z" level=info msg="StartContainer for \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\" returns successfully" Jan 29 11:59:43.793944 kubelet[2463]: E0129 11:59:43.792063 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99dlc" podUID="9a967c1a-dfaf-44db-9a3a-468c81bc933d" Jan 29 11:59:43.843655 kubelet[2463]: E0129 11:59:43.843286 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:43.856020 kubelet[2463]: I0129 11:59:43.855951 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5477c76fd7-vbf96" podStartSLOduration=1.356780771 podStartE2EDuration="3.855932683s" podCreationTimestamp="2025-01-29 11:59:40 +0000 UTC" firstStartedPulling="2025-01-29 11:59:40.766686483 +0000 UTC m=+14.054655792" lastFinishedPulling="2025-01-29 11:59:43.265838395 +0000 UTC m=+16.553807704" observedRunningTime="2025-01-29 11:59:43.855384695 +0000 UTC m=+17.143353964" watchObservedRunningTime="2025-01-29 11:59:43.855932683 +0000 UTC m=+17.143901952" Jan 29 11:59:44.851084 kubelet[2463]: I0129 11:59:44.851012 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:59:44.852316 kubelet[2463]: E0129 11:59:44.851570 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:45.791462 kubelet[2463]: E0129 11:59:45.791400 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99dlc" podUID="9a967c1a-dfaf-44db-9a3a-468c81bc933d" Jan 29 11:59:47.028127 containerd[1439]: time="2025-01-29T11:59:47.028069014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:47.029144 containerd[1439]: time="2025-01-29T11:59:47.029117218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 11:59:47.029837 containerd[1439]: time="2025-01-29T11:59:47.029814848Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:47.031817 containerd[1439]: time="2025-01-29T11:59:47.031743770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:47.032684 containerd[1439]: time="2025-01-29T11:59:47.032644608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.756123107s" Jan 29 11:59:47.032732 containerd[1439]: time="2025-01-29T11:59:47.032682210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 11:59:47.039312 containerd[1439]: time="2025-01-29T11:59:47.039271970Z" level=info msg="CreateContainer within sandbox \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:59:47.051698 containerd[1439]: time="2025-01-29T11:59:47.051647976Z" level=info msg="CreateContainer within sandbox \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\"" Jan 29 11:59:47.052076 containerd[1439]: time="2025-01-29T11:59:47.052049073Z" level=info msg="StartContainer for \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\"" Jan 29 11:59:47.097728 systemd[1]: Started cri-containerd-f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a.scope - libcontainer container f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a. Jan 29 11:59:47.122661 containerd[1439]: time="2025-01-29T11:59:47.122623154Z" level=info msg="StartContainer for \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\" returns successfully" Jan 29 11:59:47.641910 systemd[1]: cri-containerd-f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a.scope: Deactivated successfully. Jan 29 11:59:47.658052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a-rootfs.mount: Deactivated successfully. Jan 29 11:59:47.688757 kubelet[2463]: I0129 11:59:47.688695 2463 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:59:47.751696 systemd[1]: Created slice kubepods-burstable-podd12f38cd_3b79_4618_9f94_7138faae5b37.slice - libcontainer container kubepods-burstable-podd12f38cd_3b79_4618_9f94_7138faae5b37.slice. Jan 29 11:59:47.765623 systemd[1]: Created slice kubepods-besteffort-pod77595a50_0ab0_4356_96db_16172edd087c.slice - libcontainer container kubepods-besteffort-pod77595a50_0ab0_4356_96db_16172edd087c.slice. Jan 29 11:59:47.770240 systemd[1]: Created slice kubepods-besteffort-pod5a466b9c_61da_4d18_9a7f_3570019f9cfb.slice - libcontainer container kubepods-besteffort-pod5a466b9c_61da_4d18_9a7f_3570019f9cfb.slice. Jan 29 11:59:47.776660 systemd[1]: Created slice kubepods-burstable-pod4ff45c42_a0b4_469d_ad79_6fe025edff50.slice - libcontainer container kubepods-burstable-pod4ff45c42_a0b4_469d_ad79_6fe025edff50.slice. 
Jan 29 11:59:47.782640 systemd[1]: Created slice kubepods-besteffort-pod58dd87da_9a85_4796_b71a_4cb357793754.slice - libcontainer container kubepods-besteffort-pod58dd87da_9a85_4796_b71a_4cb357793754.slice. Jan 29 11:59:47.786365 containerd[1439]: time="2025-01-29T11:59:47.786227806Z" level=info msg="shim disconnected" id=f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a namespace=k8s.io Jan 29 11:59:47.786484 containerd[1439]: time="2025-01-29T11:59:47.786389773Z" level=warning msg="cleaning up after shim disconnected" id=f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a namespace=k8s.io Jan 29 11:59:47.786484 containerd[1439]: time="2025-01-29T11:59:47.786401454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:59:47.798672 systemd[1]: Created slice kubepods-besteffort-pod9a967c1a_dfaf_44db_9a3a_468c81bc933d.slice - libcontainer container kubepods-besteffort-pod9a967c1a_dfaf_44db_9a3a_468c81bc933d.slice. Jan 29 11:59:47.802473 containerd[1439]: time="2025-01-29T11:59:47.802430335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-99dlc,Uid:9a967c1a-dfaf-44db-9a3a-468c81bc933d,Namespace:calico-system,Attempt:0,}" Jan 29 11:59:47.868419 kubelet[2463]: I0129 11:59:47.861264 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a466b9c-61da-4d18-9a7f-3570019f9cfb-tigera-ca-bundle\") pod \"calico-kube-controllers-59fdc4b9d-7lzvk\" (UID: \"5a466b9c-61da-4d18-9a7f-3570019f9cfb\") " pod="calico-system/calico-kube-controllers-59fdc4b9d-7lzvk" Jan 29 11:59:47.868419 kubelet[2463]: I0129 11:59:47.861302 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jsz\" (UniqueName: \"kubernetes.io/projected/4ff45c42-a0b4-469d-ad79-6fe025edff50-kube-api-access-57jsz\") pod \"coredns-6f6b679f8f-hmlcz\" (UID: \"4ff45c42-a0b4-469d-ad79-6fe025edff50\") " pod="kube-system/coredns-6f6b679f8f-hmlcz" Jan 29 11:59:47.868419 kubelet[2463]: I0129 11:59:47.861324 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d12f38cd-3b79-4618-9f94-7138faae5b37-config-volume\") pod \"coredns-6f6b679f8f-86qtp\" (UID: \"d12f38cd-3b79-4618-9f94-7138faae5b37\") " pod="kube-system/coredns-6f6b679f8f-86qtp" Jan 29 11:59:47.868419 kubelet[2463]: I0129 11:59:47.861345 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5vzc\" (UniqueName: \"kubernetes.io/projected/77595a50-0ab0-4356-96db-16172edd087c-kube-api-access-r5vzc\") pod \"calico-apiserver-5c5b759c4-5wcw9\" (UID: \"77595a50-0ab0-4356-96db-16172edd087c\") " pod="calico-apiserver/calico-apiserver-5c5b759c4-5wcw9" Jan 29 11:59:47.868419 kubelet[2463]: I0129 11:59:47.861373 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58dd87da-9a85-4796-b71a-4cb357793754-calico-apiserver-certs\") pod \"calico-apiserver-5c5b759c4-wj8ct\" (UID: \"58dd87da-9a85-4796-b71a-4cb357793754\") " pod="calico-apiserver/calico-apiserver-5c5b759c4-wj8ct" Jan 29 11:59:47.869259 kubelet[2463]: I0129 11:59:47.861388 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdlj\" (UniqueName: 
\"kubernetes.io/projected/58dd87da-9a85-4796-b71a-4cb357793754-kube-api-access-dhdlj\") pod \"calico-apiserver-5c5b759c4-wj8ct\" (UID: \"58dd87da-9a85-4796-b71a-4cb357793754\") " pod="calico-apiserver/calico-apiserver-5c5b759c4-wj8ct" Jan 29 11:59:47.869259 kubelet[2463]: I0129 11:59:47.861404 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7hmf\" (UniqueName: \"kubernetes.io/projected/5a466b9c-61da-4d18-9a7f-3570019f9cfb-kube-api-access-x7hmf\") pod \"calico-kube-controllers-59fdc4b9d-7lzvk\" (UID: \"5a466b9c-61da-4d18-9a7f-3570019f9cfb\") " pod="calico-system/calico-kube-controllers-59fdc4b9d-7lzvk" Jan 29 11:59:47.869259 kubelet[2463]: I0129 11:59:47.861420 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ff45c42-a0b4-469d-ad79-6fe025edff50-config-volume\") pod \"coredns-6f6b679f8f-hmlcz\" (UID: \"4ff45c42-a0b4-469d-ad79-6fe025edff50\") " pod="kube-system/coredns-6f6b679f8f-hmlcz" Jan 29 11:59:47.869259 kubelet[2463]: I0129 11:59:47.861439 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77595a50-0ab0-4356-96db-16172edd087c-calico-apiserver-certs\") pod \"calico-apiserver-5c5b759c4-5wcw9\" (UID: \"77595a50-0ab0-4356-96db-16172edd087c\") " pod="calico-apiserver/calico-apiserver-5c5b759c4-5wcw9" Jan 29 11:59:47.869259 kubelet[2463]: I0129 11:59:47.861469 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dtd\" (UniqueName: \"kubernetes.io/projected/d12f38cd-3b79-4618-9f94-7138faae5b37-kube-api-access-v2dtd\") pod \"coredns-6f6b679f8f-86qtp\" (UID: \"d12f38cd-3b79-4618-9f94-7138faae5b37\") " pod="kube-system/coredns-6f6b679f8f-86qtp" Jan 29 11:59:47.869588 kubelet[2463]: E0129 11:59:47.866848 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:47.873578 containerd[1439]: time="2025-01-29T11:59:47.872151659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:59:47.995497 containerd[1439]: time="2025-01-29T11:59:47.995362418Z" level=error msg="Failed to destroy network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:47.995733 containerd[1439]: time="2025-01-29T11:59:47.995680351Z" level=error msg="encountered an error cleaning up failed sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:47.995773 containerd[1439]: time="2025-01-29T11:59:47.995743794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-99dlc,Uid:9a967c1a-dfaf-44db-9a3a-468c81bc933d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:47.995996 kubelet[2463]: E0129 11:59:47.995962 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:47.996048 kubelet[2463]: E0129 11:59:47.996023 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-99dlc" Jan 29 11:59:47.996048 kubelet[2463]: E0129 11:59:47.996042 2463 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-99dlc" Jan 29 11:59:47.996100 kubelet[2463]: E0129 11:59:47.996075 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-99dlc_calico-system(9a967c1a-dfaf-44db-9a3a-468c81bc933d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-99dlc_calico-system(9a967c1a-dfaf-44db-9a3a-468c81bc933d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-99dlc" podUID="9a967c1a-dfaf-44db-9a3a-468c81bc933d" Jan 29 11:59:48.053518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34-shm.mount: Deactivated successfully. 
Jan 29 11:59:48.059070 kubelet[2463]: E0129 11:59:48.058993 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:48.059943 containerd[1439]: time="2025-01-29T11:59:48.059526240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-86qtp,Uid:d12f38cd-3b79-4618-9f94-7138faae5b37,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:48.069343 containerd[1439]: time="2025-01-29T11:59:48.069318158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5b759c4-5wcw9,Uid:77595a50-0ab0-4356-96db-16172edd087c,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:59:48.074693 containerd[1439]: time="2025-01-29T11:59:48.074661095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdc4b9d-7lzvk,Uid:5a466b9c-61da-4d18-9a7f-3570019f9cfb,Namespace:calico-system,Attempt:0,}" Jan 29 11:59:48.081629 kubelet[2463]: E0129 11:59:48.081167 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:48.082807 containerd[1439]: time="2025-01-29T11:59:48.082777306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hmlcz,Uid:4ff45c42-a0b4-469d-ad79-6fe025edff50,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:48.086120 containerd[1439]: time="2025-01-29T11:59:48.086088640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5b759c4-wj8ct,Uid:58dd87da-9a85-4796-b71a-4cb357793754,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:59:48.128313 containerd[1439]: time="2025-01-29T11:59:48.128202754Z" level=error msg="Failed to destroy network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.130319 containerd[1439]: time="2025-01-29T11:59:48.129709255Z" level=error msg="encountered an error cleaning up failed sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.130319 containerd[1439]: time="2025-01-29T11:59:48.129773298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-86qtp,Uid:d12f38cd-3b79-4618-9f94-7138faae5b37,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.130430 kubelet[2463]: E0129 11:59:48.129972 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.130430 
kubelet[2463]: E0129 11:59:48.130027 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-86qtp" Jan 29 11:59:48.130430 kubelet[2463]: E0129 11:59:48.130049 2463 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-86qtp" Jan 29 11:59:48.130525 kubelet[2463]: E0129 11:59:48.130103 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-86qtp_kube-system(d12f38cd-3b79-4618-9f94-7138faae5b37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-86qtp_kube-system(d12f38cd-3b79-4618-9f94-7138faae5b37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-86qtp" podUID="d12f38cd-3b79-4618-9f94-7138faae5b37" Jan 29 11:59:48.169638 containerd[1439]: time="2025-01-29T11:59:48.169583718Z" level=error msg="Failed to destroy network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.169957 containerd[1439]: time="2025-01-29T11:59:48.169931732Z" level=error msg="encountered an error cleaning up failed sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.170004 containerd[1439]: time="2025-01-29T11:59:48.169981294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5b759c4-5wcw9,Uid:77595a50-0ab0-4356-96db-16172edd087c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.171206 kubelet[2463]: E0129 11:59:48.170222 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 29 11:59:48.171206 kubelet[2463]: E0129 11:59:48.170286 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5b759c4-5wcw9" Jan 29 11:59:48.171206 kubelet[2463]: E0129 11:59:48.170310 2463 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5b759c4-5wcw9" Jan 29 11:59:48.171340 kubelet[2463]: E0129 11:59:48.170352 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c5b759c4-5wcw9_calico-apiserver(77595a50-0ab0-4356-96db-16172edd087c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c5b759c4-5wcw9_calico-apiserver(77595a50-0ab0-4356-96db-16172edd087c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5b759c4-5wcw9" podUID="77595a50-0ab0-4356-96db-16172edd087c" Jan 29 11:59:48.180719 containerd[1439]: time="2025-01-29T11:59:48.180677369Z" level=error msg="Failed to destroy network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.181012 containerd[1439]: time="2025-01-29T11:59:48.180985262Z" level=error msg="encountered an error cleaning up failed sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.181050 containerd[1439]: time="2025-01-29T11:59:48.181033263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdc4b9d-7lzvk,Uid:5a466b9c-61da-4d18-9a7f-3570019f9cfb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.181256 kubelet[2463]: E0129 11:59:48.181220 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.181315 kubelet[2463]: E0129 11:59:48.181275 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59fdc4b9d-7lzvk" Jan 29 11:59:48.181315 kubelet[2463]: E0129 11:59:48.181299 2463 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59fdc4b9d-7lzvk" Jan 29 11:59:48.181366 kubelet[2463]: E0129 11:59:48.181338 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59fdc4b9d-7lzvk_calico-system(5a466b9c-61da-4d18-9a7f-3570019f9cfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59fdc4b9d-7lzvk_calico-system(5a466b9c-61da-4d18-9a7f-3570019f9cfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59fdc4b9d-7lzvk" podUID="5a466b9c-61da-4d18-9a7f-3570019f9cfb" Jan 29 11:59:48.181585 containerd[1439]: time="2025-01-29T11:59:48.181497002Z" level=error msg="Failed to destroy network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.181901 containerd[1439]: time="2025-01-29T11:59:48.181812335Z" level=error msg="encountered an error cleaning up failed sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.181901 containerd[1439]: time="2025-01-29T11:59:48.181861297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5b759c4-wj8ct,Uid:58dd87da-9a85-4796-b71a-4cb357793754,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.183037 kubelet[2463]: E0129 11:59:48.182914 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.183037 kubelet[2463]: E0129 11:59:48.182952 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5b759c4-wj8ct" Jan 29 11:59:48.183037 kubelet[2463]: E0129 11:59:48.182968 2463 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5b759c4-wj8ct" Jan 29 11:59:48.183167 kubelet[2463]: E0129 11:59:48.183002 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c5b759c4-wj8ct_calico-apiserver(58dd87da-9a85-4796-b71a-4cb357793754)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c5b759c4-wj8ct_calico-apiserver(58dd87da-9a85-4796-b71a-4cb357793754)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5b759c4-wj8ct" podUID="58dd87da-9a85-4796-b71a-4cb357793754" Jan 29 11:59:48.187332 containerd[1439]: time="2025-01-29T11:59:48.187294198Z" level=error msg="Failed to destroy network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.187632 containerd[1439]: time="2025-01-29T11:59:48.187599491Z" level=error msg="encountered an error cleaning up failed sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.187680 containerd[1439]: time="2025-01-29T11:59:48.187651853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hmlcz,Uid:4ff45c42-a0b4-469d-ad79-6fe025edff50,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 
11:59:48.187938 kubelet[2463]: E0129 11:59:48.187791 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.187938 kubelet[2463]: E0129 11:59:48.187843 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hmlcz" Jan 29 11:59:48.187938 kubelet[2463]: E0129 11:59:48.187867 2463 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hmlcz" Jan 29 11:59:48.188030 kubelet[2463]: E0129 11:59:48.187900 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hmlcz_kube-system(4ff45c42-a0b4-469d-ad79-6fe025edff50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hmlcz_kube-system(4ff45c42-a0b4-469d-ad79-6fe025edff50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hmlcz" podUID="4ff45c42-a0b4-469d-ad79-6fe025edff50" Jan 29 11:59:48.869319 kubelet[2463]: I0129 11:59:48.868903 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 11:59:48.869669 containerd[1439]: time="2025-01-29T11:59:48.869418152Z" level=info msg="StopPodSandbox for \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\"" Jan 29 11:59:48.869669 containerd[1439]: time="2025-01-29T11:59:48.869591559Z" level=info msg="Ensure that sandbox 1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89 in task-service has been cleanup successfully" Jan 29 11:59:48.871348 kubelet[2463]: I0129 11:59:48.871325 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 11:59:48.873295 kubelet[2463]: I0129 11:59:48.873258 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 11:59:48.883204 containerd[1439]: time="2025-01-29T11:59:48.883154111Z" level=info msg="StopPodSandbox for \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\"" Jan 29 11:59:48.883454 containerd[1439]: time="2025-01-29T11:59:48.883410962Z" level=info 
msg="Ensure that sandbox 6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633 in task-service has been cleanup successfully" Jan 29 11:59:48.885161 containerd[1439]: time="2025-01-29T11:59:48.883653611Z" level=info msg="StopPodSandbox for \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\"" Jan 29 11:59:48.885161 containerd[1439]: time="2025-01-29T11:59:48.883775576Z" level=info msg="Ensure that sandbox 7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34 in task-service has been cleanup successfully" Jan 29 11:59:48.887345 kubelet[2463]: I0129 11:59:48.887314 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 11:59:48.889046 containerd[1439]: time="2025-01-29T11:59:48.889016190Z" level=info msg="StopPodSandbox for \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\"" Jan 29 11:59:48.889240 containerd[1439]: time="2025-01-29T11:59:48.889216438Z" level=info msg="Ensure that sandbox 9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968 in task-service has been cleanup successfully" Jan 29 11:59:48.892369 kubelet[2463]: I0129 11:59:48.890822 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 11:59:48.892429 containerd[1439]: time="2025-01-29T11:59:48.891598575Z" level=info msg="StopPodSandbox for \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\"" Jan 29 11:59:48.892429 containerd[1439]: time="2025-01-29T11:59:48.891737140Z" level=info msg="Ensure that sandbox aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1 in task-service has been cleanup successfully" Jan 29 11:59:48.894120 kubelet[2463]: I0129 11:59:48.894062 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 11:59:48.895193 containerd[1439]: time="2025-01-29T11:59:48.895158680Z" level=info msg="StopPodSandbox for \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\"" Jan 29 11:59:48.895314 containerd[1439]: time="2025-01-29T11:59:48.895290965Z" level=info msg="Ensure that sandbox 9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6 in task-service has been cleanup successfully" Jan 29 11:59:48.908760 containerd[1439]: time="2025-01-29T11:59:48.908531544Z" level=error msg="StopPodSandbox for \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\" failed" error="failed to destroy network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.908979 kubelet[2463]: E0129 11:59:48.908934 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 11:59:48.909051 kubelet[2463]: E0129 11:59:48.909003 2463 kuberuntime_manager.go:1477] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89"} Jan 29 11:59:48.909090 kubelet[2463]: E0129 11:59:48.909067 2463 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a466b9c-61da-4d18-9a7f-3570019f9cfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:59:48.909217 kubelet[2463]: E0129 11:59:48.909095 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a466b9c-61da-4d18-9a7f-3570019f9cfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59fdc4b9d-7lzvk" podUID="5a466b9c-61da-4d18-9a7f-3570019f9cfb" Jan 29 11:59:48.921439 containerd[1439]: time="2025-01-29T11:59:48.921384067Z" level=error msg="StopPodSandbox for \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\" failed" error="failed to destroy network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.921732 kubelet[2463]: E0129 11:59:48.921634 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 11:59:48.921732 kubelet[2463]: E0129 11:59:48.921686 2463 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968"} Jan 29 11:59:48.921732 kubelet[2463]: E0129 11:59:48.921723 2463 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ff45c42-a0b4-469d-ad79-6fe025edff50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:59:48.921871 kubelet[2463]: E0129 11:59:48.921744 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ff45c42-a0b4-469d-ad79-6fe025edff50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hmlcz" podUID="4ff45c42-a0b4-469d-ad79-6fe025edff50" Jan 29 11:59:48.930285 containerd[1439]: time="2025-01-29T11:59:48.930121782Z" level=error msg="StopPodSandbox for \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\" failed" error="failed to destroy network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.930683 kubelet[2463]: E0129 11:59:48.930624 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 11:59:48.930756 kubelet[2463]: E0129 11:59:48.930693 2463 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633"} Jan 29 11:59:48.930783 kubelet[2463]: E0129 11:59:48.930751 2463 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58dd87da-9a85-4796-b71a-4cb357793754\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:59:48.930838 kubelet[2463]: E0129 11:59:48.930775 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58dd87da-9a85-4796-b71a-4cb357793754\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5b759c4-wj8ct" podUID="58dd87da-9a85-4796-b71a-4cb357793754" Jan 29 11:59:48.936083 containerd[1439]: time="2025-01-29T11:59:48.935984701Z" level=error msg="StopPodSandbox for \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\" failed" error="failed to destroy network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.936226 kubelet[2463]: E0129 11:59:48.936186 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 11:59:48.936280 kubelet[2463]: E0129 11:59:48.936228 2463 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34"} Jan 29 11:59:48.936280 kubelet[2463]: E0129 11:59:48.936254 2463 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a967c1a-dfaf-44db-9a3a-468c81bc933d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:59:48.936351 kubelet[2463]: E0129 11:59:48.936277 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a967c1a-dfaf-44db-9a3a-468c81bc933d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-99dlc" podUID="9a967c1a-dfaf-44db-9a3a-468c81bc933d" Jan 29 11:59:48.940118 containerd[1439]: time="2025-01-29T11:59:48.940082667Z" level=error msg="StopPodSandbox for \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\" failed" error="failed to destroy network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.940360 kubelet[2463]: E0129 11:59:48.940225 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 11:59:48.940360 kubelet[2463]: E0129 11:59:48.940257 2463 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6"} Jan 29 11:59:48.940360 kubelet[2463]: E0129 11:59:48.940281 2463 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d12f38cd-3b79-4618-9f94-7138faae5b37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:59:48.940360 kubelet[2463]: E0129 11:59:48.940299 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"d12f38cd-3b79-4618-9f94-7138faae5b37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-86qtp" podUID="d12f38cd-3b79-4618-9f94-7138faae5b37" Jan 29 11:59:48.944261 containerd[1439]: time="2025-01-29T11:59:48.944189955Z" level=error msg="StopPodSandbox for \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\" failed" error="failed to destroy network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:59:48.944531 kubelet[2463]: E0129 11:59:48.944487 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 11:59:48.944592 kubelet[2463]: E0129 11:59:48.944533 2463 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1"} Jan 29 11:59:48.944592 kubelet[2463]: E0129 11:59:48.944566 2463 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77595a50-0ab0-4356-96db-16172edd087c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:59:48.944674 kubelet[2463]: E0129 11:59:48.944588 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77595a50-0ab0-4356-96db-16172edd087c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5b759c4-5wcw9" podUID="77595a50-0ab0-4356-96db-16172edd087c" Jan 29 11:59:49.048522 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1-shm.mount: Deactivated successfully. Jan 29 11:59:49.048623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6-shm.mount: Deactivated successfully. 
Jan 29 11:59:50.624213 kubelet[2463]: I0129 11:59:50.624174 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:59:50.625323 kubelet[2463]: E0129 11:59:50.624528 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:50.898738 kubelet[2463]: E0129 11:59:50.898650 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:52.086322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081376763.mount: Deactivated successfully. Jan 29 11:59:52.160061 containerd[1439]: time="2025-01-29T11:59:52.160008674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:52.161205 containerd[1439]: time="2025-01-29T11:59:52.161160194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 11:59:52.162235 containerd[1439]: time="2025-01-29T11:59:52.162181189Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:52.163988 containerd[1439]: time="2025-01-29T11:59:52.163944850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:52.164932 containerd[1439]: time="2025-01-29T11:59:52.164425387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.292141081s" Jan 29 11:59:52.164932 containerd[1439]: time="2025-01-29T11:59:52.164458468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 11:59:52.174300 containerd[1439]: time="2025-01-29T11:59:52.174272566Z" level=info msg="CreateContainer within sandbox \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:59:52.198008 containerd[1439]: time="2025-01-29T11:59:52.197963502Z" level=info msg="CreateContainer within sandbox \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\"" Jan 29 11:59:52.198581 containerd[1439]: time="2025-01-29T11:59:52.198450679Z" level=info msg="StartContainer for \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\"" Jan 29 11:59:52.252737 systemd[1]: Started cri-containerd-cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762.scope - libcontainer container cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762. 
Jan 29 11:59:52.335902 containerd[1439]: time="2025-01-29T11:59:52.335754689Z" level=info msg="StartContainer for \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\" returns successfully" Jan 29 11:59:52.439741 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:59:52.439871 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:59:52.904739 kubelet[2463]: E0129 11:59:52.904699 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:53.889162 kernel: bpftool[3771]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:59:53.906253 kubelet[2463]: E0129 11:59:53.906208 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:54.057925 systemd-networkd[1379]: vxlan.calico: Link UP Jan 29 11:59:54.057935 systemd-networkd[1379]: vxlan.calico: Gained carrier Jan 29 11:59:54.918961 kubelet[2463]: E0129 11:59:54.918929 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:59:56.073809 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Jan 29 11:59:56.476143 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:39294.service - OpenSSH per-connection server daemon (10.0.0.1:39294). Jan 29 11:59:56.520102 sshd[3896]: Accepted publickey for core from 10.0.0.1 port 39294 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:59:56.521418 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:56.524999 systemd-logind[1420]: New session 8 of user core. Jan 29 11:59:56.534715 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:59:56.685606 sshd[3896]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:56.688047 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:39294.service: Deactivated successfully. Jan 29 11:59:56.691267 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:59:56.692745 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:59:56.693936 systemd-logind[1420]: Removed session 8. 
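The recurring "Nameserver limits exceeded" warnings are a separate, benign condition: the host's resolv.conf apparently lists more nameservers than the classic resolver limit of three, so the kubelet keeps only the first three and logs the applied line (1.1.1.1 1.0.0.1 8.8.8.8). A small Go sketch of that truncation follows; the constant, function, and fourth nameserver are assumptions for illustration, not kubelet code.

```go
package main

import "fmt"

// maxNameservers reflects the resolver limit of three that the warning above
// refers to (assumption: illustrative constant, not kubelet's own).
const maxNameservers = 3

// capNameservers keeps the first `limit` servers and reports whether any were
// dropped, which is what produces the "Nameserver limits exceeded" message.
func capNameservers(servers []string, limit int) (applied []string, truncated bool) {
	if len(servers) <= limit {
		return servers, false
	}
	return servers[:limit], true
}

func main() {
	// Only the applied line appears in the log; the fourth entry is a guess.
	hostResolvConf := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, truncated := capNameservers(hostResolvConf, maxNameservers)
	fmt.Println(applied, truncated) // [1.1.1.1 1.0.0.1 8.8.8.8] true
}
```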
Jan 29 11:59:59.792335 containerd[1439]: time="2025-01-29T11:59:59.792229758Z" level=info msg="StopPodSandbox for \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\"" Jan 29 11:59:59.864522 kubelet[2463]: I0129 11:59:59.864463 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k4d7q" podStartSLOduration=8.465271558 podStartE2EDuration="19.86444613s" podCreationTimestamp="2025-01-29 11:59:40 +0000 UTC" firstStartedPulling="2025-01-29 11:59:40.765917918 +0000 UTC m=+14.053887227" lastFinishedPulling="2025-01-29 11:59:52.16509249 +0000 UTC m=+25.453061799" observedRunningTime="2025-01-29 11:59:52.926881735 +0000 UTC m=+26.214851044" watchObservedRunningTime="2025-01-29 11:59:59.86444613 +0000 UTC m=+33.152415439" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.866 [INFO][3935] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.866 [INFO][3935] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" iface="eth0" netns="/var/run/netns/cni-319cb9f7-4f20-0064-b57c-ba013a4af675" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.867 [INFO][3935] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" iface="eth0" netns="/var/run/netns/cni-319cb9f7-4f20-0064-b57c-ba013a4af675" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.868 [INFO][3935] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" iface="eth0" netns="/var/run/netns/cni-319cb9f7-4f20-0064-b57c-ba013a4af675" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.868 [INFO][3935] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.868 [INFO][3935] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.957 [INFO][3944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" HandleID="k8s-pod-network.6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.957 [INFO][3944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.957 [INFO][3944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.966 [WARNING][3944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" HandleID="k8s-pod-network.6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.966 [INFO][3944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" HandleID="k8s-pod-network.6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.968 [INFO][3944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:59:59.971604 containerd[1439]: 2025-01-29 11:59:59.970 [INFO][3935] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 11:59:59.972087 containerd[1439]: time="2025-01-29T11:59:59.971750362Z" level=info msg="TearDown network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\" successfully" Jan 29 11:59:59.972087 containerd[1439]: time="2025-01-29T11:59:59.971777482Z" level=info msg="StopPodSandbox for \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\" returns successfully" Jan 29 11:59:59.972759 containerd[1439]: time="2025-01-29T11:59:59.972730588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5b759c4-wj8ct,Uid:58dd87da-9a85-4796-b71a-4cb357793754,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:59:59.973802 systemd[1]: run-netns-cni\x2d319cb9f7\x2d4f20\x2d0064\x2db57c\x2dba013a4af675.mount: Deactivated successfully. Jan 29 12:00:00.102948 systemd-networkd[1379]: cali795e6eb4b83: Link UP Jan 29 12:00:00.103130 systemd-networkd[1379]: cali795e6eb4b83: Gained carrier Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.029 [INFO][3953] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0 calico-apiserver-5c5b759c4- calico-apiserver 58dd87da-9a85-4796-b71a-4cb357793754 873 0 2025-01-29 11:59:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c5b759c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c5b759c4-wj8ct eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali795e6eb4b83 [] []}} ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-wj8ct" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.030 [INFO][3953] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-wj8ct" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.055 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" 
HandleID="k8s-pod-network.000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.066 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" HandleID="k8s-pod-network.000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000315090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c5b759c4-wj8ct", "timestamp":"2025-01-29 12:00:00.055614799 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.066 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.066 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.066 [INFO][3966] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.068 [INFO][3966] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" host="localhost" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.074 [INFO][3966] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.079 [INFO][3966] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.080 [INFO][3966] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.082 [INFO][3966] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.082 [INFO][3966] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" host="localhost" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.084 [INFO][3966] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0 Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.088 [INFO][3966] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" host="localhost" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.096 [INFO][3966] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" host="localhost" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.096 [INFO][3966] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" host="localhost" Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.096 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:00.128585 containerd[1439]: 2025-01-29 12:00:00.096 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" HandleID="k8s-pod-network.000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:00.129278 containerd[1439]: 2025-01-29 12:00:00.098 [INFO][3953] cni-plugin/k8s.go 386: Populated endpoint ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-wj8ct" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0", GenerateName:"calico-apiserver-5c5b759c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"58dd87da-9a85-4796-b71a-4cb357793754", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5b759c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c5b759c4-wj8ct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali795e6eb4b83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:00.129278 containerd[1439]: 2025-01-29 12:00:00.099 [INFO][3953] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-wj8ct" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:00.129278 containerd[1439]: 2025-01-29 12:00:00.099 [INFO][3953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali795e6eb4b83 ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-wj8ct" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:00.129278 containerd[1439]: 2025-01-29 12:00:00.102 [INFO][3953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-wj8ct" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 
29 12:00:00.129278 containerd[1439]: 2025-01-29 12:00:00.104 [INFO][3953] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-wj8ct" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0", GenerateName:"calico-apiserver-5c5b759c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"58dd87da-9a85-4796-b71a-4cb357793754", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5b759c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0", Pod:"calico-apiserver-5c5b759c4-wj8ct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali795e6eb4b83", MAC:"d6:f7:68:02:f5:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:00.129278 containerd[1439]: 2025-01-29 12:00:00.126 [INFO][3953] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-wj8ct" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:00.159739 containerd[1439]: time="2025-01-29T12:00:00.159431450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:00:00.159739 containerd[1439]: time="2025-01-29T12:00:00.159488011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:00:00.159739 containerd[1439]: time="2025-01-29T12:00:00.159499372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:00.159739 containerd[1439]: time="2025-01-29T12:00:00.159602454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:00.193711 systemd[1]: Started cri-containerd-000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0.scope - libcontainer container 000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0. 
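The ipam lines in the trace above show the standard Calico assignment path: confirm the host's affinity for block 192.168.88.128/26, then claim the first free address in it, which comes out as 192.168.88.129 for this pod. The Go sketch below models only that selection step, assuming 192.168.88.128 was already taken (commonly by the node's own vxlan.calico tunnel address); real Calico IPAM also persists handles and block state in the datastore, which is omitted here.

```go
package main

import (
	"fmt"
	"net/netip"
)

// assignFromBlock claims the first address in the block that is not already
// allocated. Illustrative only; Calico's implementation works against the
// datastore and records an allocation handle per claim.
func assignFromBlock(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, error) {
	for addr := block.Addr(); block.Contains(addr); addr = addr.Next() {
		if !allocated[addr] {
			allocated[addr] = true
			return addr, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", block)
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // affinity shown in the log
	// Assumption: .128 is already in use (typically the node's vxlan.calico
	// tunnel address), so the next claim yields .129 as in the log.
	allocated := map[netip.Addr]bool{netip.MustParseAddr("192.168.88.128"): true}
	ip, err := assignFromBlock(block, allocated)
	if err != nil {
		panic(err)
	}
	fmt.Println("claimed", ip) // claimed 192.168.88.129
}
```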
Jan 29 12:00:00.202893 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:00:00.219759 containerd[1439]: time="2025-01-29T12:00:00.219722452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5b759c4-wj8ct,Uid:58dd87da-9a85-4796-b71a-4cb357793754,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0\"" Jan 29 12:00:00.227317 containerd[1439]: time="2025-01-29T12:00:00.227273248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:00:00.973754 systemd[1]: run-containerd-runc-k8s.io-000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0-runc.ifqoI9.mount: Deactivated successfully. Jan 29 12:00:01.578156 systemd-networkd[1379]: cali795e6eb4b83: Gained IPv6LL Jan 29 12:00:01.698139 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:39298.service - OpenSSH per-connection server daemon (10.0.0.1:39298). Jan 29 12:00:01.741062 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 39298 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:01.742312 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:01.745913 systemd-logind[1420]: New session 9 of user core. Jan 29 12:00:01.756693 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:00:01.792625 containerd[1439]: time="2025-01-29T12:00:01.792392345Z" level=info msg="StopPodSandbox for \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\"" Jan 29 12:00:01.792625 containerd[1439]: time="2025-01-29T12:00:01.792422186Z" level=info msg="StopPodSandbox for \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\"" Jan 29 12:00:01.792900 containerd[1439]: time="2025-01-29T12:00:01.792430466Z" level=info msg="StopPodSandbox for \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\"" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.852 [INFO][4080] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.852 [INFO][4080] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" iface="eth0" netns="/var/run/netns/cni-655bf5cd-f5ea-cfb4-5fe0-d8340446a867" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.853 [INFO][4080] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" iface="eth0" netns="/var/run/netns/cni-655bf5cd-f5ea-cfb4-5fe0-d8340446a867" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.854 [INFO][4080] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" iface="eth0" netns="/var/run/netns/cni-655bf5cd-f5ea-cfb4-5fe0-d8340446a867" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.854 [INFO][4080] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.854 [INFO][4080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.883 [INFO][4109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" HandleID="k8s-pod-network.7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.883 [INFO][4109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.883 [INFO][4109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.898 [WARNING][4109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" HandleID="k8s-pod-network.7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.898 [INFO][4109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" HandleID="k8s-pod-network.7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.899 [INFO][4109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:01.907652 containerd[1439]: 2025-01-29 12:00:01.902 [INFO][4080] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:01.906469 systemd[1]: run-netns-cni\x2d655bf5cd\x2df5ea\x2dcfb4\x2d5fe0\x2dd8340446a867.mount: Deactivated successfully. Jan 29 12:00:01.908839 containerd[1439]: time="2025-01-29T12:00:01.908388059Z" level=info msg="TearDown network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\" successfully" Jan 29 12:00:01.908839 containerd[1439]: time="2025-01-29T12:00:01.908423540Z" level=info msg="StopPodSandbox for \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\" returns successfully" Jan 29 12:00:01.909047 containerd[1439]: time="2025-01-29T12:00:01.909009235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-99dlc,Uid:9a967c1a-dfaf-44db-9a3a-468c81bc933d,Namespace:calico-system,Attempt:1,}" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.858 [INFO][4089] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.858 [INFO][4089] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" iface="eth0" netns="/var/run/netns/cni-14aaae0e-7ff7-c0b1-8661-8f8654cdaacd" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.858 [INFO][4089] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" iface="eth0" netns="/var/run/netns/cni-14aaae0e-7ff7-c0b1-8661-8f8654cdaacd" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.859 [INFO][4089] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" iface="eth0" netns="/var/run/netns/cni-14aaae0e-7ff7-c0b1-8661-8f8654cdaacd" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.859 [INFO][4089] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.859 [INFO][4089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.901 [INFO][4114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" HandleID="k8s-pod-network.9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.901 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.901 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.914 [WARNING][4114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" HandleID="k8s-pod-network.9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.914 [INFO][4114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" HandleID="k8s-pod-network.9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.917 [INFO][4114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:01.921583 containerd[1439]: 2025-01-29 12:00:01.919 [INFO][4089] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:01.923642 containerd[1439]: time="2025-01-29T12:00:01.923188631Z" level=info msg="TearDown network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\" successfully" Jan 29 12:00:01.923642 containerd[1439]: time="2025-01-29T12:00:01.923217032Z" level=info msg="StopPodSandbox for \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\" returns successfully" Jan 29 12:00:01.924006 kubelet[2463]: E0129 12:00:01.923513 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:01.924685 containerd[1439]: time="2025-01-29T12:00:01.924061333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hmlcz,Uid:4ff45c42-a0b4-469d-ad79-6fe025edff50,Namespace:kube-system,Attempt:1,}" Jan 29 12:00:01.926999 systemd[1]: run-netns-cni\x2d14aaae0e\x2d7ff7\x2dc0b1\x2d8661\x2d8f8654cdaacd.mount: Deactivated successfully. Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.878 [INFO][4075] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.878 [INFO][4075] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" iface="eth0" netns="/var/run/netns/cni-752ee91d-7e6e-d6df-a5f4-1b31dd9c3f96" Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.879 [INFO][4075] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" iface="eth0" netns="/var/run/netns/cni-752ee91d-7e6e-d6df-a5f4-1b31dd9c3f96" Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.880 [INFO][4075] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" iface="eth0" netns="/var/run/netns/cni-752ee91d-7e6e-d6df-a5f4-1b31dd9c3f96" Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.880 [INFO][4075] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.880 [INFO][4075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.910 [INFO][4121] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" HandleID="k8s-pod-network.9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.910 [INFO][4121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.917 [INFO][4121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.926 [WARNING][4121] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" HandleID="k8s-pod-network.9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.926 [INFO][4121] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" HandleID="k8s-pod-network.9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.928 [INFO][4121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:01.937641 containerd[1439]: 2025-01-29 12:00:01.930 [INFO][4075] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:01.938120 containerd[1439]: time="2025-01-29T12:00:01.938029284Z" level=info msg="TearDown network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\" successfully" Jan 29 12:00:01.938120 containerd[1439]: time="2025-01-29T12:00:01.938056965Z" level=info msg="StopPodSandbox for \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\" returns successfully" Jan 29 12:00:01.938526 kubelet[2463]: E0129 12:00:01.938298 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:01.939956 containerd[1439]: time="2025-01-29T12:00:01.939706246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-86qtp,Uid:d12f38cd-3b79-4618-9f94-7138faae5b37,Namespace:kube-system,Attempt:1,}" Jan 29 12:00:01.976630 systemd[1]: run-netns-cni\x2d752ee91d\x2d7e6e\x2dd6df\x2da5f4\x2d1b31dd9c3f96.mount: Deactivated successfully. Jan 29 12:00:02.019769 sshd[4028]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:02.023327 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:39298.service: Deactivated successfully. Jan 29 12:00:02.025092 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:00:02.030242 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:00:02.031762 systemd-logind[1420]: Removed session 9. 
Jan 29 12:00:02.091400 systemd-networkd[1379]: cali50ed6c54d90: Link UP Jan 29 12:00:02.091780 systemd-networkd[1379]: cali50ed6c54d90: Gained carrier Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.001 [INFO][4139] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--99dlc-eth0 csi-node-driver- calico-system 9a967c1a-dfaf-44db-9a3a-468c81bc933d 892 0 2025-01-29 11:59:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-99dlc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali50ed6c54d90 [] []}} ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Namespace="calico-system" Pod="csi-node-driver-99dlc" WorkloadEndpoint="localhost-k8s-csi--node--driver--99dlc-" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.002 [INFO][4139] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Namespace="calico-system" Pod="csi-node-driver-99dlc" WorkloadEndpoint="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.048 [INFO][4180] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" HandleID="k8s-pod-network.e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.062 [INFO][4180] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" HandleID="k8s-pod-network.e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003d9030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-99dlc", "timestamp":"2025-01-29 12:00:02.048969556 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.062 [INFO][4180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.063 [INFO][4180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.063 [INFO][4180] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.065 [INFO][4180] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" host="localhost" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.070 [INFO][4180] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.075 [INFO][4180] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.076 [INFO][4180] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.078 [INFO][4180] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.078 [INFO][4180] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" host="localhost" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.079 [INFO][4180] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499 Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.082 [INFO][4180] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" host="localhost" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.087 [INFO][4180] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" host="localhost" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.087 [INFO][4180] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" host="localhost" Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.087 [INFO][4180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:00:02.105332 containerd[1439]: 2025-01-29 12:00:02.087 [INFO][4180] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" HandleID="k8s-pod-network.e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:02.105948 containerd[1439]: 2025-01-29 12:00:02.089 [INFO][4139] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Namespace="calico-system" Pod="csi-node-driver-99dlc" WorkloadEndpoint="localhost-k8s-csi--node--driver--99dlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--99dlc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a967c1a-dfaf-44db-9a3a-468c81bc933d", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-99dlc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50ed6c54d90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:02.105948 containerd[1439]: 2025-01-29 12:00:02.089 [INFO][4139] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Namespace="calico-system" Pod="csi-node-driver-99dlc" WorkloadEndpoint="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:02.105948 containerd[1439]: 2025-01-29 12:00:02.089 [INFO][4139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50ed6c54d90 ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Namespace="calico-system" Pod="csi-node-driver-99dlc" WorkloadEndpoint="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:02.105948 containerd[1439]: 2025-01-29 12:00:02.092 [INFO][4139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Namespace="calico-system" Pod="csi-node-driver-99dlc" WorkloadEndpoint="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:02.105948 containerd[1439]: 2025-01-29 12:00:02.092 [INFO][4139] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Namespace="calico-system" Pod="csi-node-driver-99dlc" WorkloadEndpoint="localhost-k8s-csi--node--driver--99dlc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--99dlc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a967c1a-dfaf-44db-9a3a-468c81bc933d", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499", Pod:"csi-node-driver-99dlc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50ed6c54d90", MAC:"d2:23:12:ac:2f:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:02.105948 containerd[1439]: 2025-01-29 12:00:02.103 [INFO][4139] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499" Namespace="calico-system" Pod="csi-node-driver-99dlc" WorkloadEndpoint="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:02.121969 containerd[1439]: time="2025-01-29T12:00:02.121379602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:00:02.121969 containerd[1439]: time="2025-01-29T12:00:02.121772491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:00:02.121969 containerd[1439]: time="2025-01-29T12:00:02.121794612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:02.121969 containerd[1439]: time="2025-01-29T12:00:02.121931695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:02.133601 systemd[1]: run-containerd-runc-k8s.io-e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499-runc.sWphuY.mount: Deactivated successfully. Jan 29 12:00:02.150786 systemd[1]: Started cri-containerd-e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499.scope - libcontainer container e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499. 
Jan 29 12:00:02.158862 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:00:02.170477 containerd[1439]: time="2025-01-29T12:00:02.170442438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-99dlc,Uid:9a967c1a-dfaf-44db-9a3a-468c81bc933d,Namespace:calico-system,Attempt:1,} returns sandbox id \"e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499\"" Jan 29 12:00:02.194896 systemd-networkd[1379]: cali35220b040c6: Link UP Jan 29 12:00:02.195499 systemd-networkd[1379]: cali35220b040c6: Gained carrier Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.004 [INFO][4145] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0 coredns-6f6b679f8f- kube-system 4ff45c42-a0b4-469d-ad79-6fe025edff50 893 0 2025-01-29 11:59:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hmlcz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35220b040c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-hmlcz" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hmlcz-" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.005 [INFO][4145] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-hmlcz" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.049 [INFO][4175] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" HandleID="k8s-pod-network.99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.067 [INFO][4175] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" HandleID="k8s-pod-network.99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000279a90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hmlcz", "timestamp":"2025-01-29 12:00:02.049862618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.067 [INFO][4175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.087 [INFO][4175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.087 [INFO][4175] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.166 [INFO][4175] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" host="localhost" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.171 [INFO][4175] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.175 [INFO][4175] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.176 [INFO][4175] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.178 [INFO][4175] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.178 [INFO][4175] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" host="localhost" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.180 [INFO][4175] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3 Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.183 [INFO][4175] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" host="localhost" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.189 [INFO][4175] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" host="localhost" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.189 [INFO][4175] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" host="localhost" Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.189 [INFO][4175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:00:02.205908 containerd[1439]: 2025-01-29 12:00:02.189 [INFO][4175] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" HandleID="k8s-pod-network.99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:02.206792 containerd[1439]: 2025-01-29 12:00:02.192 [INFO][4145] cni-plugin/k8s.go 386: Populated endpoint ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-hmlcz" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4ff45c42-a0b4-469d-ad79-6fe025edff50", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hmlcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35220b040c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:02.206792 containerd[1439]: 2025-01-29 12:00:02.192 [INFO][4145] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-hmlcz" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:02.206792 containerd[1439]: 2025-01-29 12:00:02.192 [INFO][4145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35220b040c6 ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-hmlcz" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:02.206792 containerd[1439]: 2025-01-29 12:00:02.195 [INFO][4145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-hmlcz" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:02.206792 containerd[1439]: 2025-01-29 12:00:02.196 
[INFO][4145] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-hmlcz" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4ff45c42-a0b4-469d-ad79-6fe025edff50", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3", Pod:"coredns-6f6b679f8f-hmlcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35220b040c6", MAC:"22:a9:68:83:8a:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:02.206792 containerd[1439]: 2025-01-29 12:00:02.203 [INFO][4145] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-hmlcz" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:02.224006 containerd[1439]: time="2025-01-29T12:00:02.223937783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:00:02.224006 containerd[1439]: time="2025-01-29T12:00:02.223993624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:00:02.224159 containerd[1439]: time="2025-01-29T12:00:02.224004584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:02.224159 containerd[1439]: time="2025-01-29T12:00:02.224078986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:02.241792 systemd[1]: Started cri-containerd-99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3.scope - libcontainer container 99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3. 
Jan 29 12:00:02.250667 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:00:02.266852 containerd[1439]: time="2025-01-29T12:00:02.266820708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hmlcz,Uid:4ff45c42-a0b4-469d-ad79-6fe025edff50,Namespace:kube-system,Attempt:1,} returns sandbox id \"99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3\"" Jan 29 12:00:02.268632 kubelet[2463]: E0129 12:00:02.268589 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:02.272612 containerd[1439]: time="2025-01-29T12:00:02.272171519Z" level=info msg="CreateContainer within sandbox \"99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:00:02.285289 containerd[1439]: time="2025-01-29T12:00:02.285249638Z" level=info msg="CreateContainer within sandbox \"99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af5b75cda0de4bbbc7358523ee9cee5f51c4610d271b09bea7840d5f7534f166\"" Jan 29 12:00:02.285740 containerd[1439]: time="2025-01-29T12:00:02.285716009Z" level=info msg="StartContainer for \"af5b75cda0de4bbbc7358523ee9cee5f51c4610d271b09bea7840d5f7534f166\"" Jan 29 12:00:02.299569 systemd-networkd[1379]: cali943b162ef65: Link UP Jan 29 12:00:02.300124 systemd-networkd[1379]: cali943b162ef65: Gained carrier Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.029 [INFO][4161] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--86qtp-eth0 coredns-6f6b679f8f- kube-system d12f38cd-3b79-4618-9f94-7138faae5b37 894 0 2025-01-29 11:59:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-86qtp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali943b162ef65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-86qtp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--86qtp-" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.029 [INFO][4161] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-86qtp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.061 [INFO][4188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" HandleID="k8s-pod-network.e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.072 [INFO][4188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" HandleID="k8s-pod-network.e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" 
Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003735f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-86qtp", "timestamp":"2025-01-29 12:00:02.061413179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.073 [INFO][4188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.189 [INFO][4188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.189 [INFO][4188] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.266 [INFO][4188] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" host="localhost" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.274 [INFO][4188] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.278 [INFO][4188] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.279 [INFO][4188] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.281 [INFO][4188] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.281 [INFO][4188] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" host="localhost" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.283 [INFO][4188] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.287 [INFO][4188] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" host="localhost" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.293 [INFO][4188] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" host="localhost" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.293 [INFO][4188] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" host="localhost" Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.293 [INFO][4188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:00:02.313626 containerd[1439]: 2025-01-29 12:00:02.293 [INFO][4188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" HandleID="k8s-pod-network.e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:02.314127 containerd[1439]: 2025-01-29 12:00:02.296 [INFO][4161] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-86qtp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--86qtp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d12f38cd-3b79-4618-9f94-7138faae5b37", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-86qtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali943b162ef65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:02.314127 containerd[1439]: 2025-01-29 12:00:02.296 [INFO][4161] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-86qtp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:02.314127 containerd[1439]: 2025-01-29 12:00:02.296 [INFO][4161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali943b162ef65 ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-86qtp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:02.314127 containerd[1439]: 2025-01-29 12:00:02.300 [INFO][4161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-86qtp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:02.314127 containerd[1439]: 2025-01-29 12:00:02.301 
[INFO][4161] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-86qtp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--86qtp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d12f38cd-3b79-4618-9f94-7138faae5b37", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b", Pod:"coredns-6f6b679f8f-86qtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali943b162ef65", MAC:"d6:f3:dc:77:f6:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:02.314127 containerd[1439]: 2025-01-29 12:00:02.310 [INFO][4161] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-86qtp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:02.315944 systemd[1]: Started cri-containerd-af5b75cda0de4bbbc7358523ee9cee5f51c4610d271b09bea7840d5f7534f166.scope - libcontainer container af5b75cda0de4bbbc7358523ee9cee5f51c4610d271b09bea7840d5f7534f166. Jan 29 12:00:02.337013 containerd[1439]: time="2025-01-29T12:00:02.336873377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:00:02.337013 containerd[1439]: time="2025-01-29T12:00:02.336950699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:00:02.337013 containerd[1439]: time="2025-01-29T12:00:02.336983259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:02.338175 containerd[1439]: time="2025-01-29T12:00:02.337270226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:02.346792 containerd[1439]: time="2025-01-29T12:00:02.346749058Z" level=info msg="StartContainer for \"af5b75cda0de4bbbc7358523ee9cee5f51c4610d271b09bea7840d5f7534f166\" returns successfully" Jan 29 12:00:02.358716 systemd[1]: Started cri-containerd-e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b.scope - libcontainer container e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b. Jan 29 12:00:02.369417 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:00:02.390733 containerd[1439]: time="2025-01-29T12:00:02.390680609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-86qtp,Uid:d12f38cd-3b79-4618-9f94-7138faae5b37,Namespace:kube-system,Attempt:1,} returns sandbox id \"e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b\"" Jan 29 12:00:02.391697 kubelet[2463]: E0129 12:00:02.391676 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:02.393569 containerd[1439]: time="2025-01-29T12:00:02.393432196Z" level=info msg="CreateContainer within sandbox \"e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:00:02.407218 containerd[1439]: time="2025-01-29T12:00:02.407167971Z" level=info msg="CreateContainer within sandbox \"e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f52d1aa13de5ab737b1cef44e56b526e16173cb6efb28aefad84b48007d5d407\"" Jan 29 12:00:02.409547 containerd[1439]: time="2025-01-29T12:00:02.409316703Z" level=info msg="StartContainer for \"f52d1aa13de5ab737b1cef44e56b526e16173cb6efb28aefad84b48007d5d407\"" Jan 29 12:00:02.433720 systemd[1]: Started cri-containerd-f52d1aa13de5ab737b1cef44e56b526e16173cb6efb28aefad84b48007d5d407.scope - libcontainer container f52d1aa13de5ab737b1cef44e56b526e16173cb6efb28aefad84b48007d5d407. 
Jan 29 12:00:02.468079 containerd[1439]: time="2025-01-29T12:00:02.468039815Z" level=info msg="StartContainer for \"f52d1aa13de5ab737b1cef44e56b526e16173cb6efb28aefad84b48007d5d407\" returns successfully" Jan 29 12:00:02.945909 kubelet[2463]: E0129 12:00:02.945879 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:02.948813 kubelet[2463]: E0129 12:00:02.948783 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:02.976581 kubelet[2463]: I0129 12:00:02.971040 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hmlcz" podStartSLOduration=28.971023161 podStartE2EDuration="28.971023161s" podCreationTimestamp="2025-01-29 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:00:02.958595578 +0000 UTC m=+36.246564887" watchObservedRunningTime="2025-01-29 12:00:02.971023161 +0000 UTC m=+36.258992470" Jan 29 12:00:02.988560 kubelet[2463]: I0129 12:00:02.988448 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-86qtp" podStartSLOduration=28.988434226 podStartE2EDuration="28.988434226s" podCreationTimestamp="2025-01-29 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:00:02.97220667 +0000 UTC m=+36.260175979" watchObservedRunningTime="2025-01-29 12:00:02.988434226 +0000 UTC m=+36.276403535" Jan 29 12:00:03.497776 systemd-networkd[1379]: cali50ed6c54d90: Gained IPv6LL Jan 29 12:00:03.792326 containerd[1439]: time="2025-01-29T12:00:03.792152397Z" level=info msg="StopPodSandbox for \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\"" Jan 29 12:00:03.793190 containerd[1439]: time="2025-01-29T12:00:03.792153597Z" level=info msg="StopPodSandbox for \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\"" Jan 29 12:00:03.818059 systemd-networkd[1379]: cali943b162ef65: Gained IPv6LL Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.850 [INFO][4476] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.850 [INFO][4476] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" iface="eth0" netns="/var/run/netns/cni-e1456532-b56d-2756-a496-3ee5c54c7c8f" Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.851 [INFO][4476] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" iface="eth0" netns="/var/run/netns/cni-e1456532-b56d-2756-a496-3ee5c54c7c8f" Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.851 [INFO][4476] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" iface="eth0" netns="/var/run/netns/cni-e1456532-b56d-2756-a496-3ee5c54c7c8f" Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.851 [INFO][4476] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.851 [INFO][4476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.888 [INFO][4501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" HandleID="k8s-pod-network.1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.888 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.888 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.897 [WARNING][4501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" HandleID="k8s-pod-network.1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.897 [INFO][4501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" HandleID="k8s-pod-network.1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.901 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:03.912199 containerd[1439]: 2025-01-29 12:00:03.909 [INFO][4476] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:03.914965 systemd[1]: run-netns-cni\x2de1456532\x2db56d\x2d2756\x2da496\x2d3ee5c54c7c8f.mount: Deactivated successfully. Jan 29 12:00:03.917917 containerd[1439]: time="2025-01-29T12:00:03.917753772Z" level=info msg="TearDown network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\" successfully" Jan 29 12:00:03.917917 containerd[1439]: time="2025-01-29T12:00:03.917793133Z" level=info msg="StopPodSandbox for \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\" returns successfully" Jan 29 12:00:03.918464 containerd[1439]: time="2025-01-29T12:00:03.918409228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdc4b9d-7lzvk,Uid:5a466b9c-61da-4d18-9a7f-3570019f9cfb,Namespace:calico-system,Attempt:1,}" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.858 [INFO][4490] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.858 [INFO][4490] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" iface="eth0" netns="/var/run/netns/cni-be155a33-cce4-9dab-67e0-39100deec242" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.859 [INFO][4490] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" iface="eth0" netns="/var/run/netns/cni-be155a33-cce4-9dab-67e0-39100deec242" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.859 [INFO][4490] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" iface="eth0" netns="/var/run/netns/cni-be155a33-cce4-9dab-67e0-39100deec242" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.859 [INFO][4490] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.859 [INFO][4490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.902 [INFO][4506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" HandleID="k8s-pod-network.aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.902 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.902 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.912 [WARNING][4506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" HandleID="k8s-pod-network.aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.912 [INFO][4506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" HandleID="k8s-pod-network.aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.914 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:03.922258 containerd[1439]: 2025-01-29 12:00:03.919 [INFO][4490] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:03.922975 containerd[1439]: time="2025-01-29T12:00:03.922871174Z" level=info msg="TearDown network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\" successfully" Jan 29 12:00:03.922975 containerd[1439]: time="2025-01-29T12:00:03.922896534Z" level=info msg="StopPodSandbox for \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\" returns successfully" Jan 29 12:00:03.924216 containerd[1439]: time="2025-01-29T12:00:03.924188405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5b759c4-5wcw9,Uid:77595a50-0ab0-4356-96db-16172edd087c,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:00:03.925142 systemd[1]: run-netns-cni\x2dbe155a33\x2dcce4\x2d9dab\x2d67e0\x2d39100deec242.mount: Deactivated successfully. Jan 29 12:00:03.954095 kubelet[2463]: E0129 12:00:03.954049 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:03.954938 kubelet[2463]: E0129 12:00:03.954910 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:04.009760 systemd-networkd[1379]: cali35220b040c6: Gained IPv6LL Jan 29 12:00:04.083742 systemd-networkd[1379]: cali6f1a0ae0c69: Link UP Jan 29 12:00:04.084221 systemd-networkd[1379]: cali6f1a0ae0c69: Gained carrier Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:03.993 [INFO][4519] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0 calico-kube-controllers-59fdc4b9d- calico-system 5a466b9c-61da-4d18-9a7f-3570019f9cfb 937 0 2025-01-29 11:59:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59fdc4b9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59fdc4b9d-7lzvk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6f1a0ae0c69 [] []}} ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Namespace="calico-system" Pod="calico-kube-controllers-59fdc4b9d-7lzvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:03.993 [INFO][4519] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Namespace="calico-system" Pod="calico-kube-controllers-59fdc4b9d-7lzvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.028 [INFO][4552] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" HandleID="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.042 [INFO][4552] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" HandleID="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59fdc4b9d-7lzvk", "timestamp":"2025-01-29 12:00:04.028095689 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.042 [INFO][4552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.042 [INFO][4552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.042 [INFO][4552] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.047 [INFO][4552] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" host="localhost" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.055 [INFO][4552] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.061 [INFO][4552] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.063 [INFO][4552] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.066 [INFO][4552] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.066 [INFO][4552] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" host="localhost" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.067 [INFO][4552] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6 Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.071 [INFO][4552] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" host="localhost" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.078 [INFO][4552] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" host="localhost" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.078 [INFO][4552] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" host="localhost" Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.079 [INFO][4552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:00:04.101918 containerd[1439]: 2025-01-29 12:00:04.079 [INFO][4552] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" HandleID="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:04.102512 containerd[1439]: 2025-01-29 12:00:04.081 [INFO][4519] cni-plugin/k8s.go 386: Populated endpoint ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Namespace="calico-system" Pod="calico-kube-controllers-59fdc4b9d-7lzvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0", GenerateName:"calico-kube-controllers-59fdc4b9d-", Namespace:"calico-system", SelfLink:"", UID:"5a466b9c-61da-4d18-9a7f-3570019f9cfb", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdc4b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59fdc4b9d-7lzvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f1a0ae0c69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:04.102512 containerd[1439]: 2025-01-29 12:00:04.081 [INFO][4519] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Namespace="calico-system" Pod="calico-kube-controllers-59fdc4b9d-7lzvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:04.102512 containerd[1439]: 2025-01-29 12:00:04.081 [INFO][4519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f1a0ae0c69 ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Namespace="calico-system" Pod="calico-kube-controllers-59fdc4b9d-7lzvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:04.102512 containerd[1439]: 2025-01-29 12:00:04.084 [INFO][4519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Namespace="calico-system" Pod="calico-kube-controllers-59fdc4b9d-7lzvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:04.102512 containerd[1439]: 2025-01-29 12:00:04.084 [INFO][4519] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Namespace="calico-system" Pod="calico-kube-controllers-59fdc4b9d-7lzvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0", GenerateName:"calico-kube-controllers-59fdc4b9d-", Namespace:"calico-system", SelfLink:"", UID:"5a466b9c-61da-4d18-9a7f-3570019f9cfb", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdc4b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6", Pod:"calico-kube-controllers-59fdc4b9d-7lzvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f1a0ae0c69", MAC:"de:9e:ba:98:6b:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:04.102512 containerd[1439]: 2025-01-29 12:00:04.096 [INFO][4519] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Namespace="calico-system" Pod="calico-kube-controllers-59fdc4b9d-7lzvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:04.186539 systemd-networkd[1379]: calia89fc1a2bd7: Link UP Jan 29 12:00:04.186817 systemd-networkd[1379]: calia89fc1a2bd7: Gained carrier Jan 29 12:00:04.199879 containerd[1439]: time="2025-01-29T12:00:04.199467957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:03.987 [INFO][4528] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0 calico-apiserver-5c5b759c4- calico-apiserver 77595a50-0ab0-4356-96db-16172edd087c 938 0 2025-01-29 11:59:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c5b759c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c5b759c4-5wcw9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia89fc1a2bd7 [] []}} ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-5wcw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:03.987 [INFO][4528] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-5wcw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.039 [INFO][4547] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" HandleID="k8s-pod-network.865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.054 [INFO][4547] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" HandleID="k8s-pod-network.865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036bde0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c5b759c4-5wcw9", "timestamp":"2025-01-29 12:00:04.03902122 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.054 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.078 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.078 [INFO][4547] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.147 [INFO][4547] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" host="localhost" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.152 [INFO][4547] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.161 [INFO][4547] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.163 [INFO][4547] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.167 [INFO][4547] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.167 [INFO][4547] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" host="localhost" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.169 [INFO][4547] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4 Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.174 [INFO][4547] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" host="localhost" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.180 [INFO][4547] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" host="localhost" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.180 [INFO][4547] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" host="localhost" Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.180 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
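Editor's note: the ipam_plugin.go / ipam.go sequence above is the same for both pods: acquire the host-wide IPAM lock, confirm this host's affinity for the 192.168.88.128/26 block, load the block, claim one free address (.133 for the kube-controllers pod, .134 for the apiserver pod), write the block back to claim the IP, and release the lock. Below is a deliberately simplified, standard-library-only Go sketch of that claim-next-free-address pattern; it is an illustration of the logged steps, not Calico's actual datastore-backed IPAM.

```go
package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy stand-in for a Calico IPAM block: a CIDR plus a record of
// which addresses are already handed out. A mutex plays the role of the
// "host-wide IPAM lock" seen in the log.
type block struct {
	mu     sync.Mutex
	prefix netip.Prefix
	used   map[netip.Addr]string // address -> handle ID
}

func newBlock(cidr string) (*block, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return nil, err
	}
	return &block{prefix: p, used: make(map[netip.Addr]string)}, nil
}

// assign claims the next free address in the block for the given handle.
func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock() // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock()

	for a := b.prefix.Addr(); b.prefix.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	b, err := newBlock("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	// Pretend the first five addresses were claimed by earlier pods,
	// so the next two claims land on .133 and .134 as in the log.
	for i := 0; i < 5; i++ {
		b.assign("earlier-pod")
	}
	for _, h := range []string{"calico-kube-controllers-59fdc4b9d-7lzvk", "calico-apiserver-5c5b759c4-5wcw9"} {
		ip, _ := b.assign(h)
		fmt.Printf("assigned %s to %s\n", ip, h)
	}
}
```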
Jan 29 12:00:04.200426 containerd[1439]: 2025-01-29 12:00:04.180 [INFO][4547] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" HandleID="k8s-pod-network.865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:04.201001 containerd[1439]: 2025-01-29 12:00:04.185 [INFO][4528] cni-plugin/k8s.go 386: Populated endpoint ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-5wcw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0", GenerateName:"calico-apiserver-5c5b759c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"77595a50-0ab0-4356-96db-16172edd087c", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5b759c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c5b759c4-5wcw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia89fc1a2bd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:04.201001 containerd[1439]: 2025-01-29 12:00:04.185 [INFO][4528] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-5wcw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:04.201001 containerd[1439]: 2025-01-29 12:00:04.185 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia89fc1a2bd7 ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-5wcw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:04.201001 containerd[1439]: 2025-01-29 12:00:04.186 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-5wcw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:04.201001 containerd[1439]: 2025-01-29 12:00:04.187 [INFO][4528] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" 
Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-5wcw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0", GenerateName:"calico-apiserver-5c5b759c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"77595a50-0ab0-4356-96db-16172edd087c", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5b759c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4", Pod:"calico-apiserver-5c5b759c4-5wcw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia89fc1a2bd7", MAC:"ca:af:7d:de:62:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:04.201001 containerd[1439]: 2025-01-29 12:00:04.195 [INFO][4528] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4" Namespace="calico-apiserver" Pod="calico-apiserver-5c5b759c4-5wcw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:04.201001 containerd[1439]: time="2025-01-29T12:00:04.200137213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:00:04.201001 containerd[1439]: time="2025-01-29T12:00:04.200159893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:04.201001 containerd[1439]: time="2025-01-29T12:00:04.200248055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:04.222255 containerd[1439]: time="2025-01-29T12:00:04.221627268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:00:04.222377 containerd[1439]: time="2025-01-29T12:00:04.222347165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:00:04.222619 containerd[1439]: time="2025-01-29T12:00:04.222368605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:04.222619 containerd[1439]: time="2025-01-29T12:00:04.222477208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:04.235890 containerd[1439]: time="2025-01-29T12:00:04.235114379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:04.237234 containerd[1439]: time="2025-01-29T12:00:04.237202627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 12:00:04.238248 containerd[1439]: time="2025-01-29T12:00:04.238216170Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:04.248429 containerd[1439]: time="2025-01-29T12:00:04.248380445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:04.249026 containerd[1439]: time="2025-01-29T12:00:04.248986819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.021670209s" Jan 29 12:00:04.249077 containerd[1439]: time="2025-01-29T12:00:04.249028139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 12:00:04.250127 containerd[1439]: time="2025-01-29T12:00:04.250097844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:00:04.251737 containerd[1439]: time="2025-01-29T12:00:04.251431315Z" level=info msg="CreateContainer within sandbox \"000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:00:04.255788 systemd[1]: Started cri-containerd-21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6.scope - libcontainer container 21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6. Jan 29 12:00:04.257301 systemd[1]: Started cri-containerd-865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4.scope - libcontainer container 865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4. 
Jan 29 12:00:04.270705 containerd[1439]: time="2025-01-29T12:00:04.270583796Z" level=info msg="CreateContainer within sandbox \"000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1874c24117916a1badd2efae0675ecc392abd4724d257da6534fcbcef2d6f042\"" Jan 29 12:00:04.275290 containerd[1439]: time="2025-01-29T12:00:04.271639780Z" level=info msg="StartContainer for \"1874c24117916a1badd2efae0675ecc392abd4724d257da6534fcbcef2d6f042\"" Jan 29 12:00:04.279814 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:00:04.283694 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:00:04.312153 containerd[1439]: time="2025-01-29T12:00:04.312055952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdc4b9d-7lzvk,Uid:5a466b9c-61da-4d18-9a7f-3570019f9cfb,Namespace:calico-system,Attempt:1,} returns sandbox id \"21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6\"" Jan 29 12:00:04.319539 containerd[1439]: time="2025-01-29T12:00:04.319496803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5b759c4-5wcw9,Uid:77595a50-0ab0-4356-96db-16172edd087c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4\"" Jan 29 12:00:04.322941 containerd[1439]: time="2025-01-29T12:00:04.322778559Z" level=info msg="CreateContainer within sandbox \"865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:00:04.332111 containerd[1439]: time="2025-01-29T12:00:04.331990091Z" level=info msg="CreateContainer within sandbox \"865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1f336df81c07cbd1b43b38c55b1756103e90c7a9b150a4e44e6509bc3cb1f99d\"" Jan 29 12:00:04.333203 containerd[1439]: time="2025-01-29T12:00:04.332827390Z" level=info msg="StartContainer for \"1f336df81c07cbd1b43b38c55b1756103e90c7a9b150a4e44e6509bc3cb1f99d\"" Jan 29 12:00:04.342014 systemd[1]: Started cri-containerd-1874c24117916a1badd2efae0675ecc392abd4724d257da6534fcbcef2d6f042.scope - libcontainer container 1874c24117916a1badd2efae0675ecc392abd4724d257da6534fcbcef2d6f042. Jan 29 12:00:04.361726 systemd[1]: Started cri-containerd-1f336df81c07cbd1b43b38c55b1756103e90c7a9b150a4e44e6509bc3cb1f99d.scope - libcontainer container 1f336df81c07cbd1b43b38c55b1756103e90c7a9b150a4e44e6509bc3cb1f99d. 
Jan 29 12:00:04.438123 containerd[1439]: time="2025-01-29T12:00:04.436504459Z" level=info msg="StartContainer for \"1f336df81c07cbd1b43b38c55b1756103e90c7a9b150a4e44e6509bc3cb1f99d\" returns successfully" Jan 29 12:00:04.438123 containerd[1439]: time="2025-01-29T12:00:04.438049175Z" level=info msg="StartContainer for \"1874c24117916a1badd2efae0675ecc392abd4724d257da6534fcbcef2d6f042\" returns successfully" Jan 29 12:00:04.961239 kubelet[2463]: E0129 12:00:04.961205 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:04.994603 kubelet[2463]: I0129 12:00:04.994504 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c5b759c4-wj8ct" podStartSLOduration=20.965472119 podStartE2EDuration="24.994486956s" podCreationTimestamp="2025-01-29 11:59:40 +0000 UTC" firstStartedPulling="2025-01-29 12:00:00.220903963 +0000 UTC m=+33.508873272" lastFinishedPulling="2025-01-29 12:00:04.2499188 +0000 UTC m=+37.537888109" observedRunningTime="2025-01-29 12:00:04.994376074 +0000 UTC m=+38.282345343" watchObservedRunningTime="2025-01-29 12:00:04.994486956 +0000 UTC m=+38.282456225" Jan 29 12:00:04.994767 kubelet[2463]: I0129 12:00:04.994694 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c5b759c4-5wcw9" podStartSLOduration=24.994687921 podStartE2EDuration="24.994687921s" podCreationTimestamp="2025-01-29 11:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:00:04.971239621 +0000 UTC m=+38.259208970" watchObservedRunningTime="2025-01-29 12:00:04.994687921 +0000 UTC m=+38.282657230" Jan 29 12:00:05.289709 systemd-networkd[1379]: cali6f1a0ae0c69: Gained IPv6LL Jan 29 12:00:05.673760 systemd-networkd[1379]: calia89fc1a2bd7: Gained IPv6LL Jan 29 12:00:05.962606 kubelet[2463]: I0129 12:00:05.962478 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:00:05.962999 kubelet[2463]: I0129 12:00:05.962478 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:00:07.032742 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:52256.service - OpenSSH per-connection server daemon (10.0.0.1:52256). Jan 29 12:00:07.077749 sshd[4771]: Accepted publickey for core from 10.0.0.1 port 52256 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:07.079376 sshd[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:07.083371 systemd-logind[1420]: New session 10 of user core. Jan 29 12:00:07.095692 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:00:07.287543 sshd[4771]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:07.290812 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:52256.service: Deactivated successfully. Jan 29 12:00:07.292534 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:00:07.293265 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:00:07.294041 systemd-logind[1420]: Removed session 10. 
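Editor's note: the recurring kubelet dns.go:153 errors ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") come from the conventional three-nameserver limit for resolv.conf: when the node lists more than three resolvers, kubelet keeps the first three for pod DNS config and logs this warning about the rest. A small Go sketch of that check follows; the fourth resolver (8.8.4.4) is a placeholder, since the log only shows the three entries that were kept.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the conventional resolv.conf limit that the
// kubelet dns.go messages above are warning about.
const maxNameservers = 3

// parseNameservers pulls the "nameserver" entries out of resolv.conf-style text.
func parseNameservers(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	return servers
}

func main() {
	// Four upstream resolvers; the fourth (8.8.4.4) is a placeholder, as the
	// log only records the three nameservers that survived the cut.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
`
	servers := parseNameservers(resolvConf)
	if len(servers) > maxNameservers {
		kept := servers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded: keeping %s, dropping %d entries\n",
			strings.Join(kept, " "), len(servers)-maxNameservers)
	}
}
```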
Jan 29 12:00:09.654356 containerd[1439]: time="2025-01-29T12:00:09.654313000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:09.655410 containerd[1439]: time="2025-01-29T12:00:09.654985974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 12:00:09.655607 containerd[1439]: time="2025-01-29T12:00:09.655579666Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:09.657721 containerd[1439]: time="2025-01-29T12:00:09.657693309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:09.658468 containerd[1439]: time="2025-01-29T12:00:09.658263320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 5.408004952s" Jan 29 12:00:09.658468 containerd[1439]: time="2025-01-29T12:00:09.658309801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 12:00:09.659397 containerd[1439]: time="2025-01-29T12:00:09.659265821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:00:09.665620 containerd[1439]: time="2025-01-29T12:00:09.665591430Z" level=info msg="CreateContainer within sandbox \"e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:00:09.676804 containerd[1439]: time="2025-01-29T12:00:09.676756297Z" level=info msg="CreateContainer within sandbox \"e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bad6533843f1c97826e32fc3816b0e9b0ee457c9d6ae7bd73a243a42f08c042c\"" Jan 29 12:00:09.677648 containerd[1439]: time="2025-01-29T12:00:09.677617394Z" level=info msg="StartContainer for \"bad6533843f1c97826e32fc3816b0e9b0ee457c9d6ae7bd73a243a42f08c042c\"" Jan 29 12:00:09.706701 systemd[1]: Started cri-containerd-bad6533843f1c97826e32fc3816b0e9b0ee457c9d6ae7bd73a243a42f08c042c.scope - libcontainer container bad6533843f1c97826e32fc3816b0e9b0ee457c9d6ae7bd73a243a42f08c042c. Jan 29 12:00:09.733923 containerd[1439]: time="2025-01-29T12:00:09.730703395Z" level=info msg="StartContainer for \"bad6533843f1c97826e32fc3816b0e9b0ee457c9d6ae7bd73a243a42f08c042c\" returns successfully" Jan 29 12:00:12.298151 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:52272.service - OpenSSH per-connection server daemon (10.0.0.1:52272). Jan 29 12:00:12.335458 sshd[4825]: Accepted publickey for core from 10.0.0.1 port 52272 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:12.336930 sshd[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:12.340617 systemd-logind[1420]: New session 11 of user core. Jan 29 12:00:12.349722 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 29 12:00:12.505626 sshd[4825]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:12.518188 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:52272.service: Deactivated successfully. Jan 29 12:00:12.519744 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:00:12.520989 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:00:12.522235 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:53430.service - OpenSSH per-connection server daemon (10.0.0.1:53430). Jan 29 12:00:12.525273 systemd-logind[1420]: Removed session 11. Jan 29 12:00:12.562255 sshd[4840]: Accepted publickey for core from 10.0.0.1 port 53430 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:12.563577 sshd[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:12.566722 systemd-logind[1420]: New session 12 of user core. Jan 29 12:00:12.578760 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:00:12.800727 sshd[4840]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:12.810302 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:53430.service: Deactivated successfully. Jan 29 12:00:12.813786 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:00:12.816369 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:00:12.824065 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:53440.service - OpenSSH per-connection server daemon (10.0.0.1:53440). Jan 29 12:00:12.825703 systemd-logind[1420]: Removed session 12. Jan 29 12:00:12.858265 sshd[4852]: Accepted publickey for core from 10.0.0.1 port 53440 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:12.859449 sshd[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:12.863195 systemd-logind[1420]: New session 13 of user core. Jan 29 12:00:12.868768 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:00:13.018516 sshd[4852]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:13.022036 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:53440.service: Deactivated successfully. Jan 29 12:00:13.023710 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:00:13.024228 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:00:13.025089 systemd-logind[1420]: Removed session 13. 
Jan 29 12:00:13.864938 containerd[1439]: time="2025-01-29T12:00:13.864890318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:13.865920 containerd[1439]: time="2025-01-29T12:00:13.865638212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 12:00:13.867295 containerd[1439]: time="2025-01-29T12:00:13.866626870Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:13.872675 containerd[1439]: time="2025-01-29T12:00:13.872642303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:13.873495 containerd[1439]: time="2025-01-29T12:00:13.873464679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 4.214167337s" Jan 29 12:00:13.873589 containerd[1439]: time="2025-01-29T12:00:13.873496999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 12:00:13.874524 containerd[1439]: time="2025-01-29T12:00:13.874482098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:00:13.883480 containerd[1439]: time="2025-01-29T12:00:13.882378166Z" level=info msg="CreateContainer within sandbox \"21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:00:13.895583 containerd[1439]: time="2025-01-29T12:00:13.895527292Z" level=info msg="CreateContainer within sandbox \"21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\"" Jan 29 12:00:13.896681 containerd[1439]: time="2025-01-29T12:00:13.896647953Z" level=info msg="StartContainer for \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\"" Jan 29 12:00:13.934760 systemd[1]: Started cri-containerd-0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09.scope - libcontainer container 0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09. 
Jan 29 12:00:13.974513 containerd[1439]: time="2025-01-29T12:00:13.974466492Z" level=info msg="StartContainer for \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\" returns successfully" Jan 29 12:00:15.031042 kubelet[2463]: I0129 12:00:15.030211 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59fdc4b9d-7lzvk" podStartSLOduration=25.469268166 podStartE2EDuration="35.030192679s" podCreationTimestamp="2025-01-29 11:59:40 +0000 UTC" firstStartedPulling="2025-01-29 12:00:04.313244819 +0000 UTC m=+37.601214128" lastFinishedPulling="2025-01-29 12:00:13.874169332 +0000 UTC m=+47.162138641" observedRunningTime="2025-01-29 12:00:13.994763152 +0000 UTC m=+47.282732461" watchObservedRunningTime="2025-01-29 12:00:15.030192679 +0000 UTC m=+48.318161948" Jan 29 12:00:17.047442 containerd[1439]: time="2025-01-29T12:00:17.047394227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:17.048391 containerd[1439]: time="2025-01-29T12:00:17.047835194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 12:00:17.048815 containerd[1439]: time="2025-01-29T12:00:17.048744850Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:17.051465 containerd[1439]: time="2025-01-29T12:00:17.051412297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:17.052383 containerd[1439]: time="2025-01-29T12:00:17.052326713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 3.177812975s" Jan 29 12:00:17.052383 containerd[1439]: time="2025-01-29T12:00:17.052372354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 12:00:17.054968 containerd[1439]: time="2025-01-29T12:00:17.054943199Z" level=info msg="CreateContainer within sandbox \"e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:00:17.074152 containerd[1439]: time="2025-01-29T12:00:17.074101614Z" level=info msg="CreateContainer within sandbox \"e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"65c49737be8f718b1617c1635a1cc54952c47a67d0aae4678c74086b7686d4fe\"" Jan 29 12:00:17.074980 containerd[1439]: time="2025-01-29T12:00:17.074939549Z" level=info msg="StartContainer for \"65c49737be8f718b1617c1635a1cc54952c47a67d0aae4678c74086b7686d4fe\"" Jan 29 12:00:17.106796 systemd[1]: Started cri-containerd-65c49737be8f718b1617c1635a1cc54952c47a67d0aae4678c74086b7686d4fe.scope - libcontainer container 
65c49737be8f718b1617c1635a1cc54952c47a67d0aae4678c74086b7686d4fe. Jan 29 12:00:17.132970 containerd[1439]: time="2025-01-29T12:00:17.132930124Z" level=info msg="StartContainer for \"65c49737be8f718b1617c1635a1cc54952c47a67d0aae4678c74086b7686d4fe\" returns successfully" Jan 29 12:00:17.875777 kubelet[2463]: I0129 12:00:17.875741 2463 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:00:17.883329 kubelet[2463]: I0129 12:00:17.883290 2463 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:00:18.029426 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:53446.service - OpenSSH per-connection server daemon (10.0.0.1:53446). Jan 29 12:00:18.082235 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 53446 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:18.084925 sshd[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:18.089011 systemd-logind[1420]: New session 14 of user core. Jan 29 12:00:18.097721 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:00:18.286412 sshd[5006]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:18.290410 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:53446.service: Deactivated successfully. Jan 29 12:00:18.292074 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:00:18.292731 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:00:18.293691 systemd-logind[1420]: Removed session 14. Jan 29 12:00:23.300162 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:53414.service - OpenSSH per-connection server daemon (10.0.0.1:53414). Jan 29 12:00:23.339674 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 53414 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:23.340837 sshd[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:23.345374 systemd-logind[1420]: New session 15 of user core. Jan 29 12:00:23.354809 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:00:23.498389 sshd[5048]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:23.501956 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:00:23.502484 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:53414.service: Deactivated successfully. Jan 29 12:00:23.504623 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:00:23.505486 systemd-logind[1420]: Removed session 15. 
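Editor's note: the pod_startup_latency_tracker entries in this log report two durations per pod. podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration further subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) when one was recorded. That reading is consistent with the numbers above, e.g. calico-apiserver-5c5b759c4-wj8ct: 24.994486956s end to end, minus a 4.029014837s pull, gives the logged 20.965472119s. A short Go check of that arithmetic using the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	// Timestamps in the log use Go's default format,
	// e.g. "2025-01-29 12:00:04.994486956 +0000 UTC".
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the calico-apiserver-5c5b759c4-wj8ct entry above.
	created := mustParse("2025-01-29 11:59:40 +0000 UTC")
	firstPull := mustParse("2025-01-29 12:00:00.220903963 +0000 UTC")
	lastPull := mustParse("2025-01-29 12:00:04.2499188 +0000 UTC")
	running := mustParse("2025-01-29 12:00:04.994486956 +0000 UTC")

	e2e := running.Sub(created)          // logged as podStartE2EDuration=24.994486956s
	slo := e2e - lastPull.Sub(firstPull) // logged as podStartSLOduration=20.965472119
	fmt.Printf("E2E=%v  SLO(excluding image pull)=%v\n", e2e, slo)
}
```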
Jan 29 12:00:24.421066 kubelet[2463]: I0129 12:00:24.420990 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:00:24.441337 kubelet[2463]: I0129 12:00:24.440827 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-99dlc" podStartSLOduration=29.559476374 podStartE2EDuration="44.440810631s" podCreationTimestamp="2025-01-29 11:59:40 +0000 UTC" firstStartedPulling="2025-01-29 12:00:02.171683028 +0000 UTC m=+35.459652297" lastFinishedPulling="2025-01-29 12:00:17.053017245 +0000 UTC m=+50.340986554" observedRunningTime="2025-01-29 12:00:18.006644533 +0000 UTC m=+51.294613842" watchObservedRunningTime="2025-01-29 12:00:24.440810631 +0000 UTC m=+57.728779940" Jan 29 12:00:26.802524 containerd[1439]: time="2025-01-29T12:00:26.802458434Z" level=info msg="StopPodSandbox for \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\"" Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.838 [WARNING][5082] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--99dlc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a967c1a-dfaf-44db-9a3a-468c81bc933d", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499", Pod:"csi-node-driver-99dlc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50ed6c54d90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.839 [INFO][5082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.839 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" iface="eth0" netns="" Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.839 [INFO][5082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.839 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.858 [INFO][5089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" HandleID="k8s-pod-network.7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.858 [INFO][5089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.858 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.866 [WARNING][5089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" HandleID="k8s-pod-network.7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.866 [INFO][5089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" HandleID="k8s-pod-network.7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.867 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:26.873644 containerd[1439]: 2025-01-29 12:00:26.871 [INFO][5082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:26.873644 containerd[1439]: time="2025-01-29T12:00:26.873491545Z" level=info msg="TearDown network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\" successfully" Jan 29 12:00:26.873644 containerd[1439]: time="2025-01-29T12:00:26.873514346Z" level=info msg="StopPodSandbox for \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\" returns successfully" Jan 29 12:00:26.875378 containerd[1439]: time="2025-01-29T12:00:26.874830446Z" level=info msg="RemovePodSandbox for \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\"" Jan 29 12:00:26.875378 containerd[1439]: time="2025-01-29T12:00:26.874859167Z" level=info msg="Forcibly stopping sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\"" Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.917 [WARNING][5111] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--99dlc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a967c1a-dfaf-44db-9a3a-468c81bc933d", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4959182a95aee7b68c39e7d1f69be1e6a26f549b8e54c942ba697b326b80499", Pod:"csi-node-driver-99dlc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50ed6c54d90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.917 [INFO][5111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.917 [INFO][5111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" iface="eth0" netns="" Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.917 [INFO][5111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.917 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.938 [INFO][5119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" HandleID="k8s-pod-network.7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.938 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.938 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.946 [WARNING][5119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" HandleID="k8s-pod-network.7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.946 [INFO][5119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" HandleID="k8s-pod-network.7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Workload="localhost-k8s-csi--node--driver--99dlc-eth0" Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.947 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:26.951967 containerd[1439]: 2025-01-29 12:00:26.950 [INFO][5111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34" Jan 29 12:00:26.951967 containerd[1439]: time="2025-01-29T12:00:26.951949053Z" level=info msg="TearDown network for sandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\" successfully" Jan 29 12:00:27.000007 containerd[1439]: time="2025-01-29T12:00:26.999949884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:00:27.000155 containerd[1439]: time="2025-01-29T12:00:27.000032365Z" level=info msg="RemovePodSandbox \"7088bab99797930dee354b54eba7c5ab02240167f5febd30a0438e4ce45e7b34\" returns successfully" Jan 29 12:00:27.001246 containerd[1439]: time="2025-01-29T12:00:27.001012420Z" level=info msg="StopPodSandbox for \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\"" Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.041 [WARNING][5141] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0", GenerateName:"calico-kube-controllers-59fdc4b9d-", Namespace:"calico-system", SelfLink:"", UID:"5a466b9c-61da-4d18-9a7f-3570019f9cfb", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdc4b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6", Pod:"calico-kube-controllers-59fdc4b9d-7lzvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f1a0ae0c69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.041 [INFO][5141] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.041 [INFO][5141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" iface="eth0" netns="" Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.041 [INFO][5141] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.041 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.059 [INFO][5148] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" HandleID="k8s-pod-network.1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.059 [INFO][5148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.060 [INFO][5148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.069 [WARNING][5148] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" HandleID="k8s-pod-network.1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.069 [INFO][5148] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" HandleID="k8s-pod-network.1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.071 [INFO][5148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.076321 containerd[1439]: 2025-01-29 12:00:27.074 [INFO][5141] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:27.076321 containerd[1439]: time="2025-01-29T12:00:27.076140945Z" level=info msg="TearDown network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\" successfully" Jan 29 12:00:27.076321 containerd[1439]: time="2025-01-29T12:00:27.076167585Z" level=info msg="StopPodSandbox for \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\" returns successfully" Jan 29 12:00:27.077116 containerd[1439]: time="2025-01-29T12:00:27.076910036Z" level=info msg="RemovePodSandbox for \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\"" Jan 29 12:00:27.077116 containerd[1439]: time="2025-01-29T12:00:27.076937197Z" level=info msg="Forcibly stopping sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\"" Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.113 [WARNING][5171] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0", GenerateName:"calico-kube-controllers-59fdc4b9d-", Namespace:"calico-system", SelfLink:"", UID:"5a466b9c-61da-4d18-9a7f-3570019f9cfb", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdc4b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6", Pod:"calico-kube-controllers-59fdc4b9d-7lzvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f1a0ae0c69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.113 [INFO][5171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.114 [INFO][5171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" iface="eth0" netns="" Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.114 [INFO][5171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.114 [INFO][5171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.137 [INFO][5178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" HandleID="k8s-pod-network.1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.137 [INFO][5178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.138 [INFO][5178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.145 [WARNING][5178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" HandleID="k8s-pod-network.1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.145 [INFO][5178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" HandleID="k8s-pod-network.1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.147 [INFO][5178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.151675 containerd[1439]: 2025-01-29 12:00:27.149 [INFO][5171] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89" Jan 29 12:00:27.151675 containerd[1439]: time="2025-01-29T12:00:27.151022025Z" level=info msg="TearDown network for sandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\" successfully" Jan 29 12:00:27.154241 containerd[1439]: time="2025-01-29T12:00:27.154206474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:00:27.154364 containerd[1439]: time="2025-01-29T12:00:27.154347996Z" level=info msg="RemovePodSandbox \"1beb19fab5533c0ade1f411ee4294922c105375acdc30b0d024caf45f2623c89\" returns successfully" Jan 29 12:00:27.155149 containerd[1439]: time="2025-01-29T12:00:27.154844124Z" level=info msg="StopPodSandbox for \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\"" Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.189 [WARNING][5200] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0", GenerateName:"calico-apiserver-5c5b759c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"77595a50-0ab0-4356-96db-16172edd087c", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5b759c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4", Pod:"calico-apiserver-5c5b759c4-5wcw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia89fc1a2bd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.190 [INFO][5200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.190 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" iface="eth0" netns="" Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.190 [INFO][5200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.190 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.214 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" HandleID="k8s-pod-network.aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.214 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.214 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.223 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" HandleID="k8s-pod-network.aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.223 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" HandleID="k8s-pod-network.aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.224 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.227993 containerd[1439]: 2025-01-29 12:00:27.226 [INFO][5200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:27.228520 containerd[1439]: time="2025-01-29T12:00:27.227953777Z" level=info msg="TearDown network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\" successfully" Jan 29 12:00:27.228520 containerd[1439]: time="2025-01-29T12:00:27.228405424Z" level=info msg="StopPodSandbox for \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\" returns successfully" Jan 29 12:00:27.229049 containerd[1439]: time="2025-01-29T12:00:27.229015393Z" level=info msg="RemovePodSandbox for \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\"" Jan 29 12:00:27.229094 containerd[1439]: time="2025-01-29T12:00:27.229050274Z" level=info msg="Forcibly stopping sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\"" Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.269 [WARNING][5231] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0", GenerateName:"calico-apiserver-5c5b759c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"77595a50-0ab0-4356-96db-16172edd087c", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5b759c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"865d066ec74166ae642e916e276e178d7892f7805f22d9ccefa760d53913ecd4", Pod:"calico-apiserver-5c5b759c4-5wcw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia89fc1a2bd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.269 [INFO][5231] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.269 [INFO][5231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" iface="eth0" netns="" Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.269 [INFO][5231] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.269 [INFO][5231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.288 [INFO][5239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" HandleID="k8s-pod-network.aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.289 [INFO][5239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.289 [INFO][5239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.297 [WARNING][5239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" HandleID="k8s-pod-network.aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.297 [INFO][5239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" HandleID="k8s-pod-network.aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Workload="localhost-k8s-calico--apiserver--5c5b759c4--5wcw9-eth0" Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.298 [INFO][5239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.301954 containerd[1439]: 2025-01-29 12:00:27.300 [INFO][5231] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1" Jan 29 12:00:27.302644 containerd[1439]: time="2025-01-29T12:00:27.302518012Z" level=info msg="TearDown network for sandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\" successfully" Jan 29 12:00:27.305666 containerd[1439]: time="2025-01-29T12:00:27.305627781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:00:27.305752 containerd[1439]: time="2025-01-29T12:00:27.305702622Z" level=info msg="RemovePodSandbox \"aca0789e17871f5876b6142afeb8435f9172ebab513deb8b69ff12468a6211a1\" returns successfully" Jan 29 12:00:27.306162 containerd[1439]: time="2025-01-29T12:00:27.306137949Z" level=info msg="StopPodSandbox for \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\"" Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.342 [WARNING][5262] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--86qtp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d12f38cd-3b79-4618-9f94-7138faae5b37", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b", Pod:"coredns-6f6b679f8f-86qtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali943b162ef65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.342 [INFO][5262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.342 [INFO][5262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" iface="eth0" netns="" Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.342 [INFO][5262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.343 [INFO][5262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.361 [INFO][5269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" HandleID="k8s-pod-network.9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.361 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.361 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.369 [WARNING][5269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" HandleID="k8s-pod-network.9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.369 [INFO][5269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" HandleID="k8s-pod-network.9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.370 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.373855 containerd[1439]: 2025-01-29 12:00:27.372 [INFO][5262] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:27.374254 containerd[1439]: time="2025-01-29T12:00:27.373909999Z" level=info msg="TearDown network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\" successfully" Jan 29 12:00:27.374254 containerd[1439]: time="2025-01-29T12:00:27.373936079Z" level=info msg="StopPodSandbox for \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\" returns successfully" Jan 29 12:00:27.374400 containerd[1439]: time="2025-01-29T12:00:27.374376646Z" level=info msg="RemovePodSandbox for \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\"" Jan 29 12:00:27.374434 containerd[1439]: time="2025-01-29T12:00:27.374410126Z" level=info msg="Forcibly stopping sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\"" Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.408 [WARNING][5291] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--86qtp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d12f38cd-3b79-4618-9f94-7138faae5b37", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e285a6a9c868347ba7d002f38dba3b4d372aac6f46a07926485dfebea0248a0b", Pod:"coredns-6f6b679f8f-86qtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali943b162ef65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.409 [INFO][5291] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.409 [INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" iface="eth0" netns="" Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.409 [INFO][5291] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.409 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.427 [INFO][5299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" HandleID="k8s-pod-network.9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.427 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.427 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.435 [WARNING][5299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" HandleID="k8s-pod-network.9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.435 [INFO][5299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" HandleID="k8s-pod-network.9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Workload="localhost-k8s-coredns--6f6b679f8f--86qtp-eth0" Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.436 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.440252 containerd[1439]: 2025-01-29 12:00:27.438 [INFO][5291] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6" Jan 29 12:00:27.440675 containerd[1439]: time="2025-01-29T12:00:27.440312548Z" level=info msg="TearDown network for sandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\" successfully" Jan 29 12:00:27.442950 containerd[1439]: time="2025-01-29T12:00:27.442915228Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:00:27.443024 containerd[1439]: time="2025-01-29T12:00:27.442975389Z" level=info msg="RemovePodSandbox \"9ea732c6156fcad221ee0095e25597c9adbbe8e881553edd59d66c7bb0e42ad6\" returns successfully" Jan 29 12:00:27.443393 containerd[1439]: time="2025-01-29T12:00:27.443357715Z" level=info msg="StopPodSandbox for \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\"" Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.476 [WARNING][5321] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4ff45c42-a0b4-469d-ad79-6fe025edff50", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3", Pod:"coredns-6f6b679f8f-hmlcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35220b040c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.477 [INFO][5321] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.477 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" iface="eth0" netns="" Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.477 [INFO][5321] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.477 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.496 [INFO][5328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" HandleID="k8s-pod-network.9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.496 [INFO][5328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.496 [INFO][5328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.504 [WARNING][5328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" HandleID="k8s-pod-network.9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.504 [INFO][5328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" HandleID="k8s-pod-network.9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.506 [INFO][5328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.509862 containerd[1439]: 2025-01-29 12:00:27.508 [INFO][5321] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:27.509862 containerd[1439]: time="2025-01-29T12:00:27.509742464Z" level=info msg="TearDown network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\" successfully" Jan 29 12:00:27.509862 containerd[1439]: time="2025-01-29T12:00:27.509766264Z" level=info msg="StopPodSandbox for \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\" returns successfully" Jan 29 12:00:27.510285 containerd[1439]: time="2025-01-29T12:00:27.510173470Z" level=info msg="RemovePodSandbox for \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\"" Jan 29 12:00:27.510285 containerd[1439]: time="2025-01-29T12:00:27.510201631Z" level=info msg="Forcibly stopping sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\"" Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.544 [WARNING][5351] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4ff45c42-a0b4-469d-ad79-6fe025edff50", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99ce087ea884c96747fc11c8e320a45523ecef4dd4167d580e7d802ba35617f3", Pod:"coredns-6f6b679f8f-hmlcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35220b040c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.545 [INFO][5351] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.545 [INFO][5351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" iface="eth0" netns="" Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.545 [INFO][5351] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.545 [INFO][5351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.563 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" HandleID="k8s-pod-network.9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.563 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.563 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.572 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" HandleID="k8s-pod-network.9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.572 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" HandleID="k8s-pod-network.9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Workload="localhost-k8s-coredns--6f6b679f8f--hmlcz-eth0" Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.573 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.576946 containerd[1439]: 2025-01-29 12:00:27.575 [INFO][5351] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968" Jan 29 12:00:27.577373 containerd[1439]: time="2025-01-29T12:00:27.576972385Z" level=info msg="TearDown network for sandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\" successfully" Jan 29 12:00:27.579827 containerd[1439]: time="2025-01-29T12:00:27.579758548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:00:27.579901 containerd[1439]: time="2025-01-29T12:00:27.579882790Z" level=info msg="RemovePodSandbox \"9a414c68dddcd8ebe336717fc14923bb27ef7db55b3fb0ab3804c4b7c97c5968\" returns successfully" Jan 29 12:00:27.580406 containerd[1439]: time="2025-01-29T12:00:27.580371478Z" level=info msg="StopPodSandbox for \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\"" Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.614 [WARNING][5382] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0", GenerateName:"calico-apiserver-5c5b759c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"58dd87da-9a85-4796-b71a-4cb357793754", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5b759c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0", Pod:"calico-apiserver-5c5b759c4-wj8ct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali795e6eb4b83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.615 [INFO][5382] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.615 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" iface="eth0" netns="" Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.615 [INFO][5382] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.615 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.633 [INFO][5389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" HandleID="k8s-pod-network.6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.633 [INFO][5389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.634 [INFO][5389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.642 [WARNING][5389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" HandleID="k8s-pod-network.6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.642 [INFO][5389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" HandleID="k8s-pod-network.6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.643 [INFO][5389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.646844 containerd[1439]: 2025-01-29 12:00:27.645 [INFO][5382] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 12:00:27.646844 containerd[1439]: time="2025-01-29T12:00:27.646818268Z" level=info msg="TearDown network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\" successfully" Jan 29 12:00:27.646844 containerd[1439]: time="2025-01-29T12:00:27.646843508Z" level=info msg="StopPodSandbox for \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\" returns successfully" Jan 29 12:00:27.648391 containerd[1439]: time="2025-01-29T12:00:27.648359412Z" level=info msg="RemovePodSandbox for \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\"" Jan 29 12:00:27.648438 containerd[1439]: time="2025-01-29T12:00:27.648398132Z" level=info msg="Forcibly stopping sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\"" Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.682 [WARNING][5412] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0", GenerateName:"calico-apiserver-5c5b759c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"58dd87da-9a85-4796-b71a-4cb357793754", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5b759c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"000f2d9ed2a257cf7bd2040662bea3fbfba0518acb7b8d7620cd07a185affed0", Pod:"calico-apiserver-5c5b759c4-wj8ct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali795e6eb4b83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.682 [INFO][5412] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.682 [INFO][5412] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" iface="eth0" netns="" Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.682 [INFO][5412] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.682 [INFO][5412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.701 [INFO][5420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" HandleID="k8s-pod-network.6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.702 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.702 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.709 [WARNING][5420] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" HandleID="k8s-pod-network.6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.709 [INFO][5420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" HandleID="k8s-pod-network.6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Workload="localhost-k8s-calico--apiserver--5c5b759c4--wj8ct-eth0" Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.711 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:27.714276 containerd[1439]: 2025-01-29 12:00:27.712 [INFO][5412] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633" Jan 29 12:00:27.714701 containerd[1439]: time="2025-01-29T12:00:27.714310833Z" level=info msg="TearDown network for sandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\" successfully" Jan 29 12:00:27.718599 containerd[1439]: time="2025-01-29T12:00:27.717150637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:00:27.718599 containerd[1439]: time="2025-01-29T12:00:27.717244319Z" level=info msg="RemovePodSandbox \"6e437709c941875227f5fbe4cdf558b11c5199269d196a93c98427835f676633\" returns successfully" Jan 29 12:00:28.510234 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:53418.service - OpenSSH per-connection server daemon (10.0.0.1:53418). Jan 29 12:00:28.563368 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 53418 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:28.564665 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:28.568340 systemd-logind[1420]: New session 16 of user core. Jan 29 12:00:28.578716 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:00:28.735358 sshd[5429]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:28.739131 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:53418.service: Deactivated successfully. Jan 29 12:00:28.742014 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:00:28.742623 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:00:28.743411 systemd-logind[1420]: Removed session 16. Jan 29 12:00:33.746471 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:57616.service - OpenSSH per-connection server daemon (10.0.0.1:57616). Jan 29 12:00:33.782725 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 57616 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:33.783958 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:33.787635 systemd-logind[1420]: New session 17 of user core. Jan 29 12:00:33.794747 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:00:33.975708 sshd[5445]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:33.982939 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:57616.service: Deactivated successfully. Jan 29 12:00:33.984267 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 29 12:00:33.986107 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:00:33.986958 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:57632.service - OpenSSH per-connection server daemon (10.0.0.1:57632). Jan 29 12:00:33.987805 systemd-logind[1420]: Removed session 17. Jan 29 12:00:34.043132 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 57632 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:34.044459 sshd[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:34.048251 systemd-logind[1420]: New session 18 of user core. Jan 29 12:00:34.054696 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:00:34.300672 sshd[5460]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:34.312028 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:57632.service: Deactivated successfully. Jan 29 12:00:34.314195 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:00:34.315479 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:00:34.325362 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:57638.service - OpenSSH per-connection server daemon (10.0.0.1:57638). Jan 29 12:00:34.326804 systemd-logind[1420]: Removed session 18. Jan 29 12:00:34.359450 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 57638 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:34.360754 sshd[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:34.364251 systemd-logind[1420]: New session 19 of user core. Jan 29 12:00:34.372928 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:00:35.697870 kubelet[2463]: I0129 12:00:35.697821 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:00:35.871387 sshd[5479]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:35.878913 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:57638.service: Deactivated successfully. Jan 29 12:00:35.881261 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:00:35.882107 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:00:35.888881 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:57654.service - OpenSSH per-connection server daemon (10.0.0.1:57654). Jan 29 12:00:35.891098 systemd-logind[1420]: Removed session 19. Jan 29 12:00:35.930927 sshd[5501]: Accepted publickey for core from 10.0.0.1 port 57654 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:35.932187 sshd[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:35.936107 systemd-logind[1420]: New session 20 of user core. Jan 29 12:00:35.946773 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:00:36.277335 sshd[5501]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:36.288054 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:57654.service: Deactivated successfully. Jan 29 12:00:36.289448 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:00:36.292693 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:00:36.314936 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:57666.service - OpenSSH per-connection server daemon (10.0.0.1:57666). Jan 29 12:00:36.316098 systemd-logind[1420]: Removed session 20. 
Jan 29 12:00:36.347168 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 57666 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:36.348074 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:36.351955 systemd-logind[1420]: New session 21 of user core. Jan 29 12:00:36.357683 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:00:36.477477 sshd[5513]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:36.480783 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:57666.service: Deactivated successfully. Jan 29 12:00:36.482386 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:00:36.482972 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:00:36.483890 systemd-logind[1420]: Removed session 21. Jan 29 12:00:41.491721 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:57672.service - OpenSSH per-connection server daemon (10.0.0.1:57672). Jan 29 12:00:41.528268 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 57672 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:00:41.530507 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:41.539935 systemd-logind[1420]: New session 22 of user core. Jan 29 12:00:41.546723 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:00:41.700817 sshd[5533]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:41.703395 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:57672.service: Deactivated successfully. Jan 29 12:00:41.705675 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:00:41.707371 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:00:41.709233 systemd-logind[1420]: Removed session 22. Jan 29 12:00:42.597481 containerd[1439]: time="2025-01-29T12:00:42.597397456Z" level=info msg="StopContainer for \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\" with timeout 300 (s)" Jan 29 12:00:42.598138 containerd[1439]: time="2025-01-29T12:00:42.597855114Z" level=info msg="Stop container \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\" with signal terminated" Jan 29 12:00:42.904616 containerd[1439]: time="2025-01-29T12:00:42.904468796Z" level=info msg="StopContainer for \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\" with timeout 5 (s)" Jan 29 12:00:42.905419 containerd[1439]: time="2025-01-29T12:00:42.904961615Z" level=info msg="Stop container \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\" with signal terminated" Jan 29 12:00:42.924193 systemd[1]: cri-containerd-cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762.scope: Deactivated successfully. Jan 29 12:00:42.924798 systemd[1]: cri-containerd-cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762.scope: Consumed 1.789s CPU time. Jan 29 12:00:42.950157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762-rootfs.mount: Deactivated successfully. 
Jan 29 12:00:42.950851 containerd[1439]: time="2025-01-29T12:00:42.948151123Z" level=info msg="shim disconnected" id=cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762 namespace=k8s.io Jan 29 12:00:42.950851 containerd[1439]: time="2025-01-29T12:00:42.950845269Z" level=warning msg="cleaning up after shim disconnected" id=cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762 namespace=k8s.io Jan 29 12:00:42.950955 containerd[1439]: time="2025-01-29T12:00:42.950861710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:00:42.972776 containerd[1439]: time="2025-01-29T12:00:42.972730174Z" level=info msg="StopContainer for \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\" returns successfully" Jan 29 12:00:42.973315 containerd[1439]: time="2025-01-29T12:00:42.973286996Z" level=info msg="StopPodSandbox for \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\"" Jan 29 12:00:42.973355 containerd[1439]: time="2025-01-29T12:00:42.973333158Z" level=info msg="Container to stop \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:00:42.973390 containerd[1439]: time="2025-01-29T12:00:42.973350999Z" level=info msg="Container to stop \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:00:42.973390 containerd[1439]: time="2025-01-29T12:00:42.973362239Z" level=info msg="Container to stop \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:00:42.979596 systemd[1]: cri-containerd-dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14.scope: Deactivated successfully. Jan 29 12:00:43.007000 containerd[1439]: time="2025-01-29T12:00:43.006958003Z" level=info msg="StopContainer for \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\" with timeout 30 (s)" Jan 29 12:00:43.008663 containerd[1439]: time="2025-01-29T12:00:43.007883799Z" level=info msg="Stop container \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\" with signal terminated" Jan 29 12:00:43.017874 containerd[1439]: time="2025-01-29T12:00:43.015570218Z" level=info msg="shim disconnected" id=dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14 namespace=k8s.io Jan 29 12:00:43.017874 containerd[1439]: time="2025-01-29T12:00:43.015612779Z" level=warning msg="cleaning up after shim disconnected" id=dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14 namespace=k8s.io Jan 29 12:00:43.017874 containerd[1439]: time="2025-01-29T12:00:43.015621420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:00:43.025135 systemd[1]: cri-containerd-0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09.scope: Deactivated successfully. 
Jan 29 12:00:43.039207 containerd[1439]: time="2025-01-29T12:00:43.039157614Z" level=info msg="TearDown network for sandbox \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\" successfully" Jan 29 12:00:43.039207 containerd[1439]: time="2025-01-29T12:00:43.039196175Z" level=info msg="StopPodSandbox for \"dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14\" returns successfully" Jan 29 12:00:43.058256 kubelet[2463]: I0129 12:00:43.058215 2463 scope.go:117] "RemoveContainer" containerID="cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762" Jan 29 12:00:43.062577 containerd[1439]: time="2025-01-29T12:00:43.062010661Z" level=info msg="RemoveContainer for \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\"" Jan 29 12:00:43.066814 containerd[1439]: time="2025-01-29T12:00:43.065933174Z" level=info msg="shim disconnected" id=0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09 namespace=k8s.io Jan 29 12:00:43.067272 containerd[1439]: time="2025-01-29T12:00:43.066923012Z" level=warning msg="cleaning up after shim disconnected" id=0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09 namespace=k8s.io Jan 29 12:00:43.067272 containerd[1439]: time="2025-01-29T12:00:43.066947653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:00:43.085943 kubelet[2463]: E0129 12:00:43.084826 2463 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a18d7af-e059-4392-b34d-01b16c571209" containerName="install-cni" Jan 29 12:00:43.085943 kubelet[2463]: E0129 12:00:43.084860 2463 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a18d7af-e059-4392-b34d-01b16c571209" containerName="calico-node" Jan 29 12:00:43.085943 kubelet[2463]: E0129 12:00:43.084868 2463 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a18d7af-e059-4392-b34d-01b16c571209" containerName="flexvol-driver" Jan 29 12:00:43.085943 kubelet[2463]: I0129 12:00:43.084896 2463 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a18d7af-e059-4392-b34d-01b16c571209" containerName="calico-node" Jan 29 12:00:43.087767 containerd[1439]: time="2025-01-29T12:00:43.087304044Z" level=info msg="RemoveContainer for \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\" returns successfully" Jan 29 12:00:43.089570 kubelet[2463]: I0129 12:00:43.089538 2463 scope.go:117] "RemoveContainer" containerID="f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a" Jan 29 12:00:43.092459 containerd[1439]: time="2025-01-29T12:00:43.092424562Z" level=info msg="RemoveContainer for \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\"" Jan 29 12:00:43.093922 containerd[1439]: time="2025-01-29T12:00:43.093871659Z" level=info msg="StopContainer for \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\" returns successfully" Jan 29 12:00:43.094247 containerd[1439]: time="2025-01-29T12:00:43.094215832Z" level=info msg="StopPodSandbox for \"21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6\"" Jan 29 12:00:43.094290 containerd[1439]: time="2025-01-29T12:00:43.094249833Z" level=info msg="Container to stop \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:00:43.096986 systemd[1]: Created slice kubepods-besteffort-pod201e63ab_c986_42fe_a2be_44191c5d4a5b.slice - libcontainer container kubepods-besteffort-pod201e63ab_c986_42fe_a2be_44191c5d4a5b.slice. 
Jan 29 12:00:43.098639 containerd[1439]: time="2025-01-29T12:00:43.098234988Z" level=info msg="RemoveContainer for \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\" returns successfully" Jan 29 12:00:43.099223 kubelet[2463]: I0129 12:00:43.098950 2463 scope.go:117] "RemoveContainer" containerID="332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064" Jan 29 12:00:43.100224 containerd[1439]: time="2025-01-29T12:00:43.100200585Z" level=info msg="RemoveContainer for \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\"" Jan 29 12:00:43.106642 kubelet[2463]: I0129 12:00:43.106150 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-xtables-lock\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106642 kubelet[2463]: I0129 12:00:43.106203 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a18d7af-e059-4392-b34d-01b16c571209-tigera-ca-bundle\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106642 kubelet[2463]: I0129 12:00:43.106248 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0a18d7af-e059-4392-b34d-01b16c571209-node-certs\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106642 kubelet[2463]: I0129 12:00:43.106282 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n6n2\" (UniqueName: \"kubernetes.io/projected/0a18d7af-e059-4392-b34d-01b16c571209-kube-api-access-2n6n2\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106642 kubelet[2463]: I0129 12:00:43.106302 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-policysync\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106642 kubelet[2463]: I0129 12:00:43.106318 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-net-dir\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106853 kubelet[2463]: I0129 12:00:43.106335 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-var-run-calico\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106853 kubelet[2463]: I0129 12:00:43.106357 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-flexvol-driver-host\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106853 kubelet[2463]: I0129 12:00:43.106378 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-bin-dir\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106853 kubelet[2463]: I0129 12:00:43.106394 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-lib-modules\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106853 kubelet[2463]: I0129 12:00:43.106409 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-log-dir\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106853 kubelet[2463]: I0129 12:00:43.106431 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-var-lib-calico\") pod \"0a18d7af-e059-4392-b34d-01b16c571209\" (UID: \"0a18d7af-e059-4392-b34d-01b16c571209\") " Jan 29 12:00:43.106985 kubelet[2463]: I0129 12:00:43.106473 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/201e63ab-c986-42fe-a2be-44191c5d4a5b-tigera-ca-bundle\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.106985 kubelet[2463]: I0129 12:00:43.106495 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/201e63ab-c986-42fe-a2be-44191c5d4a5b-cni-net-dir\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.106985 kubelet[2463]: I0129 12:00:43.106522 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/201e63ab-c986-42fe-a2be-44191c5d4a5b-lib-modules\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.106985 kubelet[2463]: I0129 12:00:43.106537 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/201e63ab-c986-42fe-a2be-44191c5d4a5b-var-run-calico\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.106985 kubelet[2463]: I0129 12:00:43.106568 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/201e63ab-c986-42fe-a2be-44191c5d4a5b-xtables-lock\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.107087 kubelet[2463]: I0129 12:00:43.106588 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/201e63ab-c986-42fe-a2be-44191c5d4a5b-var-lib-calico\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.107087 
kubelet[2463]: I0129 12:00:43.106607 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/201e63ab-c986-42fe-a2be-44191c5d4a5b-cni-bin-dir\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.107406 kubelet[2463]: I0129 12:00:43.106626 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/201e63ab-c986-42fe-a2be-44191c5d4a5b-cni-log-dir\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.107406 kubelet[2463]: I0129 12:00:43.107176 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/201e63ab-c986-42fe-a2be-44191c5d4a5b-policysync\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.107406 kubelet[2463]: I0129 12:00:43.107196 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/201e63ab-c986-42fe-a2be-44191c5d4a5b-node-certs\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.107406 kubelet[2463]: I0129 12:00:43.107228 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqmx\" (UniqueName: \"kubernetes.io/projected/201e63ab-c986-42fe-a2be-44191c5d4a5b-kube-api-access-rfqmx\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.107406 kubelet[2463]: I0129 12:00:43.107250 2463 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/201e63ab-c986-42fe-a2be-44191c5d4a5b-flexvol-driver-host\") pod \"calico-node-plchl\" (UID: \"201e63ab-c986-42fe-a2be-44191c5d4a5b\") " pod="calico-system/calico-node-plchl" Jan 29 12:00:43.109353 containerd[1439]: time="2025-01-29T12:00:43.109307138Z" level=info msg="RemoveContainer for \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\" returns successfully" Jan 29 12:00:43.110603 kubelet[2463]: I0129 12:00:43.110212 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:00:43.110603 kubelet[2463]: I0129 12:00:43.110402 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:00:43.111945 kubelet[2463]: I0129 12:00:43.111911 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:00:43.112035 kubelet[2463]: I0129 12:00:43.111978 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:00:43.112035 kubelet[2463]: I0129 12:00:43.112003 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:00:43.112035 kubelet[2463]: I0129 12:00:43.112022 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:00:43.112121 kubelet[2463]: I0129 12:00:43.112045 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:00:43.112121 kubelet[2463]: I0129 12:00:43.112064 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:00:43.112619 kubelet[2463]: I0129 12:00:43.112594 2463 scope.go:117] "RemoveContainer" containerID="cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762" Jan 29 12:00:43.112734 kubelet[2463]: I0129 12:00:43.112703 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-policysync" (OuterVolumeSpecName: "policysync") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:00:43.113283 containerd[1439]: time="2025-01-29T12:00:43.113204330Z" level=error msg="ContainerStatus for \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\": not found" Jan 29 12:00:43.114455 kubelet[2463]: I0129 12:00:43.114415 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a18d7af-e059-4392-b34d-01b16c571209-kube-api-access-2n6n2" (OuterVolumeSpecName: "kube-api-access-2n6n2") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "kube-api-access-2n6n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:00:43.114798 systemd[1]: cri-containerd-21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6.scope: Deactivated successfully. Jan 29 12:00:43.115769 kubelet[2463]: I0129 12:00:43.115712 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a18d7af-e059-4392-b34d-01b16c571209-node-certs" (OuterVolumeSpecName: "node-certs") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:00:43.115894 kubelet[2463]: I0129 12:00:43.115861 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a18d7af-e059-4392-b34d-01b16c571209-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "0a18d7af-e059-4392-b34d-01b16c571209" (UID: "0a18d7af-e059-4392-b34d-01b16c571209"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:00:43.120225 kubelet[2463]: E0129 12:00:43.120190 2463 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\": not found" containerID="cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762" Jan 29 12:00:43.120400 kubelet[2463]: I0129 12:00:43.120232 2463 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762"} err="failed to get container status \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\": rpc error: code = NotFound desc = an error occurred when try to find container \"cea4fbb5855a5c6cb0c4fb925121770b954d0dfe227d66ddb9b3387ecb3e7762\": not found" Jan 29 12:00:43.120400 kubelet[2463]: I0129 12:00:43.120257 2463 scope.go:117] "RemoveContainer" containerID="f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a" Jan 29 12:00:43.126701 containerd[1439]: time="2025-01-29T12:00:43.126307398Z" level=error msg="ContainerStatus for \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\": not found" Jan 29 12:00:43.126816 kubelet[2463]: E0129 12:00:43.126630 2463 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\": not found" containerID="f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a" Jan 29 12:00:43.126816 kubelet[2463]: I0129 12:00:43.126654 2463 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a"} err="failed to get container status \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6580e7fb7e5077e60de779e12ca5bb1feaa579db6e54652b2c3157f46e6774a\": not found" Jan 29 12:00:43.126816 kubelet[2463]: I0129 12:00:43.126711 2463 scope.go:117] "RemoveContainer" containerID="332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064" Jan 29 12:00:43.128610 containerd[1439]: time="2025-01-29T12:00:43.126914782Z" level=error msg="ContainerStatus for \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\": not found" Jan 29 12:00:43.128694 kubelet[2463]: E0129 12:00:43.127654 2463 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\": not found" containerID="332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064" Jan 29 12:00:43.128694 kubelet[2463]: I0129 12:00:43.127679 2463 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064"} err="failed to get container status 
\"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\": rpc error: code = NotFound desc = an error occurred when try to find container \"332000ed7ca8a43b977af8673afbd73378598b759c6df410417196b04cb02064\": not found" Jan 29 12:00:43.147788 containerd[1439]: time="2025-01-29T12:00:43.147744751Z" level=info msg="shim disconnected" id=21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6 namespace=k8s.io Jan 29 12:00:43.147902 containerd[1439]: time="2025-01-29T12:00:43.147885596Z" level=warning msg="cleaning up after shim disconnected" id=21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6 namespace=k8s.io Jan 29 12:00:43.147958 containerd[1439]: time="2025-01-29T12:00:43.147945279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:00:43.171376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09-rootfs.mount: Deactivated successfully. Jan 29 12:00:43.171745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6-rootfs.mount: Deactivated successfully. Jan 29 12:00:43.171805 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6-shm.mount: Deactivated successfully. Jan 29 12:00:43.171860 systemd[1]: var-lib-kubelet-pods-0a18d7af\x2de059\x2d4392\x2db34d\x2d01b16c571209-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 29 12:00:43.171922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14-rootfs.mount: Deactivated successfully. Jan 29 12:00:43.171967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc45be87652e7198b679ce4405ee91d992e237e61e0652f092e56d7fb1d88e14-shm.mount: Deactivated successfully. Jan 29 12:00:43.172015 systemd[1]: var-lib-kubelet-pods-0a18d7af\x2de059\x2d4392\x2db34d\x2d01b16c571209-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2n6n2.mount: Deactivated successfully. Jan 29 12:00:43.172064 systemd[1]: var-lib-kubelet-pods-0a18d7af\x2de059\x2d4392\x2db34d\x2d01b16c571209-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jan 29 12:00:43.208815 kubelet[2463]: I0129 12:00:43.207644 2463 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.208815 kubelet[2463]: I0129 12:00:43.207674 2463 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.208815 kubelet[2463]: I0129 12:00:43.207683 2463 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a18d7af-e059-4392-b34d-01b16c571209-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.208815 kubelet[2463]: I0129 12:00:43.207691 2463 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0a18d7af-e059-4392-b34d-01b16c571209-node-certs\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.208815 kubelet[2463]: I0129 12:00:43.207700 2463 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-policysync\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.208815 kubelet[2463]: I0129 12:00:43.207710 2463 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.208815 kubelet[2463]: I0129 12:00:43.207725 2463 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.208815 kubelet[2463]: I0129 12:00:43.207736 2463 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.209114 kubelet[2463]: I0129 12:00:43.207744 2463 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.209114 kubelet[2463]: I0129 12:00:43.207752 2463 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2n6n2\" (UniqueName: \"kubernetes.io/projected/0a18d7af-e059-4392-b34d-01b16c571209-kube-api-access-2n6n2\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.209114 kubelet[2463]: I0129 12:00:43.207760 2463 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.209114 kubelet[2463]: I0129 12:00:43.207767 2463 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0a18d7af-e059-4392-b34d-01b16c571209-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.218293 systemd-networkd[1379]: cali6f1a0ae0c69: Link DOWN Jan 29 12:00:43.218299 systemd-networkd[1379]: cali6f1a0ae0c69: Lost carrier Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.216 [INFO][5767] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.216 [INFO][5767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" iface="eth0" netns="/var/run/netns/cni-e456b9db-279e-6e6f-1e27-11ebb7a95130" Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.217 [INFO][5767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" iface="eth0" netns="/var/run/netns/cni-e456b9db-279e-6e6f-1e27-11ebb7a95130" Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.232 [INFO][5767] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" after=15.845536ms iface="eth0" netns="/var/run/netns/cni-e456b9db-279e-6e6f-1e27-11ebb7a95130" Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.232 [INFO][5767] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.233 [INFO][5767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.253 [INFO][5779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" HandleID="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.253 [INFO][5779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.253 [INFO][5779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.283 [INFO][5779] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" HandleID="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.283 [INFO][5779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" HandleID="k8s-pod-network.21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Workload="localhost-k8s-calico--kube--controllers--59fdc4b9d--7lzvk-eth0" Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.285 [INFO][5779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:00:43.289203 containerd[1439]: 2025-01-29 12:00:43.287 [INFO][5767] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6" Jan 29 12:00:43.290700 containerd[1439]: time="2025-01-29T12:00:43.289406813Z" level=info msg="TearDown network for sandbox \"21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6\" successfully" Jan 29 12:00:43.290700 containerd[1439]: time="2025-01-29T12:00:43.289442414Z" level=info msg="StopPodSandbox for \"21a431d33e3b2271d54e2acb22aaca6be13df7db9a19b98ced3055ee409a7cd6\" returns successfully" Jan 29 12:00:43.291515 systemd[1]: run-netns-cni\x2de456b9db\x2d279e\x2d6e6f\x2d1e27\x2d11ebb7a95130.mount: Deactivated successfully. Jan 29 12:00:43.308968 kubelet[2463]: I0129 12:00:43.308681 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a466b9c-61da-4d18-9a7f-3570019f9cfb-tigera-ca-bundle\") pod \"5a466b9c-61da-4d18-9a7f-3570019f9cfb\" (UID: \"5a466b9c-61da-4d18-9a7f-3570019f9cfb\") " Jan 29 12:00:43.308968 kubelet[2463]: I0129 12:00:43.308732 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7hmf\" (UniqueName: \"kubernetes.io/projected/5a466b9c-61da-4d18-9a7f-3570019f9cfb-kube-api-access-x7hmf\") pod \"5a466b9c-61da-4d18-9a7f-3570019f9cfb\" (UID: \"5a466b9c-61da-4d18-9a7f-3570019f9cfb\") " Jan 29 12:00:43.311725 kubelet[2463]: I0129 12:00:43.311687 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a466b9c-61da-4d18-9a7f-3570019f9cfb-kube-api-access-x7hmf" (OuterVolumeSpecName: "kube-api-access-x7hmf") pod "5a466b9c-61da-4d18-9a7f-3570019f9cfb" (UID: "5a466b9c-61da-4d18-9a7f-3570019f9cfb"). InnerVolumeSpecName "kube-api-access-x7hmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:00:43.312717 kubelet[2463]: I0129 12:00:43.312689 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a466b9c-61da-4d18-9a7f-3570019f9cfb-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5a466b9c-61da-4d18-9a7f-3570019f9cfb" (UID: "5a466b9c-61da-4d18-9a7f-3570019f9cfb"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:00:43.312831 systemd[1]: var-lib-kubelet-pods-5a466b9c\x2d61da\x2d4d18\x2d9a7f\x2d3570019f9cfb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx7hmf.mount: Deactivated successfully. Jan 29 12:00:43.365257 systemd[1]: Removed slice kubepods-besteffort-pod0a18d7af_e059_4392_b34d_01b16c571209.slice - libcontainer container kubepods-besteffort-pod0a18d7af_e059_4392_b34d_01b16c571209.slice. Jan 29 12:00:43.365346 systemd[1]: kubepods-besteffort-pod0a18d7af_e059_4392_b34d_01b16c571209.slice: Consumed 2.289s CPU time. 
Jan 29 12:00:43.402761 kubelet[2463]: E0129 12:00:43.402727 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:43.403381 containerd[1439]: time="2025-01-29T12:00:43.403334438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-plchl,Uid:201e63ab-c986-42fe-a2be-44191c5d4a5b,Namespace:calico-system,Attempt:0,}" Jan 29 12:00:43.409814 kubelet[2463]: I0129 12:00:43.409749 2463 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a466b9c-61da-4d18-9a7f-3570019f9cfb-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.409814 kubelet[2463]: I0129 12:00:43.409784 2463 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x7hmf\" (UniqueName: \"kubernetes.io/projected/5a466b9c-61da-4d18-9a7f-3570019f9cfb-kube-api-access-x7hmf\") on node \"localhost\" DevicePath \"\"" Jan 29 12:00:43.420270 containerd[1439]: time="2025-01-29T12:00:43.420062807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:00:43.420270 containerd[1439]: time="2025-01-29T12:00:43.420125130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:00:43.420270 containerd[1439]: time="2025-01-29T12:00:43.420151011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:43.420270 containerd[1439]: time="2025-01-29T12:00:43.420221053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:43.438712 systemd[1]: Started cri-containerd-af9a8095df26244d28f97d948378afc58f94ac437b9ec8c3996de1b5e46412ef.scope - libcontainer container af9a8095df26244d28f97d948378afc58f94ac437b9ec8c3996de1b5e46412ef. 
Jan 29 12:00:43.457430 containerd[1439]: time="2025-01-29T12:00:43.457396057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-plchl,Uid:201e63ab-c986-42fe-a2be-44191c5d4a5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"af9a8095df26244d28f97d948378afc58f94ac437b9ec8c3996de1b5e46412ef\"" Jan 29 12:00:43.458106 kubelet[2463]: E0129 12:00:43.458088 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:43.460074 containerd[1439]: time="2025-01-29T12:00:43.460044240Z" level=info msg="CreateContainer within sandbox \"af9a8095df26244d28f97d948378afc58f94ac437b9ec8c3996de1b5e46412ef\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:00:43.479883 containerd[1439]: time="2025-01-29T12:00:43.479689563Z" level=info msg="CreateContainer within sandbox \"af9a8095df26244d28f97d948378afc58f94ac437b9ec8c3996de1b5e46412ef\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68248ba9ddaae9e23cab9758b2608c2b7c771bdbb3113c827185bab501509a2b\"" Jan 29 12:00:43.480144 containerd[1439]: time="2025-01-29T12:00:43.480105219Z" level=info msg="StartContainer for \"68248ba9ddaae9e23cab9758b2608c2b7c771bdbb3113c827185bab501509a2b\"" Jan 29 12:00:43.508697 systemd[1]: Started cri-containerd-68248ba9ddaae9e23cab9758b2608c2b7c771bdbb3113c827185bab501509a2b.scope - libcontainer container 68248ba9ddaae9e23cab9758b2608c2b7c771bdbb3113c827185bab501509a2b. Jan 29 12:00:43.532451 containerd[1439]: time="2025-01-29T12:00:43.530692424Z" level=info msg="StartContainer for \"68248ba9ddaae9e23cab9758b2608c2b7c771bdbb3113c827185bab501509a2b\" returns successfully" Jan 29 12:00:43.572888 systemd[1]: cri-containerd-68248ba9ddaae9e23cab9758b2608c2b7c771bdbb3113c827185bab501509a2b.scope: Deactivated successfully. Jan 29 12:00:43.607804 containerd[1439]: time="2025-01-29T12:00:43.607739256Z" level=info msg="shim disconnected" id=68248ba9ddaae9e23cab9758b2608c2b7c771bdbb3113c827185bab501509a2b namespace=k8s.io Jan 29 12:00:43.607804 containerd[1439]: time="2025-01-29T12:00:43.607790978Z" level=warning msg="cleaning up after shim disconnected" id=68248ba9ddaae9e23cab9758b2608c2b7c771bdbb3113c827185bab501509a2b namespace=k8s.io Jan 29 12:00:43.607804 containerd[1439]: time="2025-01-29T12:00:43.607801899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:00:44.061759 kubelet[2463]: E0129 12:00:44.061542 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:44.065609 containerd[1439]: time="2025-01-29T12:00:44.065569553Z" level=info msg="CreateContainer within sandbox \"af9a8095df26244d28f97d948378afc58f94ac437b9ec8c3996de1b5e46412ef\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:00:44.066263 kubelet[2463]: I0129 12:00:44.066087 2463 scope.go:117] "RemoveContainer" containerID="0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09" Jan 29 12:00:44.067899 containerd[1439]: time="2025-01-29T12:00:44.067843320Z" level=info msg="RemoveContainer for \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\"" Jan 29 12:00:44.074105 systemd[1]: Removed slice kubepods-besteffort-pod5a466b9c_61da_4d18_9a7f_3570019f9cfb.slice - libcontainer container kubepods-besteffort-pod5a466b9c_61da_4d18_9a7f_3570019f9cfb.slice. 
Jan 29 12:00:44.075251 containerd[1439]: time="2025-01-29T12:00:44.075208441Z" level=info msg="RemoveContainer for \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\" returns successfully" Jan 29 12:00:44.075511 kubelet[2463]: I0129 12:00:44.075408 2463 scope.go:117] "RemoveContainer" containerID="0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09" Jan 29 12:00:44.075771 containerd[1439]: time="2025-01-29T12:00:44.075729661Z" level=error msg="ContainerStatus for \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\": not found" Jan 29 12:00:44.075894 kubelet[2463]: E0129 12:00:44.075875 2463 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\": not found" containerID="0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09" Jan 29 12:00:44.075932 kubelet[2463]: I0129 12:00:44.075904 2463 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09"} err="failed to get container status \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d10d55f6e6a65752e81fe206d6f50008a2e85a2d5f2313f8539ed731dc43b09\": not found" Jan 29 12:00:44.082792 containerd[1439]: time="2025-01-29T12:00:44.082743849Z" level=info msg="CreateContainer within sandbox \"af9a8095df26244d28f97d948378afc58f94ac437b9ec8c3996de1b5e46412ef\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ae859d69c006e06b8a3ae41f5d1f50852932e8b69d16e3d70576ae09f8abd87d\"" Jan 29 12:00:44.083780 containerd[1439]: time="2025-01-29T12:00:44.083753007Z" level=info msg="StartContainer for \"ae859d69c006e06b8a3ae41f5d1f50852932e8b69d16e3d70576ae09f8abd87d\"" Jan 29 12:00:44.118838 systemd[1]: Started cri-containerd-ae859d69c006e06b8a3ae41f5d1f50852932e8b69d16e3d70576ae09f8abd87d.scope - libcontainer container ae859d69c006e06b8a3ae41f5d1f50852932e8b69d16e3d70576ae09f8abd87d. Jan 29 12:00:44.152502 containerd[1439]: time="2025-01-29T12:00:44.152451349Z" level=info msg="StartContainer for \"ae859d69c006e06b8a3ae41f5d1f50852932e8b69d16e3d70576ae09f8abd87d\" returns successfully" Jan 29 12:00:44.173768 systemd[1]: var-lib-kubelet-pods-5a466b9c\x2d61da\x2d4d18\x2d9a7f\x2d3570019f9cfb-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 29 12:00:44.592195 systemd[1]: cri-containerd-ae859d69c006e06b8a3ae41f5d1f50852932e8b69d16e3d70576ae09f8abd87d.scope: Deactivated successfully. Jan 29 12:00:44.609525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae859d69c006e06b8a3ae41f5d1f50852932e8b69d16e3d70576ae09f8abd87d-rootfs.mount: Deactivated successfully. 
Jan 29 12:00:44.700666 containerd[1439]: time="2025-01-29T12:00:44.700539906Z" level=info msg="shim disconnected" id=ae859d69c006e06b8a3ae41f5d1f50852932e8b69d16e3d70576ae09f8abd87d namespace=k8s.io Jan 29 12:00:44.700666 containerd[1439]: time="2025-01-29T12:00:44.700625589Z" level=warning msg="cleaning up after shim disconnected" id=ae859d69c006e06b8a3ae41f5d1f50852932e8b69d16e3d70576ae09f8abd87d namespace=k8s.io Jan 29 12:00:44.700666 containerd[1439]: time="2025-01-29T12:00:44.700641150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:00:44.794483 kubelet[2463]: I0129 12:00:44.794435 2463 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a18d7af-e059-4392-b34d-01b16c571209" path="/var/lib/kubelet/pods/0a18d7af-e059-4392-b34d-01b16c571209/volumes" Jan 29 12:00:44.794963 kubelet[2463]: I0129 12:00:44.794942 2463 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a466b9c-61da-4d18-9a7f-3570019f9cfb" path="/var/lib/kubelet/pods/5a466b9c-61da-4d18-9a7f-3570019f9cfb/volumes" Jan 29 12:00:45.072218 kubelet[2463]: E0129 12:00:45.072118 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:45.097225 containerd[1439]: time="2025-01-29T12:00:45.097161819Z" level=info msg="CreateContainer within sandbox \"af9a8095df26244d28f97d948378afc58f94ac437b9ec8c3996de1b5e46412ef\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:00:45.110759 containerd[1439]: time="2025-01-29T12:00:45.110584482Z" level=info msg="CreateContainer within sandbox \"af9a8095df26244d28f97d948378afc58f94ac437b9ec8c3996de1b5e46412ef\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"48ade79c7656a3fc5f39e9f1bd2c12a0fc26726153a979a1b8287358950666ff\"" Jan 29 12:00:45.112717 containerd[1439]: time="2025-01-29T12:00:45.111055460Z" level=info msg="StartContainer for \"48ade79c7656a3fc5f39e9f1bd2c12a0fc26726153a979a1b8287358950666ff\"" Jan 29 12:00:45.145802 systemd[1]: Started cri-containerd-48ade79c7656a3fc5f39e9f1bd2c12a0fc26726153a979a1b8287358950666ff.scope - libcontainer container 48ade79c7656a3fc5f39e9f1bd2c12a0fc26726153a979a1b8287358950666ff. Jan 29 12:00:45.169279 containerd[1439]: time="2025-01-29T12:00:45.169234882Z" level=info msg="StartContainer for \"48ade79c7656a3fc5f39e9f1bd2c12a0fc26726153a979a1b8287358950666ff\" returns successfully" Jan 29 12:00:46.075546 kubelet[2463]: E0129 12:00:46.075354 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:00:46.723874 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:51730.service - OpenSSH per-connection server daemon (10.0.0.1:51730). Jan 29 12:00:46.725203 systemd[1]: cri-containerd-b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda.scope: Deactivated successfully. Jan 29 12:00:46.748975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda-rootfs.mount: Deactivated successfully. 
Jan 29 12:00:46.749462 containerd[1439]: time="2025-01-29T12:00:46.749330156Z" level=info msg="shim disconnected" id=b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda namespace=k8s.io
Jan 29 12:00:46.749462 containerd[1439]: time="2025-01-29T12:00:46.749389718Z" level=warning msg="cleaning up after shim disconnected" id=b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda namespace=k8s.io
Jan 29 12:00:46.749462 containerd[1439]: time="2025-01-29T12:00:46.749398359Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:00:46.762345 containerd[1439]: time="2025-01-29T12:00:46.762273033Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:00:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 12:00:46.775933 sshd[6214]: Accepted publickey for core from 10.0.0.1 port 51730 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:00:46.777598 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:00:46.783646 systemd-logind[1420]: New session 23 of user core.
Jan 29 12:00:46.789728 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 12:00:46.792226 kubelet[2463]: E0129 12:00:46.792192 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:00:46.796539 containerd[1439]: time="2025-01-29T12:00:46.796498736Z" level=info msg="StopContainer for \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\" returns successfully"
Jan 29 12:00:46.797326 containerd[1439]: time="2025-01-29T12:00:46.797250803Z" level=info msg="StopPodSandbox for \"05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3\""
Jan 29 12:00:46.797389 containerd[1439]: time="2025-01-29T12:00:46.797362287Z" level=info msg="Container to stop \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:00:46.800421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3-shm.mount: Deactivated successfully.
Jan 29 12:00:46.805543 systemd[1]: cri-containerd-05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3.scope: Deactivated successfully.
Jan 29 12:00:46.830071 containerd[1439]: time="2025-01-29T12:00:46.829835445Z" level=info msg="shim disconnected" id=05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3 namespace=k8s.io
Jan 29 12:00:46.830071 containerd[1439]: time="2025-01-29T12:00:46.829886247Z" level=warning msg="cleaning up after shim disconnected" id=05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3 namespace=k8s.io
Jan 29 12:00:46.830071 containerd[1439]: time="2025-01-29T12:00:46.829896447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:00:46.831544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3-rootfs.mount: Deactivated successfully.
Jan 29 12:00:46.847738 containerd[1439]: time="2025-01-29T12:00:46.847690783Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:00:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 12:00:46.855766 containerd[1439]: time="2025-01-29T12:00:46.855714719Z" level=info msg="TearDown network for sandbox \"05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3\" successfully"
Jan 29 12:00:46.855766 containerd[1439]: time="2025-01-29T12:00:46.855748360Z" level=info msg="StopPodSandbox for \"05d48287547560f5a067dbcf49d7593f1e0f2e91ae6fbecffd9b63aa1a50fac3\" returns successfully"
Jan 29 12:00:46.883381 kubelet[2463]: I0129 12:00:46.883198 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-plchl" podStartSLOduration=3.883181732 podStartE2EDuration="3.883181732s" podCreationTimestamp="2025-01-29 12:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:00:46.092246846 +0000 UTC m=+79.380216155" watchObservedRunningTime="2025-01-29 12:00:46.883181732 +0000 UTC m=+80.171151041"
Jan 29 12:00:46.935174 kubelet[2463]: I0129 12:00:46.935133 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xplng\" (UniqueName: \"kubernetes.io/projected/afb8489e-728e-4846-bbac-a9a60ea63ce4-kube-api-access-xplng\") pod \"afb8489e-728e-4846-bbac-a9a60ea63ce4\" (UID: \"afb8489e-728e-4846-bbac-a9a60ea63ce4\") "
Jan 29 12:00:46.935351 kubelet[2463]: I0129 12:00:46.935187 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/afb8489e-728e-4846-bbac-a9a60ea63ce4-typha-certs\") pod \"afb8489e-728e-4846-bbac-a9a60ea63ce4\" (UID: \"afb8489e-728e-4846-bbac-a9a60ea63ce4\") "
Jan 29 12:00:46.935351 kubelet[2463]: I0129 12:00:46.935209 2463 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afb8489e-728e-4846-bbac-a9a60ea63ce4-tigera-ca-bundle\") pod \"afb8489e-728e-4846-bbac-a9a60ea63ce4\" (UID: \"afb8489e-728e-4846-bbac-a9a60ea63ce4\") "
Jan 29 12:00:46.938918 kubelet[2463]: I0129 12:00:46.938726 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb8489e-728e-4846-bbac-a9a60ea63ce4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "afb8489e-728e-4846-bbac-a9a60ea63ce4" (UID: "afb8489e-728e-4846-bbac-a9a60ea63ce4"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:00:46.938918 kubelet[2463]: I0129 12:00:46.938838 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb8489e-728e-4846-bbac-a9a60ea63ce4-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "afb8489e-728e-4846-bbac-a9a60ea63ce4" (UID: "afb8489e-728e-4846-bbac-a9a60ea63ce4"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:00:46.941302 kubelet[2463]: I0129 12:00:46.939822 2463 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb8489e-728e-4846-bbac-a9a60ea63ce4-kube-api-access-xplng" (OuterVolumeSpecName: "kube-api-access-xplng") pod "afb8489e-728e-4846-bbac-a9a60ea63ce4" (UID: "afb8489e-728e-4846-bbac-a9a60ea63ce4"). InnerVolumeSpecName "kube-api-access-xplng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:00:46.940570 systemd[1]: var-lib-kubelet-pods-afb8489e\x2d728e\x2d4846\x2dbbac\x2da9a60ea63ce4-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Jan 29 12:00:46.944245 systemd[1]: var-lib-kubelet-pods-afb8489e\x2d728e\x2d4846\x2dbbac\x2da9a60ea63ce4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Jan 29 12:00:46.944483 systemd[1]: var-lib-kubelet-pods-afb8489e\x2d728e\x2d4846\x2dbbac\x2da9a60ea63ce4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxplng.mount: Deactivated successfully.
Jan 29 12:00:47.035941 kubelet[2463]: I0129 12:00:47.035830 2463 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xplng\" (UniqueName: \"kubernetes.io/projected/afb8489e-728e-4846-bbac-a9a60ea63ce4-kube-api-access-xplng\") on node \"localhost\" DevicePath \"\""
Jan 29 12:00:47.035941 kubelet[2463]: I0129 12:00:47.035864 2463 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/afb8489e-728e-4846-bbac-a9a60ea63ce4-typha-certs\") on node \"localhost\" DevicePath \"\""
Jan 29 12:00:47.035941 kubelet[2463]: I0129 12:00:47.035876 2463 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afb8489e-728e-4846-bbac-a9a60ea63ce4-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jan 29 12:00:47.074845 sshd[6214]: pam_unix(sshd:session): session closed for user core
Jan 29 12:00:47.077648 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 12:00:47.079038 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:51730.service: Deactivated successfully.
Jan 29 12:00:47.080006 kubelet[2463]: I0129 12:00:47.079975 2463 scope.go:117] "RemoveContainer" containerID="b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda"
Jan 29 12:00:47.082497 containerd[1439]: time="2025-01-29T12:00:47.082456630Z" level=info msg="RemoveContainer for \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\""
Jan 29 12:00:47.084846 kubelet[2463]: E0129 12:00:47.084820 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:00:47.085249 systemd-logind[1420]: Session 23 logged out. Waiting for processes to exit.
Jan 29 12:00:47.087936 systemd[1]: Removed slice kubepods-besteffort-podafb8489e_728e_4846_bbac_a9a60ea63ce4.slice - libcontainer container kubepods-besteffort-podafb8489e_728e_4846_bbac_a9a60ea63ce4.slice.
Jan 29 12:00:47.088811 systemd-logind[1420]: Removed session 23.
Jan 29 12:00:47.089342 containerd[1439]: time="2025-01-29T12:00:47.089268997Z" level=info msg="RemoveContainer for \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\" returns successfully"
Jan 29 12:00:47.090384 containerd[1439]: time="2025-01-29T12:00:47.089822657Z" level=error msg="ContainerStatus for \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\": not found"
Jan 29 12:00:47.090665 kubelet[2463]: I0129 12:00:47.089473 2463 scope.go:117] "RemoveContainer" containerID="b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda"
Jan 29 12:00:47.090665 kubelet[2463]: E0129 12:00:47.089969 2463 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\": not found" containerID="b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda"
Jan 29 12:00:47.090665 kubelet[2463]: I0129 12:00:47.089999 2463 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda"} err="failed to get container status \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4243e83840b2339b60928b1acec6685a961fcc5297ca2a4f1dc54c7b4c28bda\": not found"
Jan 29 12:00:48.798188 kubelet[2463]: I0129 12:00:48.797412 2463 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb8489e-728e-4846-bbac-a9a60ea63ce4" path="/var/lib/kubelet/pods/afb8489e-728e-4846-bbac-a9a60ea63ce4/volumes"
Jan 29 12:00:52.089638 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:51732.service - OpenSSH per-connection server daemon (10.0.0.1:51732).
Jan 29 12:00:52.128160 sshd[6350]: Accepted publickey for core from 10.0.0.1 port 51732 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:00:52.129435 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:00:52.134578 systemd-logind[1420]: New session 24 of user core.
Jan 29 12:00:52.141992 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 12:00:52.292285 sshd[6350]: pam_unix(sshd:session): session closed for user core
Jan 29 12:00:52.294755 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 12:00:52.296020 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:51732.service: Deactivated successfully.
Jan 29 12:00:52.297972 systemd-logind[1420]: Session 24 logged out. Waiting for processes to exit.
Jan 29 12:00:52.298842 systemd-logind[1420]: Removed session 24.
Jan 29 12:00:52.792645 kubelet[2463]: E0129 12:00:52.792427 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
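Editor's note: when working with a capture like this one, it is often useful to split each entry into its timestamp, process, PID, and message. The sketch below does that with a regular expression; the pattern is an assumption about the line layout seen here (it expects a "process[pid]:" prefix, so bare "kernel:" lines would need a separate case) and is not a journald API.

```go
// Minimal sketch: parse journal-style lines like the ones in this log into
// timestamp, process, PID, and message. The regexp reflects the layout seen
// here and is an assumption, not part of systemd or journald.
package main

import (
	"fmt"
	"regexp"
)

var lineRe = regexp.MustCompile(`^(\w{3} \d{2} [\d:.]+) (\S+?)\[(\d+)\]: (.*)$`)

func main() {
	line := `Jan 29 12:00:52.298842 systemd-logind[1420]: Removed session 24.`
	if m := lineRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s process=%s pid=%s msg=%q\n", m[1], m[2], m[3], m[4])
	}
}
```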