Jan 13 21:35:26.899170 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 21:35:26.899192 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 13 21:35:26.899201 kernel: KASLR enabled
Jan 13 21:35:26.899207 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:35:26.899213 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 13 21:35:26.899218 kernel: random: crng init done
Jan 13 21:35:26.899225 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:35:26.899231 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 13 21:35:26.899310 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 21:35:26.899326 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:35:26.899334 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:35:26.899340 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:35:26.899346 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:35:26.899352 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:35:26.899359 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:35:26.899368 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:35:26.899374 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:35:26.899381 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:35:26.899387 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 21:35:26.899393 kernel: NUMA: Failed to initialise from firmware
Jan 13 21:35:26.899400 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:35:26.899406 kernel: NUMA: NODE_DATA [mem 0xdc95b800-0xdc960fff]
Jan 13 21:35:26.899412 kernel: Zone ranges:
Jan 13 21:35:26.899419 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:35:26.899425 kernel: DMA32 empty
Jan 13 21:35:26.899432 kernel: Normal empty
Jan 13 21:35:26.899439 kernel: Movable zone start for each node
Jan 13 21:35:26.899445 kernel: Early memory node ranges
Jan 13 21:35:26.899451 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 21:35:26.899458 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 21:35:26.899464 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 21:35:26.899470 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 21:35:26.899476 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 21:35:26.899482 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 21:35:26.899489 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 21:35:26.899495 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:35:26.899501 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 21:35:26.899509 kernel: psci: probing for conduit method from ACPI.
Jan 13 21:35:26.899515 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 21:35:26.899522 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 21:35:26.899531 kernel: psci: Trusted OS migration not required
Jan 13 21:35:26.899537 kernel: psci: SMC Calling Convention v1.1
Jan 13 21:35:26.899544 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 21:35:26.899552 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 21:35:26.899559 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 21:35:26.899566 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 21:35:26.899572 kernel: Detected PIPT I-cache on CPU0
Jan 13 21:35:26.899579 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 21:35:26.899586 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 21:35:26.899592 kernel: CPU features: detected: Spectre-v4
Jan 13 21:35:26.899599 kernel: CPU features: detected: Spectre-BHB
Jan 13 21:35:26.899606 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 21:35:26.899612 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 21:35:26.899620 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 21:35:26.899627 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 21:35:26.899634 kernel: alternatives: applying boot alternatives
Jan 13 21:35:26.899641 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:35:26.899648 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:35:26.899655 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:35:26.899662 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:35:26.899669 kernel: Fallback order for Node 0: 0
Jan 13 21:35:26.899675 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 21:35:26.899682 kernel: Policy zone: DMA
Jan 13 21:35:26.899688 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:35:26.899697 kernel: software IO TLB: area num 4.
Jan 13 21:35:26.899703 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 21:35:26.899710 kernel: Memory: 2386544K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185744K reserved, 0K cma-reserved)
Jan 13 21:35:26.899717 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:35:26.899724 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:35:26.899731 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:35:26.899738 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:35:26.899744 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:35:26.899751 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:35:26.899758 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:35:26.899765 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:35:26.899771 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 21:35:26.899779 kernel: GICv3: 256 SPIs implemented
Jan 13 21:35:26.899786 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 21:35:26.899793 kernel: Root IRQ handler: gic_handle_irq
Jan 13 21:35:26.899799 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 21:35:26.899806 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 21:35:26.899813 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 21:35:26.899819 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 21:35:26.899826 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 21:35:26.899833 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 21:35:26.899840 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 21:35:26.899847 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:35:26.899855 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:35:26.899861 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 21:35:26.899868 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 21:35:26.899875 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 21:35:26.899882 kernel: arm-pv: using stolen time PV
Jan 13 21:35:26.899889 kernel: Console: colour dummy device 80x25
Jan 13 21:35:26.899896 kernel: ACPI: Core revision 20230628
Jan 13 21:35:26.899903 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 21:35:26.899910 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:35:26.899917 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:35:26.899925 kernel: landlock: Up and running.
Jan 13 21:35:26.899932 kernel: SELinux: Initializing.
Jan 13 21:35:26.899939 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:35:26.899945 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:35:26.899952 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:35:26.899959 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:35:26.899966 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:35:26.899973 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:35:26.899980 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 21:35:26.899988 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 21:35:26.899995 kernel: Remapping and enabling EFI services.
Jan 13 21:35:26.900002 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:35:26.900009 kernel: Detected PIPT I-cache on CPU1
Jan 13 21:35:26.900015 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 21:35:26.900023 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 21:35:26.900030 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:35:26.900037 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 21:35:26.900043 kernel: Detected PIPT I-cache on CPU2
Jan 13 21:35:26.900050 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 21:35:26.900059 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 21:35:26.900066 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:35:26.900077 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 21:35:26.900085 kernel: Detected PIPT I-cache on CPU3
Jan 13 21:35:26.900093 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 21:35:26.900100 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 21:35:26.900107 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:35:26.900114 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 21:35:26.900122 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:35:26.900130 kernel: SMP: Total of 4 processors activated.
Jan 13 21:35:26.900138 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 21:35:26.900145 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 21:35:26.900152 kernel: CPU features: detected: Common not Private translations
Jan 13 21:35:26.900160 kernel: CPU features: detected: CRC32 instructions
Jan 13 21:35:26.900167 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 21:35:26.900174 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 21:35:26.900181 kernel: CPU features: detected: LSE atomic instructions
Jan 13 21:35:26.900189 kernel: CPU features: detected: Privileged Access Never
Jan 13 21:35:26.900197 kernel: CPU features: detected: RAS Extension Support
Jan 13 21:35:26.900204 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 21:35:26.900211 kernel: CPU: All CPU(s) started at EL1
Jan 13 21:35:26.900218 kernel: alternatives: applying system-wide alternatives
Jan 13 21:35:26.900225 kernel: devtmpfs: initialized
Jan 13 21:35:26.900233 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:35:26.900247 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:35:26.900254 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:35:26.900263 kernel: SMBIOS 3.0.0 present.
Jan 13 21:35:26.900271 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 13 21:35:26.900278 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:35:26.900285 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 21:35:26.900293 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 21:35:26.900300 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 21:35:26.900307 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:35:26.900314 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jan 13 21:35:26.900448 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:35:26.900465 kernel: cpuidle: using governor menu
Jan 13 21:35:26.900572 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 21:35:26.900581 kernel: ASID allocator initialised with 32768 entries
Jan 13 21:35:26.900589 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:35:26.900596 kernel: Serial: AMBA PL011 UART driver
Jan 13 21:35:26.900603 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 21:35:26.900611 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 21:35:26.900618 kernel: Modules: 509040 pages in range for PLT usage
Jan 13 21:35:26.900625 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:35:26.900637 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:35:26.900645 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 21:35:26.900652 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 21:35:26.900659 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:35:26.900667 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:35:26.900674 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 21:35:26.900681 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 21:35:26.900688 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:35:26.900695 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:35:26.900704 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:35:26.900712 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:35:26.900719 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:35:26.900726 kernel: ACPI: Interpreter enabled
Jan 13 21:35:26.900733 kernel: ACPI: Using GIC for interrupt routing
Jan 13 21:35:26.900740 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 21:35:26.900748 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 21:35:26.900755 kernel: printk: console [ttyAMA0] enabled
Jan 13 21:35:26.900762 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:35:26.900906 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:35:26.900978 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:35:26.901042 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:35:26.901104 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 21:35:26.901167 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 21:35:26.901177 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 21:35:26.901184 kernel: PCI host bridge to bus 0000:00
Jan 13 21:35:26.901275 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 21:35:26.901347 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 21:35:26.901405 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 21:35:26.901462 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:35:26.901541 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 21:35:26.901616 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:35:26.901686 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 21:35:26.901751 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 21:35:26.901816 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:35:26.901880 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:35:26.901945 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 21:35:26.902010 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 21:35:26.902068 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 21:35:26.902125 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 21:35:26.902186 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 21:35:26.902195 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 21:35:26.902203 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 21:35:26.902210 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 21:35:26.902218 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 21:35:26.902225 kernel: iommu: Default domain type: Translated
Jan 13 21:35:26.902232 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 21:35:26.902249 kernel: efivars: Registered efivars operations
Jan 13 21:35:26.902259 kernel: vgaarb: loaded
Jan 13 21:35:26.902266 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 21:35:26.902273 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:35:26.902280 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:35:26.902288 kernel: pnp: PnP ACPI init
Jan 13 21:35:26.902373 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 21:35:26.902384 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 21:35:26.902391 kernel: NET: Registered PF_INET protocol family
Jan 13 21:35:26.902401 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:35:26.902409 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:35:26.902416 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:35:26.902424 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:35:26.902431 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:35:26.902438 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:35:26.902445 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:35:26.902453 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:35:26.902460 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:35:26.902469 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:35:26.902476 kernel: kvm [1]: HYP mode not available
Jan 13 21:35:26.902483 kernel: Initialise system trusted keyrings
Jan 13 21:35:26.902491 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:35:26.902498 kernel: Key type asymmetric registered
Jan 13 21:35:26.902505 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:35:26.902512 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 21:35:26.902519 kernel: io scheduler mq-deadline registered
Jan 13 21:35:26.902526 kernel: io scheduler kyber registered
Jan 13 21:35:26.902535 kernel: io scheduler bfq registered
Jan 13 21:35:26.902542 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 21:35:26.902549 kernel: ACPI: button: Power Button [PWRB]
Jan 13 21:35:26.902557 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 21:35:26.902624 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 21:35:26.902634 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:35:26.902641 kernel: thunder_xcv, ver 1.0
Jan 13 21:35:26.902649 kernel: thunder_bgx, ver 1.0
Jan 13 21:35:26.902656 kernel: nicpf, ver 1.0
Jan 13 21:35:26.902665 kernel: nicvf, ver 1.0
Jan 13 21:35:26.902739 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 21:35:26.902804 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:35:26 UTC (1736804126)
Jan 13 21:35:26.902814 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:35:26.902821 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 21:35:26.902829 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 21:35:26.902836 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 21:35:26.902843 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:35:26.902852 kernel: Segment Routing with IPv6
Jan 13 21:35:26.902859 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:35:26.902867 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:35:26.902874 kernel: Key type dns_resolver registered
Jan 13 21:35:26.902881 kernel: registered taskstats version 1
Jan 13 21:35:26.902888 kernel: Loading compiled-in X.509 certificates
Jan 13 21:35:26.902896 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 13 21:35:26.902903 kernel: Key type .fscrypt registered
Jan 13 21:35:26.902910 kernel: Key type fscrypt-provisioning registered
Jan 13 21:35:26.902919 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:35:26.902926 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:35:26.902933 kernel: ima: No architecture policies found
Jan 13 21:35:26.902940 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 21:35:26.902947 kernel: clk: Disabling unused clocks
Jan 13 21:35:26.902955 kernel: Freeing unused kernel memory: 39360K
Jan 13 21:35:26.902962 kernel: Run /init as init process
Jan 13 21:35:26.902969 kernel: with arguments:
Jan 13 21:35:26.902976 kernel: /init
Jan 13 21:35:26.902984 kernel: with environment:
Jan 13 21:35:26.902991 kernel: HOME=/
Jan 13 21:35:26.902998 kernel: TERM=linux
Jan 13 21:35:26.903005 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:35:26.903014 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:35:26.903023 systemd[1]: Detected virtualization kvm.
Jan 13 21:35:26.903031 systemd[1]: Detected architecture arm64.
Jan 13 21:35:26.903039 systemd[1]: Running in initrd.
Jan 13 21:35:26.903048 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:35:26.903055 systemd[1]: Hostname set to .
Jan 13 21:35:26.903063 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:35:26.903071 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:35:26.903079 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:35:26.903087 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:35:26.903095 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:35:26.903103 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:35:26.903112 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:35:26.903120 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:35:26.903130 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:35:26.903138 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:35:26.903146 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:35:26.903154 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:35:26.903163 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:35:26.903171 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:35:26.903179 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:35:26.903187 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:35:26.903194 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:35:26.903202 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:35:26.903210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:35:26.903218 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:35:26.903226 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:35:26.903243 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:35:26.903252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:35:26.903259 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:35:26.903267 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:35:26.903275 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:35:26.903283 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:35:26.903291 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:35:26.903298 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:35:26.903306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:35:26.903316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:35:26.903329 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:35:26.903337 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:35:26.903345 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:35:26.903353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:35:26.903382 systemd-journald[238]: Collecting audit messages is disabled.
Jan 13 21:35:26.903401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:35:26.903409 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:35:26.903419 systemd-journald[238]: Journal started
Jan 13 21:35:26.903437 systemd-journald[238]: Runtime Journal (/run/log/journal/38d271b40c6c44fa975aa3b6387c3eab) is 5.9M, max 47.3M, 41.4M free.
Jan 13 21:35:26.894546 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 21:35:26.907784 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:35:26.908145 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:35:26.911559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:35:26.914297 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:35:26.914379 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 21:35:26.917568 kernel: Bridge firewalling registered
Jan 13 21:35:26.915414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:35:26.916661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:35:26.922959 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:35:26.925456 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:35:26.935427 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:35:26.936875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:35:26.939848 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:35:26.944173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:35:26.946648 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:35:26.953998 dracut-cmdline[272]: dracut-dracut-053
Jan 13 21:35:26.956537 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:35:26.972612 systemd-resolved[278]: Positive Trust Anchors:
Jan 13 21:35:26.972629 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:35:26.972661 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:35:26.977207 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jan 13 21:35:26.978505 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:35:26.981578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:35:27.027267 kernel: SCSI subsystem initialized
Jan 13 21:35:27.032255 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:35:27.039260 kernel: iscsi: registered transport (tcp)
Jan 13 21:35:27.054399 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:35:27.054455 kernel: QLogic iSCSI HBA Driver
Jan 13 21:35:27.096485 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:35:27.103372 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:35:27.119719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:35:27.119762 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:35:27.121313 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:35:27.170259 kernel: raid6: neonx8 gen() 15766 MB/s
Jan 13 21:35:27.187256 kernel: raid6: neonx4 gen() 15666 MB/s
Jan 13 21:35:27.204260 kernel: raid6: neonx2 gen() 13205 MB/s
Jan 13 21:35:27.221267 kernel: raid6: neonx1 gen() 10497 MB/s
Jan 13 21:35:27.238309 kernel: raid6: int64x8 gen() 6956 MB/s
Jan 13 21:35:27.255264 kernel: raid6: int64x4 gen() 7338 MB/s
Jan 13 21:35:27.272254 kernel: raid6: int64x2 gen() 6124 MB/s
Jan 13 21:35:27.289345 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 13 21:35:27.289377 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Jan 13 21:35:27.307312 kernel: raid6: .... xor() 11937 MB/s, rmw enabled
Jan 13 21:35:27.307331 kernel: raid6: using neon recovery algorithm
Jan 13 21:35:27.312721 kernel: xor: measuring software checksum speed
Jan 13 21:35:27.312737 kernel: 8regs : 19769 MB/sec
Jan 13 21:35:27.313395 kernel: 32regs : 19679 MB/sec
Jan 13 21:35:27.314606 kernel: arm64_neon : 26883 MB/sec
Jan 13 21:35:27.314618 kernel: xor: using function: arm64_neon (26883 MB/sec)
Jan 13 21:35:27.366267 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:35:27.378102 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:35:27.389464 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:35:27.401559 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jan 13 21:35:27.404847 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:35:27.419599 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:35:27.430883 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jan 13 21:35:27.459723 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:35:27.467477 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:35:27.507003 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:35:27.517433 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:35:27.530843 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:35:27.532845 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:35:27.534614 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:35:27.537732 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:35:27.549470 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:35:27.555690 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 21:35:27.569407 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:35:27.569517 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:35:27.569529 kernel: GPT:9289727 != 19775487
Jan 13 21:35:27.569538 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:35:27.569547 kernel: GPT:9289727 != 19775487
Jan 13 21:35:27.569556 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:35:27.569565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:35:27.560190 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:35:27.562921 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:35:27.562966 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:35:27.570366 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:35:27.572571 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:35:27.572629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:35:27.575046 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:35:27.585352 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (504)
Jan 13 21:35:27.585432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:35:27.589262 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (505)
Jan 13 21:35:27.596121 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:35:27.603691 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:35:27.608162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:35:27.611979 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:35:27.613137 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:35:27.619343 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:35:27.631448 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:35:27.633163 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:35:27.639047 disk-uuid[547]: Primary Header is updated.
Jan 13 21:35:27.639047 disk-uuid[547]: Secondary Entries is updated.
Jan 13 21:35:27.639047 disk-uuid[547]: Secondary Header is updated.
Jan 13 21:35:27.644547 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:35:27.655874 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:35:28.655263 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:35:28.655911 disk-uuid[548]: The operation has completed successfully.
Jan 13 21:35:28.676293 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:35:28.676403 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:35:28.698402 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:35:28.701255 sh[573]: Success
Jan 13 21:35:28.717265 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 21:35:28.756668 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:35:28.758530 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:35:28.759473 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:35:28.771257 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 13 21:35:28.771289 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:35:28.771300 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:35:28.771318 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:35:28.772607 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:35:28.775807 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:35:28.777114 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:35:28.787390 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:35:28.789061 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:35:28.795572 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:35:28.795607 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:35:28.795618 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:35:28.798251 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:35:28.806922 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:35:28.808862 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:35:28.813878 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:35:28.820398 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:35:28.880518 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:35:28.897422 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:35:28.910569 ignition[668]: Ignition 2.19.0
Jan 13 21:35:28.910580 ignition[668]: Stage: fetch-offline
Jan 13 21:35:28.910615 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:35:28.910623 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:35:28.910823 ignition[668]: parsed url from cmdline: ""
Jan 13 21:35:28.910826 ignition[668]: no config URL provided
Jan 13 21:35:28.910830 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:35:28.910837 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:35:28.910860 ignition[668]: op(1): [started] loading QEMU firmware config module
Jan 13 21:35:28.910872 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:35:28.919991 ignition[668]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:35:28.920017 ignition[668]: QEMU firmware config was not found. Ignoring...
Jan 13 21:35:28.922599 systemd-networkd[763]: lo: Link UP
Jan 13 21:35:28.922607 systemd-networkd[763]: lo: Gained carrier
Jan 13 21:35:28.923316 systemd-networkd[763]: Enumeration completed
Jan 13 21:35:28.923706 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:35:28.925406 systemd[1]: Reached target network.target - Network.
Jan 13 21:35:28.926979 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:35:28.926982 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:35:28.927962 systemd-networkd[763]: eth0: Link UP
Jan 13 21:35:28.927965 systemd-networkd[763]: eth0: Gained carrier
Jan 13 21:35:28.927972 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:35:28.959282 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:35:28.970206 ignition[668]: parsing config with SHA512: 4f12b1b4ca166a04b26edc90e9ffa3fa7e62b596563bf51bdb2500fcc4377e8c1df8d46146c25f162886047e8353f9bc54835dd7859417fd43031d4b8471bc1f
Jan 13 21:35:28.974196 unknown[668]: fetched base config from "system"
Jan 13 21:35:28.974206 unknown[668]: fetched user config from "qemu"
Jan 13 21:35:28.974619 ignition[668]: fetch-offline: fetch-offline passed
Jan 13 21:35:28.976586 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:35:28.974680 ignition[668]: Ignition finished successfully
Jan 13 21:35:28.977795 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:35:28.995428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:35:29.006358 ignition[770]: Ignition 2.19.0
Jan 13 21:35:29.006368 ignition[770]: Stage: kargs
Jan 13 21:35:29.006539 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:35:29.006548 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:35:29.008783 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:35:29.007414 ignition[770]: kargs: kargs passed
Jan 13 21:35:29.007457 ignition[770]: Ignition finished successfully
Jan 13 21:35:29.021402 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:35:29.030387 ignition[779]: Ignition 2.19.0
Jan 13 21:35:29.030398 ignition[779]: Stage: disks
Jan 13 21:35:29.030557 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:35:29.030566 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:35:29.033061 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:35:29.031387 ignition[779]: disks: disks passed
Jan 13 21:35:29.034361 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:35:29.031429 ignition[779]: Ignition finished successfully
Jan 13 21:35:29.036014 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:35:29.037865 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:35:29.039218 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:35:29.041053 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:35:29.052371 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:35:29.062614 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:35:29.066015 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:35:29.079347 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:35:29.127264 kernel: EXT4-fs (vda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 13 21:35:29.127308 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:35:29.128555 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:35:29.141339 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:35:29.143048 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:35:29.144494 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:35:29.144536 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:35:29.150939 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (796)
Jan 13 21:35:29.150976 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:35:29.144558 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:35:29.156216 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:35:29.156252 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:35:29.156263 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:35:29.148919 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:35:29.155965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:35:29.158003 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:35:29.200471 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:35:29.204664 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:35:29.208057 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:35:29.210995 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:35:29.281550 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:35:29.291330 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:35:29.292837 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:35:29.299291 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:35:29.311523 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:35:29.315876 ignition[910]: INFO : Ignition 2.19.0
Jan 13 21:35:29.315876 ignition[910]: INFO : Stage: mount
Jan 13 21:35:29.317390 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:35:29.317390 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:35:29.317390 ignition[910]: INFO : mount: mount passed
Jan 13 21:35:29.317390 ignition[910]: INFO : Ignition finished successfully
Jan 13 21:35:29.319588 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:35:29.329377 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:35:29.769082 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:35:29.782480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:35:29.788256 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
Jan 13 21:35:29.790526 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:35:29.790542 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:35:29.790552 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:35:29.793270 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:35:29.794429 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:35:29.810470 ignition[940]: INFO : Ignition 2.19.0
Jan 13 21:35:29.812588 ignition[940]: INFO : Stage: files
Jan 13 21:35:29.812588 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:35:29.812588 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:35:29.812588 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:35:29.816653 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:35:29.816653 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:35:29.816653 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:35:29.816653 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:35:29.821828 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:35:29.821828 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:35:29.821828 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 21:35:29.816862 unknown[940]: wrote ssh authorized keys file for user: core
Jan 13 21:35:29.875018 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:35:30.003806 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:35:30.003806 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:35:30.007535 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 13 21:35:30.382793 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 21:35:30.564399 systemd-networkd[763]: eth0: Gained IPv6LL
Jan 13 21:35:30.664848 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:35:30.664848 ignition[940]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 21:35:30.668327 ignition[940]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:35:30.668327 ignition[940]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:35:30.668327 ignition[940]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 21:35:30.668327 ignition[940]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 21:35:30.668327 ignition[940]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:35:30.668327 ignition[940]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:35:30.668327 ignition[940]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 21:35:30.668327 ignition[940]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:35:30.694425 ignition[940]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:35:30.698093 ignition[940]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:35:30.699532 ignition[940]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:35:30.699532 ignition[940]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:35:30.699532 ignition[940]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:35:30.699532 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:35:30.699532 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:35:30.699532 ignition[940]: INFO : files: files passed
Jan 13 21:35:30.699532 ignition[940]: INFO : Ignition finished successfully
Jan 13 21:35:30.700841 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:35:30.712502 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:35:30.714712 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:35:30.717651 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:35:30.717758 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:35:30.722021 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 21:35:30.725200 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:35:30.725200 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:35:30.729199 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:35:30.729974 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:35:30.731970 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:35:30.745370 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:35:30.766643 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:35:30.766754 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:35:30.768847 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:35:30.770645 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:35:30.772383 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:35:30.773093 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:35:30.788462 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:35:30.799391 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:35:30.807109 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:35:30.808369 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:35:30.810340 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:35:30.812034 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:35:30.812150 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:35:30.814565 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:35:30.816458 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:35:30.818220 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:35:30.820012 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:35:30.821876 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:35:30.823772 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:35:30.825528 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:35:30.827380 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:35:30.829232 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:35:30.830979 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:35:30.832473 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:35:30.832592 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:35:30.834878 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:35:30.836780 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:35:30.838722 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:35:30.842297 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:35:30.843521 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:35:30.843641 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:35:30.846329 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:35:30.846448 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:35:30.848435 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:35:30.849960 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:35:30.854310 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:35:30.855526 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:35:30.857579 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:35:30.859080 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:35:30.859171 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:35:30.860678 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:35:30.860761 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:35:30.862260 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:35:30.862379 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:35:30.864114 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:35:30.864213 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:35:30.879457 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:35:30.880337 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:35:30.880464 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:35:30.883042 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:35:30.883893 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:35:30.884009 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:35:30.886055 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:35:30.886170 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:35:30.891021 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:35:30.892267 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:35:30.895495 ignition[995]: INFO : Ignition 2.19.0 Jan 13 21:35:30.895495 ignition[995]: INFO : Stage: umount Jan 13 21:35:30.895495 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:35:30.895495 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:35:30.895495 ignition[995]: INFO : umount: umount passed Jan 13 21:35:30.895495 ignition[995]: INFO : Ignition finished successfully Jan 13 21:35:30.895795 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:35:30.895886 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:35:30.897547 systemd[1]: Stopped target network.target - Network. 
Jan 13 21:35:30.898889 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:35:30.898940 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:35:30.901049 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:35:30.901098 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:35:30.903290 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:35:30.903337 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:35:30.904816 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:35:30.904859 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:35:30.906584 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:35:30.909314 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:35:30.911572 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:35:30.914339 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:35:30.914447 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:35:30.916303 systemd-networkd[763]: eth0: DHCPv6 lease lost Jan 13 21:35:30.918160 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:35:30.918355 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:35:30.920615 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:35:30.920644 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:35:30.933363 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:35:30.934209 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:35:30.934309 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:35:30.936259 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:35:30.936315 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:35:30.938105 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:35:30.938147 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:35:30.940173 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:35:30.940217 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:35:30.942201 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:35:30.952000 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:35:30.952118 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:35:30.956142 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:35:30.956305 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:35:30.958662 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:35:30.958738 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:35:30.962430 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:35:30.962497 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:35:30.963561 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:35:30.963590 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:35:30.965343 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:35:30.965391 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:35:30.967925 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:35:30.967968 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:35:30.970519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:35:30.970559 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:35:30.972534 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:35:30.972575 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:35:30.985393 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:35:30.986398 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:35:30.986458 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:35:30.988519 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:35:30.988567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:35:30.990707 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:35:30.990786 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:35:30.992768 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:35:30.994909 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:35:31.005391 systemd[1]: Switching root. Jan 13 21:35:31.026394 systemd-journald[238]: Journal stopped Jan 13 21:35:31.739397 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 13 21:35:31.739459 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:35:31.739474 kernel: SELinux: policy capability open_perms=1 Jan 13 21:35:31.739487 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:35:31.739500 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:35:31.739509 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:35:31.739519 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:35:31.739529 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:35:31.739538 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:35:31.739548 kernel: audit: type=1403 audit(1736804131.165:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:35:31.739559 systemd[1]: Successfully loaded SELinux policy in 30.868ms. Jan 13 21:35:31.739579 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.138ms. Jan 13 21:35:31.739593 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:35:31.739605 systemd[1]: Detected virtualization kvm. Jan 13 21:35:31.739615 systemd[1]: Detected architecture arm64. Jan 13 21:35:31.739626 systemd[1]: Detected first boot. Jan 13 21:35:31.739636 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:35:31.739646 zram_generator::config[1040]: No configuration found. Jan 13 21:35:31.739658 systemd[1]: Populated /etc with preset unit settings. 
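
The "Populated /etc with preset unit settings" entry above is first-boot preset application, where decisions like the ones Ignition recorded earlier (disable coreos-metadata.service, enable prepare-helm.service) take effect. Below is a minimal model of the lookup, following systemd.preset(5) semantics (lexical filename order, first match wins); the sketch deliberately ignores the additional rule that a file in /etc shadows a same-named file in /usr.

    import fnmatch
    from pathlib import Path

    # Preset files are read in lexical filename order; the first
    # "enable <glob>" or "disable <glob>" line matching the unit wins.
    PRESET_DIRS = ["/etc/systemd/system-preset", "/usr/lib/systemd/system-preset"]

    def preset_action(unit: str) -> str:
        files = sorted(
            (p for d in map(Path, PRESET_DIRS) if d.is_dir()
             for p in d.glob("*.preset")),
            key=lambda p: p.name,
        )
        for f in files:
            for raw in f.read_text().splitlines():
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue
                action, _, pattern = line.partition(" ")
                if action in ("enable", "disable") and fnmatch.fnmatch(unit, pattern.strip()):
                    return action
        return "enable"  # systemd's documented default when nothing matches

    print(preset_action("prepare-helm.service"))
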
Jan 13 21:35:31.739668 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:35:31.739680 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:35:31.739691 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:35:31.739702 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:35:31.739714 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:35:31.739724 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:35:31.739734 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:35:31.739745 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:35:31.739756 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:35:31.739768 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:35:31.739778 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:35:31.739789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:35:31.739800 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:35:31.739811 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:35:31.739822 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:35:31.739832 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:35:31.739843 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:35:31.739854 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 21:35:31.739866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:35:31.739877 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:35:31.739887 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:35:31.739898 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:35:31.739908 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:35:31.739919 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:35:31.739929 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:35:31.739941 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:35:31.739953 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:35:31.739964 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:35:31.739975 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:35:31.739985 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:35:31.739996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:35:31.740006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:35:31.740017 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:35:31.740028 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 13 21:35:31.740038 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:35:31.740050 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:35:31.740061 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:35:31.740071 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:35:31.740082 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:35:31.740093 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:35:31.740103 systemd[1]: Reached target machines.target - Containers. Jan 13 21:35:31.740114 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:35:31.740125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:35:31.740136 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:35:31.740148 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:35:31.740159 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:35:31.740170 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:35:31.740180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:35:31.740191 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:35:31.740202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:35:31.740212 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:35:31.740223 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:35:31.740333 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:35:31.740350 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:35:31.740360 kernel: fuse: init (API version 7.39) Jan 13 21:35:31.740370 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:35:31.740381 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:35:31.740391 kernel: ACPI: bus type drm_connector registered Jan 13 21:35:31.740401 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:35:31.740412 kernel: loop: module loaded Jan 13 21:35:31.740422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:35:31.740435 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:35:31.740447 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:35:31.740458 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:35:31.740468 systemd[1]: Stopped verity-setup.service. Jan 13 21:35:31.740478 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:35:31.740489 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:35:31.740501 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:35:31.740511 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 13 21:35:31.740522 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:35:31.740554 systemd-journald[1111]: Collecting audit messages is disabled. Jan 13 21:35:31.740574 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:35:31.740585 systemd-journald[1111]: Journal started Jan 13 21:35:31.740608 systemd-journald[1111]: Runtime Journal (/run/log/journal/38d271b40c6c44fa975aa3b6387c3eab) is 5.9M, max 47.3M, 41.4M free. Jan 13 21:35:31.518696 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:35:31.543423 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:35:31.543765 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:35:31.743048 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:35:31.743811 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:35:31.745201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:35:31.746709 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:35:31.746848 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:35:31.748310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:35:31.748454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:35:31.749803 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:35:31.749944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:35:31.752561 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:35:31.752702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:35:31.754118 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:35:31.754305 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:35:31.755823 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:35:31.755965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:35:31.757374 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:35:31.760292 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:35:31.761700 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:35:31.773522 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:35:31.784405 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:35:31.786505 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:35:31.787691 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:35:31.787723 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:35:31.789634 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:35:31.791873 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:35:31.793996 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:35:31.795154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
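
The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units above are instances of a single template unit; the text between "@" and ".service" is the instance name the template hands to modprobe, which is why several starts are paired with matching kernel messages (fuse, drm_connector, loop). A string-level illustration of that mapping (plain name handling, not systemd code):

    # Template instances seen above; the part between '@' and '.service' is
    # the instance specifier the template passes to modprobe.
    units = ["modprobe@configfs.service", "modprobe@dm_mod.service",
             "modprobe@drm.service", "modprobe@efi_pstore.service",
             "modprobe@fuse.service", "modprobe@loop.service"]
    for u in units:
        instance = u.split("@", 1)[1].removesuffix(".service")
        print(f"{u} -> modprobe {instance}")
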
Jan 13 21:35:31.796664 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:35:31.798945 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:35:31.800259 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:35:31.803470 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:35:31.804774 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:35:31.806434 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:35:31.809476 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:35:31.810986 systemd-journald[1111]: Time spent on flushing to /var/log/journal/38d271b40c6c44fa975aa3b6387c3eab is 11.653ms for 854 entries. Jan 13 21:35:31.810986 systemd-journald[1111]: System Journal (/var/log/journal/38d271b40c6c44fa975aa3b6387c3eab) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:35:31.842584 systemd-journald[1111]: Received client request to flush runtime journal. Jan 13 21:35:31.842640 kernel: loop0: detected capacity change from 0 to 114432 Jan 13 21:35:31.813439 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:35:31.816313 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:35:31.817681 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:35:31.820520 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:35:31.822060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:35:31.826640 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:35:31.831122 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:35:31.835458 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:35:31.838803 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:35:31.845574 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:35:31.849378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:35:31.862267 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:35:31.868670 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:35:31.872316 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:35:31.875394 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:35:31.875966 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:35:31.889282 kernel: loop1: detected capacity change from 0 to 114328 Jan 13 21:35:31.891690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:35:31.909739 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jan 13 21:35:31.909758 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. 
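
The journald message above gives concrete flush numbers: 11.653 ms to move 854 entries from the runtime journal into /var/log/journal. Averaged out, as a quick sanity check:

    # Figures from the systemd-journald flush message above.
    elapsed_ms, entries = 11.653, 854
    print(f"~{elapsed_ms / entries * 1000:.1f} us per entry")  # ~13.6 us
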
Jan 13 21:35:31.913278 kernel: loop2: detected capacity change from 0 to 194096 Jan 13 21:35:31.913940 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:35:31.967277 kernel: loop3: detected capacity change from 0 to 114432 Jan 13 21:35:31.972288 kernel: loop4: detected capacity change from 0 to 114328 Jan 13 21:35:31.980269 kernel: loop5: detected capacity change from 0 to 194096 Jan 13 21:35:31.990520 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:35:31.991771 (sd-merge)[1175]: Merged extensions into '/usr'. Jan 13 21:35:31.994960 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:35:31.994977 systemd[1]: Reloading... Jan 13 21:35:32.045365 zram_generator::config[1201]: No configuration found. Jan 13 21:35:32.139284 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:35:32.144278 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:35:32.174838 systemd[1]: Reloading finished in 179 ms. Jan 13 21:35:32.204665 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:35:32.206134 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:35:32.219463 systemd[1]: Starting ensure-sysext.service... Jan 13 21:35:32.221411 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:35:32.235013 systemd[1]: Reloading requested from client PID 1235 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:35:32.235032 systemd[1]: Reloading... Jan 13 21:35:32.241973 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:35:32.242287 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:35:32.242941 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:35:32.243153 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jan 13 21:35:32.243205 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jan 13 21:35:32.245499 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:35:32.245512 systemd-tmpfiles[1236]: Skipping /boot Jan 13 21:35:32.252736 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:35:32.252755 systemd-tmpfiles[1236]: Skipping /boot Jan 13 21:35:32.284286 zram_generator::config[1264]: No configuration found. Jan 13 21:35:32.364714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:35:32.400680 systemd[1]: Reloading finished in 165 ms. Jan 13 21:35:32.416297 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:35:32.424617 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:35:32.432521 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
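
The (sd-merge) entries above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes images onto /usr, followed by a daemon reload so units from the merged /usr are picked up. A short sketch of how such images can be enumerated from the standard search paths (directory list per the systemd-sysext documentation; illustrative only):

    from pathlib import Path

    # systemd-sysext looks for *.raw images or extension trees in these
    # directories; on this host the kubernetes image is reachable through
    # the /etc/extensions link written by Ignition earlier in this log.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_sysext_images():
        """Yield (extension name, path) for every candidate found."""
        for d in map(Path, SEARCH_DIRS):
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                yield entry.name.removesuffix(".raw"), entry

    for name, path in list_sysext_images():
        print(f"{name}: {path}")
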
Jan 13 21:35:32.434871 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:35:32.437189 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:35:32.440466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:35:32.444407 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:35:32.448795 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:35:32.454260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:35:32.455967 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:35:32.460509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:35:32.463996 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:35:32.466553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:35:32.467289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:35:32.467473 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:35:32.472587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:35:32.472721 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:35:32.474492 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:35:32.474619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:35:32.480630 systemd-udevd[1305]: Using default interface naming scheme 'v255'. Jan 13 21:35:32.480957 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:35:32.486309 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:35:32.493113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:35:32.503530 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:35:32.505749 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:35:32.510530 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:35:32.515528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:35:32.519502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:35:32.521526 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:35:32.526394 augenrules[1341]: No rules Jan 13 21:35:32.528314 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:35:32.530233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:35:32.534310 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:35:32.536454 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:35:32.538577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:35:32.538803 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 13 21:35:32.540829 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:35:32.540962 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:35:32.542921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:35:32.544335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:35:32.546011 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:35:32.546142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:35:32.547834 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:35:32.553059 systemd[1]: Finished ensure-sysext.service. Jan 13 21:35:32.570470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:35:32.571514 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:35:32.571595 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:35:32.573887 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:35:32.576633 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:35:32.576800 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:35:32.578640 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 21:35:32.592274 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1335) Jan 13 21:35:32.636362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:35:32.650437 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:35:32.666082 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:35:32.667622 systemd-resolved[1303]: Positive Trust Anchors: Jan 13 21:35:32.667662 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:35:32.667695 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:35:32.667847 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:35:32.675374 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:35:32.676683 systemd-networkd[1368]: lo: Link UP Jan 13 21:35:32.676695 systemd-networkd[1368]: lo: Gained carrier Jan 13 21:35:32.679095 systemd-resolved[1303]: Defaulting to hostname 'linux'. Jan 13 21:35:32.679246 systemd-networkd[1368]: Enumeration completed Jan 13 21:35:32.679355 systemd[1]: Started systemd-networkd.service - Network Configuration. 
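
The "Positive Trust Anchors" entry above is systemd-resolved loading the DNSSEC root trust anchor as a standard DS record tuple, followed by the usual negative anchors for private and reverse-lookup zones. Decoding the fields for reference (field meanings per RFC 4034; the mnemonics are registry values, not something the log itself states):

    # The root DS record from the systemd-resolved line above, split into
    # its RFC 4034 fields.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    _, _, _, key_tag, algorithm, digest_type, digest = ds.split(maxsplit=6)
    print(f"key tag     : {key_tag}")      # 20326, the 2017 root KSK
    print(f"algorithm   : {algorithm}")    # 8 = RSASHA256
    print(f"digest type : {digest_type}")  # 2 = SHA-256
    print(f"digest      : {digest[:16]}...")
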
Jan 13 21:35:32.685796 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:35:32.685805 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:35:32.686447 systemd-networkd[1368]: eth0: Link UP Jan 13 21:35:32.686550 systemd-networkd[1368]: eth0: Gained carrier Jan 13 21:35:32.686568 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:35:32.688466 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:35:32.689768 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:35:32.691160 systemd[1]: Reached target network.target - Network. Jan 13 21:35:32.692117 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:35:32.705320 systemd-networkd[1368]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:35:32.705855 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection. Jan 13 21:35:32.706419 systemd-timesyncd[1371]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:35:32.706468 systemd-timesyncd[1371]: Initial clock synchronization to Mon 2025-01-13 21:35:33.056553 UTC. Jan 13 21:35:32.719522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:35:32.726563 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:35:32.729292 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:35:32.746293 lvm[1393]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:35:32.764297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:35:32.789797 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:35:32.791328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:35:32.792460 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:35:32.793587 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:35:32.794804 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:35:32.796215 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:35:32.797416 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:35:32.798637 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:35:32.799989 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:35:32.800027 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:35:32.800917 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:35:32.802668 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:35:32.805060 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:35:32.816308 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
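
A reading aid for timestamps: the systemd-timesyncd entries above step the system clock forward. The sync line is journal-stamped at roughly 21:35:32.706 yet sets the clock to 21:35:33.056553 UTC, so wall-clock deltas across this point include an extra ~0.35 s. The arithmetic, with the journal stamp truncated to milliseconds:

    from datetime import datetime

    # Journal stamp of the "Initial clock synchronization" message (truncated
    # to milliseconds) vs. the time it set; values from the lines above.
    logged = datetime.fromisoformat("2025-01-13 21:35:32.706")
    synced = datetime.fromisoformat("2025-01-13 21:35:33.056553")
    print(f"clock stepped forward by {(synced - logged).total_seconds():.3f} s")  # ~0.351
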
Jan 13 21:35:32.818507 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:35:32.820069 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:35:32.821286 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:35:32.822207 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:35:32.823198 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:35:32.823231 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:35:32.824152 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:35:32.826174 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:35:32.826378 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:35:32.829132 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:35:32.831954 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:35:32.833518 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:35:32.837455 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:35:32.840940 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:35:32.844626 jq[1403]: false Jan 13 21:35:32.844415 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:35:32.848424 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:35:32.857392 extend-filesystems[1404]: Found loop3 Jan 13 21:35:32.857392 extend-filesystems[1404]: Found loop4 Jan 13 21:35:32.857392 extend-filesystems[1404]: Found loop5 Jan 13 21:35:32.857392 extend-filesystems[1404]: Found vda Jan 13 21:35:32.857392 extend-filesystems[1404]: Found vda1 Jan 13 21:35:32.857392 extend-filesystems[1404]: Found vda2 Jan 13 21:35:32.857392 extend-filesystems[1404]: Found vda3 Jan 13 21:35:32.857392 extend-filesystems[1404]: Found usr Jan 13 21:35:32.857392 extend-filesystems[1404]: Found vda4 Jan 13 21:35:32.857392 extend-filesystems[1404]: Found vda6 Jan 13 21:35:32.857392 extend-filesystems[1404]: Found vda7 Jan 13 21:35:32.857392 extend-filesystems[1404]: Found vda9 Jan 13 21:35:32.857392 extend-filesystems[1404]: Checking size of /dev/vda9 Jan 13 21:35:32.872522 extend-filesystems[1404]: Resized partition /dev/vda9 Jan 13 21:35:32.858386 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:35:32.876190 extend-filesystems[1424]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:35:32.873938 dbus-daemon[1402]: [system] SELinux support is enabled Jan 13 21:35:32.865498 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:35:32.865957 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:35:32.873460 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:35:32.879277 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1352) Jan 13 21:35:32.880400 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 13 21:35:32.884513 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:35:32.889305 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:35:32.888661 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:35:32.897584 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:35:32.897786 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:35:32.898071 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:35:32.898269 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:35:32.902759 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:35:32.903404 jq[1425]: true Jan 13 21:35:32.902908 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:35:32.913974 update_engine[1422]: I20250113 21:35:32.913669 1422 main.cc:92] Flatcar Update Engine starting Jan 13 21:35:32.915948 (ntainerd)[1434]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:35:32.921412 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:35:32.921456 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:35:32.926950 jq[1435]: true Jan 13 21:35:32.924472 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:35:32.924494 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:35:32.933286 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 21:35:32.934102 systemd-logind[1415]: New seat seat0. Jan 13 21:35:32.934561 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:35:32.934774 update_engine[1422]: I20250113 21:35:32.934609 1422 update_check_scheduler.cc:74] Next update check in 7m58s Jan 13 21:35:32.935767 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:35:32.943340 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:35:32.946254 tar[1428]: linux-arm64/helm Jan 13 21:35:32.946720 extend-filesystems[1424]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:35:32.946720 extend-filesystems[1424]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:35:32.946720 extend-filesystems[1424]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:35:32.953529 extend-filesystems[1404]: Resized filesystem in /dev/vda9 Jan 13 21:35:32.950383 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:35:32.951902 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:35:32.952095 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:35:32.997039 bash[1458]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:35:32.998524 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:35:33.005586 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
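
The EXT4 and resize2fs entries above record the root filesystem on /dev/vda9 being grown online from 553472 to 1864699 blocks at the 4k block size resize2fs reports, which matches Flatcar's usual first-boot expansion of the ROOT partition to fill the disk. The conversion, spelled out:

    # Block counts from the EXT4/resize2fs lines above; "(4k)" means a
    # 4096-byte block size.
    BLOCK_SIZE = 4096
    before, after = 553_472, 1_864_699

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before:   {gib(before):.2f} GiB")          # ~2.11 GiB
    print(f"after:    {gib(after):.2f} GiB")           # ~7.11 GiB
    print(f"grown by: {gib(after - before):.2f} GiB")  # ~5.00 GiB
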
Jan 13 21:35:33.027220 locksmithd[1443]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:35:33.132253 containerd[1434]: time="2025-01-13T21:35:33.132164490Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:35:33.159434 containerd[1434]: time="2025-01-13T21:35:33.159390468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:35:33.160990 containerd[1434]: time="2025-01-13T21:35:33.160957637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161073 containerd[1434]: time="2025-01-13T21:35:33.160988742Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:35:33.161073 containerd[1434]: time="2025-01-13T21:35:33.161009283Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:35:33.161191 containerd[1434]: time="2025-01-13T21:35:33.161168439Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:35:33.161217 containerd[1434]: time="2025-01-13T21:35:33.161193072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161274 containerd[1434]: time="2025-01-13T21:35:33.161254863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161309 containerd[1434]: time="2025-01-13T21:35:33.161272107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161479 containerd[1434]: time="2025-01-13T21:35:33.161455686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161479 containerd[1434]: time="2025-01-13T21:35:33.161476144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161533 containerd[1434]: time="2025-01-13T21:35:33.161502280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161533 containerd[1434]: time="2025-01-13T21:35:33.161513553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161610 containerd[1434]: time="2025-01-13T21:35:33.161591336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161820 containerd[1434]: time="2025-01-13T21:35:33.161799757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161924 containerd[1434]: time="2025-01-13T21:35:33.161905220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:35:33.161924 containerd[1434]: time="2025-01-13T21:35:33.161922088Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:35:33.162011 containerd[1434]: time="2025-01-13T21:35:33.161994317Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:35:33.162065 containerd[1434]: time="2025-01-13T21:35:33.162042080Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:35:33.165256 containerd[1434]: time="2025-01-13T21:35:33.165223181Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:35:33.165310 containerd[1434]: time="2025-01-13T21:35:33.165273365Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:35:33.165332 containerd[1434]: time="2025-01-13T21:35:33.165308436Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:35:33.165332 containerd[1434]: time="2025-01-13T21:35:33.165324803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:35:33.165387 containerd[1434]: time="2025-01-13T21:35:33.165339791Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:35:33.165498 containerd[1434]: time="2025-01-13T21:35:33.165474856Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:35:33.165750 containerd[1434]: time="2025-01-13T21:35:33.165729037Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:35:33.165862 containerd[1434]: time="2025-01-13T21:35:33.165842475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:35:33.165886 containerd[1434]: time="2025-01-13T21:35:33.165864519Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:35:33.165886 containerd[1434]: time="2025-01-13T21:35:33.165877963Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:35:33.165928 containerd[1434]: time="2025-01-13T21:35:33.165891323Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:35:33.165928 containerd[1434]: time="2025-01-13T21:35:33.165904976Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:35:33.165928 containerd[1434]: time="2025-01-13T21:35:33.165917251Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:35:33.165995 containerd[1434]: time="2025-01-13T21:35:33.165934703Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:35:33.165995 containerd[1434]: time="2025-01-13T21:35:33.165949984Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 13 21:35:33.165995 containerd[1434]: time="2025-01-13T21:35:33.165961883Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:35:33.165995 containerd[1434]: time="2025-01-13T21:35:33.165973824Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:35:33.165995 containerd[1434]: time="2025-01-13T21:35:33.165986182Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:35:33.166102 containerd[1434]: time="2025-01-13T21:35:33.166006306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166102 containerd[1434]: time="2025-01-13T21:35:33.166019917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166102 containerd[1434]: time="2025-01-13T21:35:33.166031774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166102 containerd[1434]: time="2025-01-13T21:35:33.166043590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166102 containerd[1434]: time="2025-01-13T21:35:33.166055656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166102 containerd[1434]: time="2025-01-13T21:35:33.166070978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166102 containerd[1434]: time="2025-01-13T21:35:33.166083963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166102 containerd[1434]: time="2025-01-13T21:35:33.166096113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166256 containerd[1434]: time="2025-01-13T21:35:33.166111226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166256 containerd[1434]: time="2025-01-13T21:35:33.166126006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166256 containerd[1434]: time="2025-01-13T21:35:33.166137738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166256 containerd[1434]: time="2025-01-13T21:35:33.166149220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166256 containerd[1434]: time="2025-01-13T21:35:33.166165085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166256 containerd[1434]: time="2025-01-13T21:35:33.166182955Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:35:33.166256 containerd[1434]: time="2025-01-13T21:35:33.166203204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166256 containerd[1434]: time="2025-01-13T21:35:33.166215103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166256 containerd[1434]: time="2025-01-13T21:35:33.166235979Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:35:33.166443 containerd[1434]: time="2025-01-13T21:35:33.166366576Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:35:33.166443 containerd[1434]: time="2025-01-13T21:35:33.166384362Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:35:33.166443 containerd[1434]: time="2025-01-13T21:35:33.166395426Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:35:33.166443 containerd[1434]: time="2025-01-13T21:35:33.166407492Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:35:33.166443 containerd[1434]: time="2025-01-13T21:35:33.166417262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.166443 containerd[1434]: time="2025-01-13T21:35:33.166428994Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:35:33.166443 containerd[1434]: time="2025-01-13T21:35:33.166439390Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:35:33.166570 containerd[1434]: time="2025-01-13T21:35:33.166449536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:35:33.167430 containerd[1434]: time="2025-01-13T21:35:33.166818198Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:35:33.167430 containerd[1434]: time="2025-01-13T21:35:33.166882411Z" level=info msg="Connect containerd service" Jan 13 21:35:33.167430 containerd[1434]: time="2025-01-13T21:35:33.166907587Z" level=info msg="using legacy CRI server" Jan 13 21:35:33.167430 containerd[1434]: time="2025-01-13T21:35:33.166914142Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:35:33.167430 containerd[1434]: time="2025-01-13T21:35:33.166994596Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:35:33.167668 containerd[1434]: time="2025-01-13T21:35:33.167638357Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:35:33.168277 containerd[1434]: time="2025-01-13T21:35:33.168191225Z" level=info msg="Start subscribing containerd event" Jan 13 21:35:33.169038 containerd[1434]: time="2025-01-13T21:35:33.168377476Z" level=info msg="Start recovering state" Jan 13 21:35:33.169038 containerd[1434]: time="2025-01-13T21:35:33.168458307Z" level=info msg="Start event monitor" Jan 13 21:35:33.169038 containerd[1434]: time="2025-01-13T21:35:33.168472544Z" level=info msg="Start snapshots syncer" Jan 13 21:35:33.169038 containerd[1434]: time="2025-01-13T21:35:33.168487157Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:35:33.169038 containerd[1434]: time="2025-01-13T21:35:33.168496008Z" level=info msg="Start streaming server" Jan 13 21:35:33.169461 containerd[1434]: time="2025-01-13T21:35:33.169429479Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:35:33.169570 containerd[1434]: time="2025-01-13T21:35:33.169554565Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:35:33.169712 containerd[1434]: time="2025-01-13T21:35:33.169695475Z" level=info msg="containerd successfully booted in 0.039843s" Jan 13 21:35:33.169797 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:35:33.235026 sshd_keygen[1423]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:35:33.254660 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:35:33.266666 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:35:33.272562 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:35:33.272914 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:35:33.275853 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:35:33.290335 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:35:33.291567 tar[1428]: linux-arm64/LICENSE Jan 13 21:35:33.291639 tar[1428]: linux-arm64/README.md Jan 13 21:35:33.308099 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:35:33.310463 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 21:35:33.311945 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:35:33.314310 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:35:34.478293 systemd-networkd[1368]: eth0: Gained IPv6LL Jan 13 21:35:34.482331 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:35:34.484247 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:35:34.498559 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:35:34.502164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:35:34.506335 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:35:34.524655 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:35:34.524996 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:35:34.527669 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:35:34.535024 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:35:35.045082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:35:35.046745 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:35:35.049369 (kubelet)[1517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:35:35.052831 systemd[1]: Startup finished in 559ms (kernel) + 4.458s (initrd) + 3.921s (userspace) = 8.939s. Jan 13 21:35:35.539846 kubelet[1517]: E0113 21:35:35.539629 1517 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:35:35.544055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:35:35.544204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:35:39.978030 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:35:39.979579 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:34154.service - OpenSSH per-connection server daemon (10.0.0.1:34154). Jan 13 21:35:40.032905 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 34154 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:35:40.036775 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:35:40.048465 systemd-logind[1415]: New session 1 of user core. Jan 13 21:35:40.048884 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:35:40.067986 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:35:40.081403 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:35:40.091855 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 13 21:35:40.095233 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:35:40.180868 systemd[1535]: Queued start job for default target default.target. Jan 13 21:35:40.189196 systemd[1535]: Created slice app.slice - User Application Slice. Jan 13 21:35:40.189243 systemd[1535]: Reached target paths.target - Paths. Jan 13 21:35:40.189284 systemd[1535]: Reached target timers.target - Timers. Jan 13 21:35:40.190903 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:35:40.200423 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:35:40.200476 systemd[1535]: Reached target sockets.target - Sockets. Jan 13 21:35:40.200488 systemd[1535]: Reached target basic.target - Basic System. Jan 13 21:35:40.200522 systemd[1535]: Reached target default.target - Main User Target. Jan 13 21:35:40.200546 systemd[1535]: Startup finished in 99ms. Jan 13 21:35:40.200904 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:35:40.202532 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:35:40.304897 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:34160.service - OpenSSH per-connection server daemon (10.0.0.1:34160). Jan 13 21:35:40.348507 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 34160 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:35:40.349819 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:35:40.354516 systemd-logind[1415]: New session 2 of user core. Jan 13 21:35:40.369850 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:35:40.421316 sshd[1546]: pam_unix(sshd:session): session closed for user core Jan 13 21:35:40.437469 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:34160.service: Deactivated successfully. Jan 13 21:35:40.439476 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:35:40.440713 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:35:40.451544 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:34176.service - OpenSSH per-connection server daemon (10.0.0.1:34176). Jan 13 21:35:40.455709 systemd-logind[1415]: Removed session 2. Jan 13 21:35:40.506356 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 34176 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:35:40.507566 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:35:40.511319 systemd-logind[1415]: New session 3 of user core. Jan 13 21:35:40.520389 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:35:40.568800 sshd[1553]: pam_unix(sshd:session): session closed for user core Jan 13 21:35:40.577562 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:34176.service: Deactivated successfully. Jan 13 21:35:40.579187 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:35:40.581558 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:35:40.591881 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:34190.service - OpenSSH per-connection server daemon (10.0.0.1:34190). Jan 13 21:35:40.593190 systemd-logind[1415]: Removed session 3. 
Jan 13 21:35:40.624233 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 34190 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:35:40.625382 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:35:40.628769 systemd-logind[1415]: New session 4 of user core. Jan 13 21:35:40.641485 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:35:40.693555 sshd[1560]: pam_unix(sshd:session): session closed for user core Jan 13 21:35:40.706589 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:34190.service: Deactivated successfully. Jan 13 21:35:40.708169 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:35:40.709386 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:35:40.710522 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:34196.service - OpenSSH per-connection server daemon (10.0.0.1:34196). Jan 13 21:35:40.711195 systemd-logind[1415]: Removed session 4. Jan 13 21:35:40.746878 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 34196 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:35:40.748108 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:35:40.752466 systemd-logind[1415]: New session 5 of user core. Jan 13 21:35:40.760453 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:35:40.825919 sudo[1570]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:35:40.826190 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:35:40.839230 sudo[1570]: pam_unix(sudo:session): session closed for user root Jan 13 21:35:40.840802 sshd[1567]: pam_unix(sshd:session): session closed for user core Jan 13 21:35:40.860724 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:34196.service: Deactivated successfully. Jan 13 21:35:40.862222 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:35:40.863507 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:35:40.864714 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:34212.service - OpenSSH per-connection server daemon (10.0.0.1:34212). Jan 13 21:35:40.867448 systemd-logind[1415]: Removed session 5. Jan 13 21:35:40.902669 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 34212 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:35:40.903955 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:35:40.907699 systemd-logind[1415]: New session 6 of user core. Jan 13 21:35:40.914396 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:35:40.966428 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:35:40.966698 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:35:40.969529 sudo[1579]: pam_unix(sudo:session): session closed for user root Jan 13 21:35:40.973926 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:35:40.974197 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:35:40.989692 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:35:40.990654 auditctl[1582]: No rules Jan 13 21:35:40.991038 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 13 21:35:40.991184 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:35:40.993510 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:35:41.016002 augenrules[1600]: No rules Jan 13 21:35:41.017166 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:35:41.018220 sudo[1578]: pam_unix(sudo:session): session closed for user root Jan 13 21:35:41.019831 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 13 21:35:41.031529 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:34212.service: Deactivated successfully. Jan 13 21:35:41.032967 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:35:41.035323 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:35:41.036434 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:34218.service - OpenSSH per-connection server daemon (10.0.0.1:34218). Jan 13 21:35:41.037131 systemd-logind[1415]: Removed session 6. Jan 13 21:35:41.079582 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 34218 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:35:41.081307 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:35:41.084865 systemd-logind[1415]: New session 7 of user core. Jan 13 21:35:41.092418 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:35:41.143247 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:35:41.143618 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:35:41.477618 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:35:41.477629 (dockerd)[1629]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:35:41.741319 dockerd[1629]: time="2025-01-13T21:35:41.741181264Z" level=info msg="Starting up" Jan 13 21:35:41.889857 dockerd[1629]: time="2025-01-13T21:35:41.889810264Z" level=info msg="Loading containers: start." Jan 13 21:35:41.977282 kernel: Initializing XFRM netlink socket Jan 13 21:35:42.048378 systemd-networkd[1368]: docker0: Link UP Jan 13 21:35:42.066108 dockerd[1629]: time="2025-01-13T21:35:42.065470537Z" level=info msg="Loading containers: done." Jan 13 21:35:42.080999 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck58051930-merged.mount: Deactivated successfully. Jan 13 21:35:42.083761 dockerd[1629]: time="2025-01-13T21:35:42.083720747Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:35:42.083860 dockerd[1629]: time="2025-01-13T21:35:42.083818334Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:35:42.083954 dockerd[1629]: time="2025-01-13T21:35:42.083926419Z" level=info msg="Daemon has completed initialization" Jan 13 21:35:42.114118 dockerd[1629]: time="2025-01-13T21:35:42.113827709Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:35:42.114092 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 13 21:35:42.810554 containerd[1434]: time="2025-01-13T21:35:42.810454282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 21:35:43.621053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4094317245.mount: Deactivated successfully. Jan 13 21:35:45.631003 containerd[1434]: time="2025-01-13T21:35:45.630942519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:45.632250 containerd[1434]: time="2025-01-13T21:35:45.632196963Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864012" Jan 13 21:35:45.632852 containerd[1434]: time="2025-01-13T21:35:45.632816014Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:45.635878 containerd[1434]: time="2025-01-13T21:35:45.635838674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:45.636907 containerd[1434]: time="2025-01-13T21:35:45.636872308Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.826373125s" Jan 13 21:35:45.636941 containerd[1434]: time="2025-01-13T21:35:45.636908868Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Jan 13 21:35:45.655967 containerd[1434]: time="2025-01-13T21:35:45.655932786Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 21:35:45.794675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:35:45.804418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:35:45.900017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:35:45.903670 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:35:45.947198 kubelet[1857]: E0113 21:35:45.947143 1857 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:35:45.950501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:35:45.950656 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:35:48.322108 containerd[1434]: time="2025-01-13T21:35:48.322061274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:48.322760 containerd[1434]: time="2025-01-13T21:35:48.322725856Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900696" Jan 13 21:35:48.323923 containerd[1434]: time="2025-01-13T21:35:48.323887679Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:48.328518 containerd[1434]: time="2025-01-13T21:35:48.328479444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:48.329776 containerd[1434]: time="2025-01-13T21:35:48.329634788Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.673659545s" Jan 13 21:35:48.329776 containerd[1434]: time="2025-01-13T21:35:48.329676996Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Jan 13 21:35:48.347272 containerd[1434]: time="2025-01-13T21:35:48.347196518Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 21:35:49.859959 containerd[1434]: time="2025-01-13T21:35:49.859899128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:49.860886 containerd[1434]: time="2025-01-13T21:35:49.860665589Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164334" Jan 13 21:35:49.861673 containerd[1434]: time="2025-01-13T21:35:49.861617242Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:49.864948 containerd[1434]: time="2025-01-13T21:35:49.864892002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:49.866387 containerd[1434]: time="2025-01-13T21:35:49.866301650Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.519071904s" Jan 13 21:35:49.866387 containerd[1434]: time="2025-01-13T21:35:49.866334539Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Jan 13 21:35:49.884138 containerd[1434]: time="2025-01-13T21:35:49.884108294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:35:50.930045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268007063.mount: Deactivated successfully. Jan 13 21:35:51.155718 containerd[1434]: time="2025-01-13T21:35:51.155666256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:51.156567 containerd[1434]: time="2025-01-13T21:35:51.156530784Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013" Jan 13 21:35:51.157233 containerd[1434]: time="2025-01-13T21:35:51.157169541Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:51.159262 containerd[1434]: time="2025-01-13T21:35:51.159204764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:51.160083 containerd[1434]: time="2025-01-13T21:35:51.160017206Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.275872858s" Jan 13 21:35:51.160083 containerd[1434]: time="2025-01-13T21:35:51.160079251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 13 21:35:51.178074 containerd[1434]: time="2025-01-13T21:35:51.178042697Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:35:51.767873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505000356.mount: Deactivated successfully.
Jan 13 21:35:52.696689 containerd[1434]: time="2025-01-13T21:35:52.696633054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:52.697802 containerd[1434]: time="2025-01-13T21:35:52.697762752Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 21:35:52.698807 containerd[1434]: time="2025-01-13T21:35:52.698752447Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:52.701746 containerd[1434]: time="2025-01-13T21:35:52.701713906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:52.702935 containerd[1434]: time="2025-01-13T21:35:52.702898073Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.524817311s" Jan 13 21:35:52.702973 containerd[1434]: time="2025-01-13T21:35:52.702936846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 21:35:52.722192 containerd[1434]: time="2025-01-13T21:35:52.722148901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:35:53.201016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084200641.mount: Deactivated successfully. 
Jan 13 21:35:53.205093 containerd[1434]: time="2025-01-13T21:35:53.204800257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:53.206039 containerd[1434]: time="2025-01-13T21:35:53.205855444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 13 21:35:53.206752 containerd[1434]: time="2025-01-13T21:35:53.206723265Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:53.209287 containerd[1434]: time="2025-01-13T21:35:53.209253869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:53.210118 containerd[1434]: time="2025-01-13T21:35:53.210095090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 487.911594ms" Jan 13 21:35:53.210196 containerd[1434]: time="2025-01-13T21:35:53.210122092Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 21:35:53.228544 containerd[1434]: time="2025-01-13T21:35:53.228514890Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 21:35:53.777267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159012990.mount: Deactivated successfully. Jan 13 21:35:56.201073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:35:56.210409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:35:56.294011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:35:56.297666 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:35:56.339806 kubelet[2008]: E0113 21:35:56.339754 2008 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:35:56.344430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:35:56.344591 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:35:57.607860 containerd[1434]: time="2025-01-13T21:35:57.607797887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:57.608544 containerd[1434]: time="2025-01-13T21:35:57.608490876Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 13 21:35:57.609108 containerd[1434]: time="2025-01-13T21:35:57.609073989Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:57.612159 containerd[1434]: time="2025-01-13T21:35:57.612107568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:35:57.613527 containerd[1434]: time="2025-01-13T21:35:57.613485531Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.384934546s" Jan 13 21:35:57.613570 containerd[1434]: time="2025-01-13T21:35:57.613528326Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 13 21:36:04.109176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:36:04.125662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:36:04.142029 systemd[1]: Reloading requested from client PID 2103 ('systemctl') (unit session-7.scope)... Jan 13 21:36:04.142045 systemd[1]: Reloading... Jan 13 21:36:04.209266 zram_generator::config[2142]: No configuration found. Jan 13 21:36:04.327321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:36:04.379747 systemd[1]: Reloading finished in 237 ms. Jan 13 21:36:04.417509 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:36:04.420925 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:36:04.421112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:36:04.422545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:36:04.525444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:36:04.529434 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:36:04.568079 kubelet[2189]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:36:04.568079 kubelet[2189]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 21:36:04.568079 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:36:04.568964 kubelet[2189]: I0113 21:36:04.568911 2189 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:36:05.310543 kubelet[2189]: I0113 21:36:05.310506 2189 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:36:05.311570 kubelet[2189]: I0113 21:36:05.310678 2189 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:36:05.311570 kubelet[2189]: I0113 21:36:05.310890 2189 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:36:05.337075 kubelet[2189]: E0113 21:36:05.337017 2189 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:05.337178 kubelet[2189]: I0113 21:36:05.337135 2189 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:36:05.347741 kubelet[2189]: I0113 21:36:05.347706 2189 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:36:05.349273 kubelet[2189]: I0113 21:36:05.349130 2189 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:36:05.349362 kubelet[2189]: I0113 21:36:05.349177 2189 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:36:05.349439 kubelet[2189]: I0113 21:36:05.349434 2189 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:36:05.349462 kubelet[2189]: I0113 21:36:05.349444 2189 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:36:05.349707 kubelet[2189]: I0113 21:36:05.349687 2189 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:36:05.350783 kubelet[2189]: I0113 21:36:05.350720 2189 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:36:05.350783 kubelet[2189]: I0113 21:36:05.350741 2189 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:36:05.351564 kubelet[2189]: I0113 21:36:05.351046 2189 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:36:05.351564 kubelet[2189]: I0113 21:36:05.351126 2189 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:36:05.352029 kubelet[2189]: W0113 21:36:05.351804 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:05.352029 kubelet[2189]: W0113 21:36:05.351834 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:05.352029 kubelet[2189]: E0113 21:36:05.351866 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:05.352029 kubelet[2189]: E0113 21:36:05.351876 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:05.352528 kubelet[2189]: I0113 21:36:05.352422 2189 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:36:05.352887 kubelet[2189]: I0113 21:36:05.352874 2189 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:36:05.353064 kubelet[2189]: W0113 21:36:05.353052 2189 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 21:36:05.353974 kubelet[2189]: I0113 21:36:05.353955 2189 server.go:1264] "Started kubelet" Jan 13 21:36:05.355511 kubelet[2189]: I0113 21:36:05.355094 2189 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:36:05.355511 kubelet[2189]: I0113 21:36:05.355368 2189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:36:05.356129 kubelet[2189]: I0113 21:36:05.356093 2189 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:36:05.356984 kubelet[2189]: I0113 21:36:05.356912 2189 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:36:05.357235 kubelet[2189]: I0113 21:36:05.357107 2189 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:36:05.360692 kubelet[2189]: E0113 21:36:05.360185 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5e2fb8570a67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:36:05.353933415 +0000 UTC m=+0.821594240,LastTimestamp:2025-01-13 21:36:05.353933415 +0000 UTC m=+0.821594240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:36:05.360692 kubelet[2189]: E0113 21:36:05.360419 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:36:05.360692 kubelet[2189]: I0113 21:36:05.360532 2189 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:36:05.360692 kubelet[2189]: I0113 21:36:05.360596 2189 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:36:05.360692 kubelet[2189]: I0113 21:36:05.360662 2189 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:36:05.361444 kubelet[2189]: W0113 21:36:05.361400 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:05.361621 kubelet[2189]: E0113 21:36:05.361605 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:05.361732 kubelet[2189]: I0113 21:36:05.361550 2189 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:36:05.361858 kubelet[2189]: E0113 21:36:05.361824 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Jan 13 21:36:05.361968 kubelet[2189]: I0113 21:36:05.361952 2189 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:36:05.362631 kubelet[2189]: E0113 21:36:05.362592 2189 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:36:05.363324 kubelet[2189]: I0113 21:36:05.363300 2189 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:36:05.368623 kubelet[2189]: I0113 21:36:05.368486 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:36:05.369990 kubelet[2189]: I0113 21:36:05.369463 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:36:05.369990 kubelet[2189]: I0113 21:36:05.369610 2189 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:36:05.369990 kubelet[2189]: I0113 21:36:05.369626 2189 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:36:05.369990 kubelet[2189]: E0113 21:36:05.369670 2189 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:36:05.375570 kubelet[2189]: W0113 21:36:05.375516 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:05.375570 kubelet[2189]: E0113 21:36:05.375571 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:05.376443 kubelet[2189]: I0113 21:36:05.376429 2189 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:36:05.376584 kubelet[2189]: I0113 21:36:05.376574 2189 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:36:05.376660 kubelet[2189]: I0113 21:36:05.376651 2189 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:36:05.452987 kubelet[2189]: I0113 21:36:05.452959 2189 policy_none.go:49] "None policy: Start" Jan 13 21:36:05.453833 kubelet[2189]: I0113 21:36:05.453812 2189 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:36:05.453908 kubelet[2189]: I0113 21:36:05.453842 2189 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:36:05.459381 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:36:05.461924 kubelet[2189]: I0113 21:36:05.461899 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:36:05.462247 kubelet[2189]: E0113 21:36:05.462211 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 13 21:36:05.470437 kubelet[2189]: E0113 21:36:05.470411 2189 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:36:05.476919 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:36:05.481385 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 21:36:05.491993 kubelet[2189]: I0113 21:36:05.491916 2189 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:36:05.492339 kubelet[2189]: I0113 21:36:05.492099 2189 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:36:05.492339 kubelet[2189]: I0113 21:36:05.492217 2189 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:36:05.494451 kubelet[2189]: E0113 21:36:05.494430 2189 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:36:05.563010 kubelet[2189]: E0113 21:36:05.562896 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Jan 13 21:36:05.663813 kubelet[2189]: I0113 21:36:05.663770 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:36:05.664302 kubelet[2189]: E0113 21:36:05.664267 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 13 21:36:05.671482 kubelet[2189]: I0113 21:36:05.671434 2189 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:36:05.672290 kubelet[2189]: I0113 21:36:05.672265 2189 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:36:05.672885 kubelet[2189]: I0113 21:36:05.672863 2189 topology_manager.go:215] "Topology Admit Handler" podUID="a92ef1260efb27134f73841b7103d125" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:36:05.680714 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Jan 13 21:36:05.710827 systemd[1]: Created slice kubepods-burstable-poda92ef1260efb27134f73841b7103d125.slice - libcontainer container kubepods-burstable-poda92ef1260efb27134f73841b7103d125.slice. Jan 13 21:36:05.714302 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. 
Jan 13 21:36:05.761898 kubelet[2189]: I0113 21:36:05.761855 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a92ef1260efb27134f73841b7103d125-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a92ef1260efb27134f73841b7103d125\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:36:05.761898 kubelet[2189]: I0113 21:36:05.761899 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a92ef1260efb27134f73841b7103d125-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a92ef1260efb27134f73841b7103d125\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:36:05.762041 kubelet[2189]: I0113 21:36:05.761920 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:36:05.762041 kubelet[2189]: I0113 21:36:05.761960 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:36:05.762041 kubelet[2189]: I0113 21:36:05.762012 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:36:05.762041 kubelet[2189]: I0113 21:36:05.762030 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a92ef1260efb27134f73841b7103d125-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a92ef1260efb27134f73841b7103d125\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:36:05.762121 kubelet[2189]: I0113 21:36:05.762047 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:36:05.762121 kubelet[2189]: I0113 21:36:05.762066 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:36:05.762121 kubelet[2189]: I0113 21:36:05.762081 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:36:05.964191 kubelet[2189]: E0113 21:36:05.964080 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Jan 13 21:36:06.008532 kubelet[2189]: E0113 21:36:06.008445 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:36:06.009093 containerd[1434]: time="2025-01-13T21:36:06.009054786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Jan 13 21:36:06.013416 kubelet[2189]: E0113 21:36:06.013350 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:36:06.013876 containerd[1434]: time="2025-01-13T21:36:06.013843809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a92ef1260efb27134f73841b7103d125,Namespace:kube-system,Attempt:0,}" Jan 13 21:36:06.017085 kubelet[2189]: E0113 21:36:06.016993 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:36:06.021755 containerd[1434]: time="2025-01-13T21:36:06.021554149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Jan 13 21:36:06.065420 kubelet[2189]: I0113 21:36:06.065394 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:36:06.065748 kubelet[2189]: E0113 21:36:06.065725 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 13 21:36:06.343366 kubelet[2189]: W0113 21:36:06.343192 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:06.343366 kubelet[2189]: E0113 21:36:06.343277 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 21:36:06.526257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336644461.mount: Deactivated successfully.
Jan 13 21:36:06.563660 containerd[1434]: time="2025-01-13T21:36:06.563614817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:36:06.564702 containerd[1434]: time="2025-01-13T21:36:06.564640488Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:36:06.565359 containerd[1434]: time="2025-01-13T21:36:06.565340121Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:36:06.565949 containerd[1434]: time="2025-01-13T21:36:06.565921815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:36:06.566657 containerd[1434]: time="2025-01-13T21:36:06.566611800Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:36:06.567309 containerd[1434]: time="2025-01-13T21:36:06.567259109Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:36:06.567963 containerd[1434]: time="2025-01-13T21:36:06.567931199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 13 21:36:06.569382 containerd[1434]: time="2025-01-13T21:36:06.569314533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:36:06.572332 containerd[1434]: time="2025-01-13T21:36:06.572290097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.376028ms"
Jan 13 21:36:06.573675 containerd[1434]: time="2025-01-13T21:36:06.573453644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.833919ms"
Jan 13 21:36:06.576219 containerd[1434]: time="2025-01-13T21:36:06.576111458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.978365ms"
Jan 13 21:36:06.711533 containerd[1434]: time="2025-01-13T21:36:06.711361110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:36:06.711533 containerd[1434]: time="2025-01-13T21:36:06.711428167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:36:06.711533 containerd[1434]: time="2025-01-13T21:36:06.711446182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:06.711670 containerd[1434]: time="2025-01-13T21:36:06.711528292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:06.713156 containerd[1434]: time="2025-01-13T21:36:06.712950458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:36:06.713156 containerd[1434]: time="2025-01-13T21:36:06.713009789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:36:06.713156 containerd[1434]: time="2025-01-13T21:36:06.713029085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:06.714667 containerd[1434]: time="2025-01-13T21:36:06.714311493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:06.719298 containerd[1434]: time="2025-01-13T21:36:06.719095992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:36:06.719298 containerd[1434]: time="2025-01-13T21:36:06.719144993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:36:06.719298 containerd[1434]: time="2025-01-13T21:36:06.719167292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:06.719524 containerd[1434]: time="2025-01-13T21:36:06.719460261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:06.733433 systemd[1]: Started cri-containerd-eaaedc96400d0c1f21a8e7e48db2c4c5d15f442633c70c0ef772835c15525e75.scope - libcontainer container eaaedc96400d0c1f21a8e7e48db2c4c5d15f442633c70c0ef772835c15525e75.
Jan 13 21:36:06.738011 systemd[1]: Started cri-containerd-6610efa9cc509a48beca9cfc9a9a55c26daf4f43355dbe3c9de10c9bb233339f.scope - libcontainer container 6610efa9cc509a48beca9cfc9a9a55c26daf4f43355dbe3c9de10c9bb233339f.
Jan 13 21:36:06.739449 systemd[1]: Started cri-containerd-c1e60b7f6d8f90d3fa36c3745e0cb67c26e03902baf6cc24589eea08e878421f.scope - libcontainer container c1e60b7f6d8f90d3fa36c3745e0cb67c26e03902baf6cc24589eea08e878421f.
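The lease controller's retry interval grows between failed attempts (interval="800ms" above, interval="1.6s" shortly after) while the API server at 10.0.0.138:6443 still refuses connections. A minimal sketch of that capped-doubling retry pattern, assuming a factor of 2 and an arbitrary cap, since the kubelet's exact backoff parameters are not visible in the log:

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// dialWithBackoff retries a TCP dial, doubling the wait after each
// failure up to a cap — the shape of the kubelet's lease retries.
func dialWithBackoff(addr string, initial, max time.Duration, attempts int) (net.Conn, error) {
	interval := initial
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		fmt.Printf("attempt %d failed: %v; will retry, interval=%s\n", i+1, err, interval)
		time.Sleep(interval)
		if interval *= 2; interval > max {
			interval = max
		}
	}
	return nil, errors.New("all attempts failed")
}

func main() {
	// 800ms -> 1.6s -> 3.2s ..., matching the intervals seen in the log.
	if _, err := dialWithBackoff("10.0.0.138:6443", 800*time.Millisecond, 7*time.Second, 5); err != nil {
		fmt.Println(err)
	}
}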
Jan 13 21:36:06.764794 kubelet[2189]: E0113 21:36:06.764729 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="1.6s"
Jan 13 21:36:06.768519 containerd[1434]: time="2025-01-13T21:36:06.768481165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a92ef1260efb27134f73841b7103d125,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaaedc96400d0c1f21a8e7e48db2c4c5d15f442633c70c0ef772835c15525e75\""
Jan 13 21:36:06.769855 kubelet[2189]: E0113 21:36:06.769796 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:06.772670 containerd[1434]: time="2025-01-13T21:36:06.772535484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6610efa9cc509a48beca9cfc9a9a55c26daf4f43355dbe3c9de10c9bb233339f\""
Jan 13 21:36:06.774342 containerd[1434]: time="2025-01-13T21:36:06.774228520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1e60b7f6d8f90d3fa36c3745e0cb67c26e03902baf6cc24589eea08e878421f\""
Jan 13 21:36:06.774464 kubelet[2189]: E0113 21:36:06.774300 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:06.774538 containerd[1434]: time="2025-01-13T21:36:06.774508238Z" level=info msg="CreateContainer within sandbox \"eaaedc96400d0c1f21a8e7e48db2c4c5d15f442633c70c0ef772835c15525e75\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 21:36:06.775495 kubelet[2189]: E0113 21:36:06.775473 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:06.776586 containerd[1434]: time="2025-01-13T21:36:06.776558977Z" level=info msg="CreateContainer within sandbox \"6610efa9cc509a48beca9cfc9a9a55c26daf4f43355dbe3c9de10c9bb233339f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 21:36:06.777845 containerd[1434]: time="2025-01-13T21:36:06.777798068Z" level=info msg="CreateContainer within sandbox \"c1e60b7f6d8f90d3fa36c3745e0cb67c26e03902baf6cc24589eea08e878421f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 21:36:06.794219 containerd[1434]: time="2025-01-13T21:36:06.794168555Z" level=info msg="CreateContainer within sandbox \"c1e60b7f6d8f90d3fa36c3745e0cb67c26e03902baf6cc24589eea08e878421f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"439cfb32dbe6679cbfdb452c9748b5c35f595c6bcb217a498497df2e0c3961bb\""
Jan 13 21:36:06.795065 containerd[1434]: time="2025-01-13T21:36:06.795037853Z" level=info msg="StartContainer for \"439cfb32dbe6679cbfdb452c9748b5c35f595c6bcb217a498497df2e0c3961bb\""
Jan 13 21:36:06.795631 containerd[1434]: time="2025-01-13T21:36:06.795601131Z" level=info msg="CreateContainer within sandbox \"6610efa9cc509a48beca9cfc9a9a55c26daf4f43355dbe3c9de10c9bb233339f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"af62ec606425241944d6c5fb9604f0d4e7323d4da68cd8fe2be755b66dfd5131\""
Jan 13 21:36:06.795943 containerd[1434]: time="2025-01-13T21:36:06.795912715Z" level=info msg="StartContainer for \"af62ec606425241944d6c5fb9604f0d4e7323d4da68cd8fe2be755b66dfd5131\""
Jan 13 21:36:06.798533 containerd[1434]: time="2025-01-13T21:36:06.798158820Z" level=info msg="CreateContainer within sandbox \"eaaedc96400d0c1f21a8e7e48db2c4c5d15f442633c70c0ef772835c15525e75\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2f0ecabf2de48d7358a964723e8aa70870c98d5665b1a0ff606596335c30e1a9\""
Jan 13 21:36:06.798631 containerd[1434]: time="2025-01-13T21:36:06.798588185Z" level=info msg="StartContainer for \"2f0ecabf2de48d7358a964723e8aa70870c98d5665b1a0ff606596335c30e1a9\""
Jan 13 21:36:06.815598 kubelet[2189]: W0113 21:36:06.815539 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 21:36:06.815598 kubelet[2189]: E0113 21:36:06.815600 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 21:36:06.821386 systemd[1]: Started cri-containerd-439cfb32dbe6679cbfdb452c9748b5c35f595c6bcb217a498497df2e0c3961bb.scope - libcontainer container 439cfb32dbe6679cbfdb452c9748b5c35f595c6bcb217a498497df2e0c3961bb.
Jan 13 21:36:06.822494 systemd[1]: Started cri-containerd-af62ec606425241944d6c5fb9604f0d4e7323d4da68cd8fe2be755b66dfd5131.scope - libcontainer container af62ec606425241944d6c5fb9604f0d4e7323d4da68cd8fe2be755b66dfd5131.
Jan 13 21:36:06.826026 systemd[1]: Started cri-containerd-2f0ecabf2de48d7358a964723e8aa70870c98d5665b1a0ff606596335c30e1a9.scope - libcontainer container 2f0ecabf2de48d7358a964723e8aa70870c98d5665b1a0ff606596335c30e1a9.
Jan 13 21:36:06.856292 containerd[1434]: time="2025-01-13T21:36:06.856233565Z" level=info msg="StartContainer for \"439cfb32dbe6679cbfdb452c9748b5c35f595c6bcb217a498497df2e0c3961bb\" returns successfully"
Jan 13 21:36:06.860089 containerd[1434]: time="2025-01-13T21:36:06.860039313Z" level=info msg="StartContainer for \"af62ec606425241944d6c5fb9604f0d4e7323d4da68cd8fe2be755b66dfd5131\" returns successfully"
Jan 13 21:36:06.867654 kubelet[2189]: I0113 21:36:06.867575 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:36:06.868353 kubelet[2189]: E0113 21:36:06.868287 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost"
Jan 13 21:36:06.869959 kubelet[2189]: W0113 21:36:06.869860 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 21:36:06.869959 kubelet[2189]: E0113 21:36:06.869942 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 21:36:06.870786 containerd[1434]: time="2025-01-13T21:36:06.870748598Z" level=info msg="StartContainer for \"2f0ecabf2de48d7358a964723e8aa70870c98d5665b1a0ff606596335c30e1a9\" returns successfully"
Jan 13 21:36:06.947136 kubelet[2189]: W0113 21:36:06.947076 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 21:36:06.947136 kubelet[2189]: E0113 21:36:06.947141 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 21:36:07.383283 kubelet[2189]: E0113 21:36:07.382583 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:07.385823 kubelet[2189]: E0113 21:36:07.385795 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:07.386336 kubelet[2189]: E0113 21:36:07.386190 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:08.406515 kubelet[2189]: E0113 21:36:08.405957 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:08.433744 kubelet[2189]: E0113 21:36:08.433688 2189 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 13 21:36:08.469455 kubelet[2189]: I0113 21:36:08.469424 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:36:08.683526 kubelet[2189]: I0113 21:36:08.683381 2189 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 13 21:36:08.693799 kubelet[2189]: E0113 21:36:08.693751 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:08.794761 kubelet[2189]: E0113 21:36:08.794709 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:08.895809 kubelet[2189]: E0113 21:36:08.895760 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:08.996399 kubelet[2189]: E0113 21:36:08.996296 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:09.096915 kubelet[2189]: E0113 21:36:09.096870 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:09.197440 kubelet[2189]: E0113 21:36:09.197401 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:09.268060 kubelet[2189]: E0113 21:36:09.267967 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:09.298132 kubelet[2189]: E0113 21:36:09.298075 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:09.398909 kubelet[2189]: E0113 21:36:09.398871 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:09.499810 kubelet[2189]: E0113 21:36:09.499773 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:09.600602 kubelet[2189]: E0113 21:36:09.600466 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:09.701223 kubelet[2189]: E0113 21:36:09.701172 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:09.801923 kubelet[2189]: E0113 21:36:09.801877 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:09.880999 kubelet[2189]: E0113 21:36:09.880896 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:09.903008 kubelet[2189]: E0113 21:36:09.902970 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:10.003574 kubelet[2189]: E0113 21:36:10.003535 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:10.355771 kubelet[2189]: I0113 21:36:10.355658 2189 apiserver.go:52] "Watching apiserver"
Jan 13 21:36:10.361219 kubelet[2189]: I0113 21:36:10.361187 2189 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 21:36:10.498840 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-7.scope)...
Jan 13 21:36:10.498858 systemd[1]: Reloading...
Jan 13 21:36:10.561284 zram_generator::config[2508]: No configuration found.
Jan 13 21:36:10.642729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:36:10.705808 systemd[1]: Reloading finished in 206 ms.
Jan 13 21:36:10.738612 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:36:10.748077 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 21:36:10.749312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:36:10.749370 systemd[1]: kubelet.service: Consumed 1.138s CPU time, 113.9M memory peak, 0B memory swap peak.
Jan 13 21:36:10.760552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:36:10.848654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:36:10.852080 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:36:10.889964 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:36:10.889964 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:36:10.889964 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:36:10.890328 kubelet[2550]: I0113 21:36:10.890005 2550 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:36:10.894013 kubelet[2550]: I0113 21:36:10.893936 2550 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 21:36:10.894013 kubelet[2550]: I0113 21:36:10.893960 2550 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:36:10.894555 kubelet[2550]: I0113 21:36:10.894522 2550 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 21:36:10.895839 kubelet[2550]: I0113 21:36:10.895814 2550 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 21:36:10.897092 kubelet[2550]: I0113 21:36:10.897060 2550 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:36:10.903348 kubelet[2550]: I0113 21:36:10.903321 2550 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:36:10.903538 kubelet[2550]: I0113 21:36:10.903512 2550 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:36:10.903734 kubelet[2550]: I0113 21:36:10.903536 2550 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:36:10.903830 kubelet[2550]: I0113 21:36:10.903747 2550 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:36:10.903830 kubelet[2550]: I0113 21:36:10.903759 2550 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:36:10.903830 kubelet[2550]: I0113 21:36:10.903804 2550 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:36:10.903926 kubelet[2550]: I0113 21:36:10.903903 2550 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 21:36:10.903926 kubelet[2550]: I0113 21:36:10.903920 2550 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:36:10.904143 kubelet[2550]: I0113 21:36:10.903987 2550 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:36:10.904181 kubelet[2550]: I0113 21:36:10.904006 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:36:10.904984 kubelet[2550]: I0113 21:36:10.904944 2550 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:36:10.905167 kubelet[2550]: I0113 21:36:10.905151 2550 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:36:10.905558 kubelet[2550]: I0113 21:36:10.905540 2550 server.go:1264] "Started kubelet"
Jan 13 21:36:10.912264 kubelet[2550]: I0113 21:36:10.911396 2550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:36:10.912264 kubelet[2550]: I0113 21:36:10.911648 2550 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:36:10.912264 kubelet[2550]: I0113 21:36:10.907281 2550 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:36:10.912406 kubelet[2550]: I0113 21:36:10.912387 2550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:36:10.913667 kubelet[2550]: E0113 21:36:10.913626 2550 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:36:10.913667 kubelet[2550]: I0113 21:36:10.913670 2550 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:36:10.913758 kubelet[2550]: I0113 21:36:10.913749 2550 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 21:36:10.913890 kubelet[2550]: I0113 21:36:10.913865 2550 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:36:10.914755 kubelet[2550]: I0113 21:36:10.914732 2550 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 21:36:10.925000 kubelet[2550]: I0113 21:36:10.924969 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:36:10.927380 kubelet[2550]: I0113 21:36:10.927362 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:36:10.927478 kubelet[2550]: I0113 21:36:10.927469 2550 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:36:10.927552 kubelet[2550]: I0113 21:36:10.927544 2550 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 21:36:10.928311 kubelet[2550]: E0113 21:36:10.927839 2550 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:36:10.928781 kubelet[2550]: E0113 21:36:10.928763 2550 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:36:10.931248 kubelet[2550]: I0113 21:36:10.927420 2550 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:36:10.931363 kubelet[2550]: I0113 21:36:10.931313 2550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:36:10.933255 kubelet[2550]: I0113 21:36:10.932812 2550 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:36:10.962379 kubelet[2550]: I0113 21:36:10.962356 2550 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:36:10.962379 kubelet[2550]: I0113 21:36:10.962375 2550 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:36:10.962488 kubelet[2550]: I0113 21:36:10.962393 2550 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:36:10.962588 kubelet[2550]: I0113 21:36:10.962554 2550 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 21:36:10.962611 kubelet[2550]: I0113 21:36:10.962589 2550 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 21:36:10.962638 kubelet[2550]: I0113 21:36:10.962615 2550 policy_none.go:49] "None policy: Start"
Jan 13 21:36:10.963126 kubelet[2550]: I0113 21:36:10.963101 2550 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:36:10.963126 kubelet[2550]: I0113 21:36:10.963125 2550 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:36:10.963316 kubelet[2550]: I0113 21:36:10.963300 2550 state_mem.go:75] "Updated machine memory state"
Jan 13 21:36:10.967887 kubelet[2550]: I0113 21:36:10.967867 2550 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:36:10.968051 kubelet[2550]: I0113 21:36:10.968020 2550 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:36:10.968306 kubelet[2550]: I0113 21:36:10.968103 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:36:11.017640 kubelet[2550]: I0113 21:36:11.017606 2550 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:36:11.023607 kubelet[2550]: I0113 21:36:11.023553 2550 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 13 21:36:11.023701 kubelet[2550]: I0113 21:36:11.023624 2550 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 13 21:36:11.027997 kubelet[2550]: I0113 21:36:11.027952 2550 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 13 21:36:11.028078 kubelet[2550]: I0113 21:36:11.028059 2550 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 13 21:36:11.028121 kubelet[2550]: I0113 21:36:11.028103 2550 topology_manager.go:215] "Topology Admit Handler" podUID="a92ef1260efb27134f73841b7103d125" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 13 21:36:11.114565 kubelet[2550]: I0113 21:36:11.114521 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:36:11.114565 kubelet[2550]: I0113 21:36:11.114563 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:36:11.114565 kubelet[2550]: I0113 21:36:11.114582 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:36:11.114740 kubelet[2550]: I0113 21:36:11.114598 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:36:11.114740 kubelet[2550]: I0113 21:36:11.114623 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:36:11.114740 kubelet[2550]: I0113 21:36:11.114640 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 21:36:11.114740 kubelet[2550]: I0113 21:36:11.114654 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a92ef1260efb27134f73841b7103d125-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a92ef1260efb27134f73841b7103d125\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:36:11.114740 kubelet[2550]: I0113 21:36:11.114668 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a92ef1260efb27134f73841b7103d125-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a92ef1260efb27134f73841b7103d125\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:36:11.114851 kubelet[2550]: I0113 21:36:11.114682 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a92ef1260efb27134f73841b7103d125-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a92ef1260efb27134f73841b7103d125\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:36:11.367057 kubelet[2550]: E0113 21:36:11.366952 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:11.368821 kubelet[2550]: E0113 21:36:11.368654 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:11.368821 kubelet[2550]: E0113 21:36:11.368739 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:11.905593 kubelet[2550]: I0113 21:36:11.905553 2550 apiserver.go:52] "Watching apiserver"
Jan 13 21:36:11.914285 kubelet[2550]: I0113 21:36:11.914233 2550 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 21:36:11.950273 kubelet[2550]: E0113 21:36:11.950211 2550 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 13 21:36:11.950720 kubelet[2550]: E0113 21:36:11.950696 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:11.950796 kubelet[2550]: E0113 21:36:11.950740 2550 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 13 21:36:11.950796 kubelet[2550]: E0113 21:36:11.950774 2550 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:36:11.951182 kubelet[2550]: E0113 21:36:11.951145 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:11.951799 kubelet[2550]: E0113 21:36:11.951127 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:11.995296 kubelet[2550]: I0113 21:36:11.995223 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.995207922 podStartE2EDuration="995.207922ms" podCreationTimestamp="2025-01-13 21:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:36:11.995136276 +0000 UTC m=+1.140071800" watchObservedRunningTime="2025-01-13 21:36:11.995207922 +0000 UTC m=+1.140143446"
Jan 13 21:36:12.019055 kubelet[2550]: I0113 21:36:12.018872 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.018860944 podStartE2EDuration="1.018860944s" podCreationTimestamp="2025-01-13 21:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:36:12.018122695 +0000 UTC m=+1.163058219" watchObservedRunningTime="2025-01-13 21:36:12.018860944 +0000 UTC m=+1.163796468"
Jan 13 21:36:12.024614 kubelet[2550]: I0113 21:36:12.024518 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.024509139 podStartE2EDuration="1.024509139s" podCreationTimestamp="2025-01-13 21:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:36:12.024499093 +0000 UTC m=+1.169434617" watchObservedRunningTime="2025-01-13 21:36:12.024509139 +0000 UTC m=+1.169444663"
Jan 13 21:36:12.956290 kubelet[2550]: E0113 21:36:12.956143 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:12.957649 kubelet[2550]: E0113 21:36:12.957061 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:12.957649 kubelet[2550]: E0113 21:36:12.957601 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:15.716653 sudo[1611]: pam_unix(sudo:session): session closed for user root
Jan 13 21:36:15.719717 sshd[1608]: pam_unix(sshd:session): session closed for user core
Jan 13 21:36:15.724795 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:36:15.724990 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:34218.service: Deactivated successfully.
Jan 13 21:36:15.727159 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:36:15.727974 systemd[1]: session-7.scope: Consumed 8.605s CPU time, 188.6M memory peak, 0B memory swap peak.
Jan 13 21:36:15.728718 systemd-logind[1415]: Removed session 7.
Jan 13 21:36:18.116519 update_engine[1422]: I20250113 21:36:18.116166 1422 update_attempter.cc:509] Updating boot flags...
Jan 13 21:36:18.149270 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2651)
Jan 13 21:36:18.175265 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2652)
Jan 13 21:36:19.671672 kubelet[2550]: E0113 21:36:19.671586 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:19.958866 kubelet[2550]: E0113 21:36:19.958761 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:20.720770 kubelet[2550]: E0113 21:36:20.720707 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:20.960622 kubelet[2550]: E0113 21:36:20.960529 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:21.520046 kubelet[2550]: E0113 21:36:21.520011 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:25.577601 kubelet[2550]: I0113 21:36:25.577558 2550 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 21:36:25.580583 containerd[1434]: time="2025-01-13T21:36:25.580533621Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 21:36:25.581720 kubelet[2550]: I0113 21:36:25.581134 2550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 21:36:26.506765 kubelet[2550]: I0113 21:36:26.506316 2550 topology_manager.go:215] "Topology Admit Handler" podUID="990d9bc7-5b1b-4153-b4d4-1688fd2beffb" podNamespace="kube-system" podName="kube-proxy-tvcjn"
Jan 13 21:36:26.516917 systemd[1]: Created slice kubepods-besteffort-pod990d9bc7_5b1b_4153_b4d4_1688fd2beffb.slice - libcontainer container kubepods-besteffort-pod990d9bc7_5b1b_4153_b4d4_1688fd2beffb.slice.
Jan 13 21:36:26.526311 kubelet[2550]: I0113 21:36:26.526277 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/990d9bc7-5b1b-4153-b4d4-1688fd2beffb-xtables-lock\") pod \"kube-proxy-tvcjn\" (UID: \"990d9bc7-5b1b-4153-b4d4-1688fd2beffb\") " pod="kube-system/kube-proxy-tvcjn"
Jan 13 21:36:26.526438 kubelet[2550]: I0113 21:36:26.526411 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/990d9bc7-5b1b-4153-b4d4-1688fd2beffb-kube-proxy\") pod \"kube-proxy-tvcjn\" (UID: \"990d9bc7-5b1b-4153-b4d4-1688fd2beffb\") " pod="kube-system/kube-proxy-tvcjn"
Jan 13 21:36:26.526568 kubelet[2550]: I0113 21:36:26.526443 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/990d9bc7-5b1b-4153-b4d4-1688fd2beffb-lib-modules\") pod \"kube-proxy-tvcjn\" (UID: \"990d9bc7-5b1b-4153-b4d4-1688fd2beffb\") " pod="kube-system/kube-proxy-tvcjn"
Jan 13 21:36:26.526568 kubelet[2550]: I0113 21:36:26.526461 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqzzk\" (UniqueName: \"kubernetes.io/projected/990d9bc7-5b1b-4153-b4d4-1688fd2beffb-kube-api-access-xqzzk\") pod \"kube-proxy-tvcjn\" (UID: \"990d9bc7-5b1b-4153-b4d4-1688fd2beffb\") " pod="kube-system/kube-proxy-tvcjn"
Jan 13 21:36:26.611377 kubelet[2550]: I0113 21:36:26.609454 2550 topology_manager.go:215] "Topology Admit Handler" podUID="e2608fa8-c694-4ff0-a187-bbf9d77cad00" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-r8tfd"
Jan 13 21:36:26.611377 kubelet[2550]: W0113 21:36:26.610893 2550 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object
Jan 13 21:36:26.611377 kubelet[2550]: E0113 21:36:26.610932 2550 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object
Jan 13 21:36:26.611377 kubelet[2550]: W0113 21:36:26.611170 2550 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object
Jan 13 21:36:26.611377 kubelet[2550]: E0113 21:36:26.611198 2550 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object
Jan 13 21:36:26.618335 systemd[1]: Created slice kubepods-besteffort-pode2608fa8_c694_4ff0_a187_bbf9d77cad00.slice - libcontainer container kubepods-besteffort-pode2608fa8_c694_4ff0_a187_bbf9d77cad00.slice.
Jan 13 21:36:26.627021 kubelet[2550]: I0113 21:36:26.626975 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hbgn\" (UniqueName: \"kubernetes.io/projected/e2608fa8-c694-4ff0-a187-bbf9d77cad00-kube-api-access-6hbgn\") pod \"tigera-operator-7bc55997bb-r8tfd\" (UID: \"e2608fa8-c694-4ff0-a187-bbf9d77cad00\") " pod="tigera-operator/tigera-operator-7bc55997bb-r8tfd"
Jan 13 21:36:26.627021 kubelet[2550]: I0113 21:36:26.627036 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2608fa8-c694-4ff0-a187-bbf9d77cad00-var-lib-calico\") pod \"tigera-operator-7bc55997bb-r8tfd\" (UID: \"e2608fa8-c694-4ff0-a187-bbf9d77cad00\") " pod="tigera-operator/tigera-operator-7bc55997bb-r8tfd"
Jan 13 21:36:26.829834 kubelet[2550]: E0113 21:36:26.829734 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:26.832323 containerd[1434]: time="2025-01-13T21:36:26.832206976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvcjn,Uid:990d9bc7-5b1b-4153-b4d4-1688fd2beffb,Namespace:kube-system,Attempt:0,}"
Jan 13 21:36:26.851524 containerd[1434]: time="2025-01-13T21:36:26.851036935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:36:26.851524 containerd[1434]: time="2025-01-13T21:36:26.851437258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:36:26.851524 containerd[1434]: time="2025-01-13T21:36:26.851450181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:26.851886 containerd[1434]: time="2025-01-13T21:36:26.851767278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:26.869481 systemd[1]: Started cri-containerd-23be2dca67637a7e8bf34f0287d60ab8269164ce0e8f6283831ca5d0524fb8e0.scope - libcontainer container 23be2dca67637a7e8bf34f0287d60ab8269164ce0e8f6283831ca5d0524fb8e0.
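The pod CIDR handed to the runtime a few entries earlier (CIDR="192.168.0.0/24") is this node's per-node pod address range; kube-proxy and the CNI plugin derive pod addressing from it. A quick membership check with the standard net/netip package:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The pod CIDR from the "Updating runtime config through cri with podcidr" entry.
	podCIDR := netip.MustParsePrefix("192.168.0.0/24")
	fmt.Println("pod CIDR:", podCIDR)

	// A /24 gives this node 256 addresses of pod address space.
	for _, a := range []string{"192.168.0.17", "192.168.1.17"} {
		addr := netip.MustParseAddr(a)
		fmt.Printf("%s in pod CIDR: %v\n", addr, podCIDR.Contains(addr))
	}
}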
Jan 13 21:36:26.887459 containerd[1434]: time="2025-01-13T21:36:26.887422583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvcjn,Uid:990d9bc7-5b1b-4153-b4d4-1688fd2beffb,Namespace:kube-system,Attempt:0,} returns sandbox id \"23be2dca67637a7e8bf34f0287d60ab8269164ce0e8f6283831ca5d0524fb8e0\""
Jan 13 21:36:26.890428 kubelet[2550]: E0113 21:36:26.890404 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:26.896107 containerd[1434]: time="2025-01-13T21:36:26.896065547Z" level=info msg="CreateContainer within sandbox \"23be2dca67637a7e8bf34f0287d60ab8269164ce0e8f6283831ca5d0524fb8e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:36:26.908102 containerd[1434]: time="2025-01-13T21:36:26.908054934Z" level=info msg="CreateContainer within sandbox \"23be2dca67637a7e8bf34f0287d60ab8269164ce0e8f6283831ca5d0524fb8e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d4055a1d26f932479442d4e6809aeb0e4e9ad5aba85d63f002e4659f2807910a\""
Jan 13 21:36:26.908855 containerd[1434]: time="2025-01-13T21:36:26.908826170Z" level=info msg="StartContainer for \"d4055a1d26f932479442d4e6809aeb0e4e9ad5aba85d63f002e4659f2807910a\""
Jan 13 21:36:26.932414 systemd[1]: Started cri-containerd-d4055a1d26f932479442d4e6809aeb0e4e9ad5aba85d63f002e4659f2807910a.scope - libcontainer container d4055a1d26f932479442d4e6809aeb0e4e9ad5aba85d63f002e4659f2807910a.
Jan 13 21:36:26.975136 containerd[1434]: time="2025-01-13T21:36:26.975057306Z" level=info msg="StartContainer for \"d4055a1d26f932479442d4e6809aeb0e4e9ad5aba85d63f002e4659f2807910a\" returns successfully"
Jan 13 21:36:27.823072 containerd[1434]: time="2025-01-13T21:36:27.823021236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-r8tfd,Uid:e2608fa8-c694-4ff0-a187-bbf9d77cad00,Namespace:tigera-operator,Attempt:0,}"
Jan 13 21:36:27.848320 containerd[1434]: time="2025-01-13T21:36:27.848204694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:36:27.848320 containerd[1434]: time="2025-01-13T21:36:27.848286918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:36:27.848320 containerd[1434]: time="2025-01-13T21:36:27.848303083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:27.848693 containerd[1434]: time="2025-01-13T21:36:27.848392629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:27.868468 systemd[1]: Started cri-containerd-5d7a7c1152c55d66ca2805bf32162691ad274726a410328da4928ee50de54564.scope - libcontainer container 5d7a7c1152c55d66ca2805bf32162691ad274726a410328da4928ee50de54564.
Jan 13 21:36:27.891813 containerd[1434]: time="2025-01-13T21:36:27.891776540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-r8tfd,Uid:e2608fa8-c694-4ff0-a187-bbf9d77cad00,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5d7a7c1152c55d66ca2805bf32162691ad274726a410328da4928ee50de54564\""
Jan 13 21:36:27.894004 containerd[1434]: time="2025-01-13T21:36:27.893970943Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 21:36:27.979860 kubelet[2550]: E0113 21:36:27.979832 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:27.987961 kubelet[2550]: I0113 21:36:27.987823 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tvcjn" podStartSLOduration=1.987808915 podStartE2EDuration="1.987808915s" podCreationTimestamp="2025-01-13 21:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:36:27.987262435 +0000 UTC m=+17.132197959" watchObservedRunningTime="2025-01-13 21:36:27.987808915 +0000 UTC m=+17.132744439"
Jan 13 21:36:33.782104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806816193.mount: Deactivated successfully.
Jan 13 21:36:34.041067 containerd[1434]: time="2025-01-13T21:36:34.040951866Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:34.041752 containerd[1434]: time="2025-01-13T21:36:34.041714476Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125356"
Jan 13 21:36:34.042381 containerd[1434]: time="2025-01-13T21:36:34.042351698Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:34.044655 containerd[1434]: time="2025-01-13T21:36:34.044626444Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:34.045594 containerd[1434]: time="2025-01-13T21:36:34.045558492Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 6.151551019s"
Jan 13 21:36:34.045594 containerd[1434]: time="2025-01-13T21:36:34.045592500Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 13 21:36:34.049773 containerd[1434]: time="2025-01-13T21:36:34.049738023Z" level=info msg="CreateContainer within sandbox \"5d7a7c1152c55d66ca2805bf32162691ad274726a410328da4928ee50de54564\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 21:36:34.061076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520674392.mount: Deactivated successfully.
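The tigera-operator pull above reports 19120155 bytes in 6.151551019s, which works out to just under 3 MiB/s of effective throughput; a one-off calculation confirming the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Numbers from the "Pulled image quay.io/tigera/operator:v1.36.2" entry.
	const sizeBytes = 19120155
	elapsed, _ := time.ParseDuration("6.151551019s")

	mibps := float64(sizeBytes) / elapsed.Seconds() / (1 << 20)
	fmt.Printf("effective pull throughput: %.2f MiB/s\n", mibps) // ~2.96 MiB/s
}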
Jan 13 21:36:34.062617 containerd[1434]: time="2025-01-13T21:36:34.062580524Z" level=info msg="CreateContainer within sandbox \"5d7a7c1152c55d66ca2805bf32162691ad274726a410328da4928ee50de54564\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f27987444ca9cb9eab373a0cad185013ff408e0106b022c7c2f4721fc50a872a\""
Jan 13 21:36:34.064008 containerd[1434]: time="2025-01-13T21:36:34.063953790Z" level=info msg="StartContainer for \"f27987444ca9cb9eab373a0cad185013ff408e0106b022c7c2f4721fc50a872a\""
Jan 13 21:36:34.092453 systemd[1]: Started cri-containerd-f27987444ca9cb9eab373a0cad185013ff408e0106b022c7c2f4721fc50a872a.scope - libcontainer container f27987444ca9cb9eab373a0cad185013ff408e0106b022c7c2f4721fc50a872a.
Jan 13 21:36:34.126503 containerd[1434]: time="2025-01-13T21:36:34.126452234Z" level=info msg="StartContainer for \"f27987444ca9cb9eab373a0cad185013ff408e0106b022c7c2f4721fc50a872a\" returns successfully"
Jan 13 21:36:37.780259 kubelet[2550]: I0113 21:36:37.780100 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-r8tfd" podStartSLOduration=5.625441232 podStartE2EDuration="11.780079744s" podCreationTimestamp="2025-01-13 21:36:26 +0000 UTC" firstStartedPulling="2025-01-13 21:36:27.892794918 +0000 UTC m=+17.037730442" lastFinishedPulling="2025-01-13 21:36:34.04743343 +0000 UTC m=+23.192368954" observedRunningTime="2025-01-13 21:36:35.002020538 +0000 UTC m=+24.146956062" watchObservedRunningTime="2025-01-13 21:36:37.780079744 +0000 UTC m=+26.925015268"
Jan 13 21:36:37.780672 kubelet[2550]: I0113 21:36:37.780354 2550 topology_manager.go:215] "Topology Admit Handler" podUID="0f2c1816-a914-407e-a25d-d5708ff95c0f" podNamespace="calico-system" podName="calico-typha-6bd94fdf8-8gnwf"
Jan 13 21:36:37.789301 systemd[1]: Created slice kubepods-besteffort-pod0f2c1816_a914_407e_a25d_d5708ff95c0f.slice - libcontainer container kubepods-besteffort-pod0f2c1816_a914_407e_a25d_d5708ff95c0f.slice.
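The tigera-operator startup-latency entry above makes the tracker's bookkeeping visible: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (11.780079744s), and podStartSLOduration additionally subtracts the image-pull window, lastFinishedPulling minus firstStartedPulling (6.154638512s), leaving 5.625441232s. The same arithmetic with the log's own timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps from the tigera-operator startup-latency entry
	// (monotonic m=+... suffixes dropped).
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-01-13 21:36:26 +0000 UTC")
	firstPull := parse("2025-01-13 21:36:27.892794918 +0000 UTC")
	lastPull := parse("2025-01-13 21:36:34.04743343 +0000 UTC")
	watched := parse("2025-01-13 21:36:37.780079744 +0000 UTC")

	e2e := watched.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // E2E minus time spent pulling images

	fmt.Println("podStartE2EDuration:", e2e) // 11.780079744s
	fmt.Println("podStartSLOduration:", slo) // 5.625441232s
}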
Jan 13 21:36:37.803598 kubelet[2550]: I0113 21:36:37.803502 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f2c1816-a914-407e-a25d-d5708ff95c0f-tigera-ca-bundle\") pod \"calico-typha-6bd94fdf8-8gnwf\" (UID: \"0f2c1816-a914-407e-a25d-d5708ff95c0f\") " pod="calico-system/calico-typha-6bd94fdf8-8gnwf"
Jan 13 21:36:37.803928 kubelet[2550]: I0113 21:36:37.803817 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0f2c1816-a914-407e-a25d-d5708ff95c0f-typha-certs\") pod \"calico-typha-6bd94fdf8-8gnwf\" (UID: \"0f2c1816-a914-407e-a25d-d5708ff95c0f\") " pod="calico-system/calico-typha-6bd94fdf8-8gnwf"
Jan 13 21:36:37.803928 kubelet[2550]: I0113 21:36:37.803887 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdpx\" (UniqueName: \"kubernetes.io/projected/0f2c1816-a914-407e-a25d-d5708ff95c0f-kube-api-access-ngdpx\") pod \"calico-typha-6bd94fdf8-8gnwf\" (UID: \"0f2c1816-a914-407e-a25d-d5708ff95c0f\") " pod="calico-system/calico-typha-6bd94fdf8-8gnwf"
Jan 13 21:36:37.890994 kubelet[2550]: I0113 21:36:37.890941 2550 topology_manager.go:215] "Topology Admit Handler" podUID="d3a91aed-3793-424b-b53e-f7eff9842cb7" podNamespace="calico-system" podName="calico-node-h89v9"
Jan 13 21:36:37.900174 systemd[1]: Created slice kubepods-besteffort-podd3a91aed_3793_424b_b53e_f7eff9842cb7.slice - libcontainer container kubepods-besteffort-podd3a91aed_3793_424b_b53e_f7eff9842cb7.slice.
Jan 13 21:36:37.904826 kubelet[2550]: I0113 21:36:37.904779 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d3a91aed-3793-424b-b53e-f7eff9842cb7-policysync\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.904826 kubelet[2550]: I0113 21:36:37.904821 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3a91aed-3793-424b-b53e-f7eff9842cb7-lib-modules\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.904990 kubelet[2550]: I0113 21:36:37.904841 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d3a91aed-3793-424b-b53e-f7eff9842cb7-cni-net-dir\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.904990 kubelet[2550]: I0113 21:36:37.904869 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d3a91aed-3793-424b-b53e-f7eff9842cb7-node-certs\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.904990 kubelet[2550]: I0113 21:36:37.904884 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d3a91aed-3793-424b-b53e-f7eff9842cb7-cni-bin-dir\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.904990 kubelet[2550]: I0113 21:36:37.904911 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d3a91aed-3793-424b-b53e-f7eff9842cb7-var-lib-calico\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.904990 kubelet[2550]: I0113 21:36:37.904928 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a91aed-3793-424b-b53e-f7eff9842cb7-tigera-ca-bundle\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.905128 kubelet[2550]: I0113 21:36:37.904984 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d3a91aed-3793-424b-b53e-f7eff9842cb7-cni-log-dir\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.905128 kubelet[2550]: I0113 21:36:37.905001 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d3a91aed-3793-424b-b53e-f7eff9842cb7-flexvol-driver-host\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.905128 kubelet[2550]: I0113 21:36:37.905019 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3a91aed-3793-424b-b53e-f7eff9842cb7-xtables-lock\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.905128 kubelet[2550]: I0113 21:36:37.905035 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d3a91aed-3793-424b-b53e-f7eff9842cb7-var-run-calico\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:37.905128 kubelet[2550]: I0113 21:36:37.905057 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn849\" (UniqueName: \"kubernetes.io/projected/d3a91aed-3793-424b-b53e-f7eff9842cb7-kube-api-access-gn849\") pod \"calico-node-h89v9\" (UID: \"d3a91aed-3793-424b-b53e-f7eff9842cb7\") " pod="calico-system/calico-node-h89v9"
Jan 13 21:36:38.012858 kubelet[2550]: E0113 21:36:38.012813 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:38.012858 kubelet[2550]: W0113 21:36:38.012842 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:38.012858 kubelet[2550]: E0113 21:36:38.012863 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:36:38.013366 kubelet[2550]: E0113 21:36:38.013343 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:38.013366 kubelet[2550]: W0113 21:36:38.013361 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:38.013366 kubelet[2550]: E0113 21:36:38.013372 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:36:38.013573 kubelet[2550]: I0113 21:36:38.013550 2550 topology_manager.go:215] "Topology Admit Handler" podUID="c152f2aa-4163-46d5-8b4d-dd73349b1e5d" podNamespace="calico-system" podName="csi-node-driver-k28zl"
Jan 13 21:36:38.013852 kubelet[2550]: E0113 21:36:38.013818 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k28zl" podUID="c152f2aa-4163-46d5-8b4d-dd73349b1e5d"
Jan 13 21:36:38.029190 kubelet[2550]: E0113 21:36:38.029149 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:38.029190 kubelet[2550]: W0113 21:36:38.029175 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:38.029190 kubelet[2550]: E0113 21:36:38.029196 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:36:38.094884 kubelet[2550]: E0113 21:36:38.094772 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:38.095213 containerd[1434]: time="2025-01-13T21:36:38.095177334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bd94fdf8-8gnwf,Uid:0f2c1816-a914-407e-a25d-d5708ff95c0f,Namespace:calico-system,Attempt:0,}"
Jan 13 21:36:38.098550 kubelet[2550]: E0113 21:36:38.094586 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:38.098550 kubelet[2550]: W0113 21:36:38.098549 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:38.098550 kubelet[2550]: E0113 21:36:38.098569 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:36:38.099081 kubelet[2550]: E0113 21:36:38.098998 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:38.099081 kubelet[2550]: W0113 21:36:38.099010 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:38.099081 kubelet[2550]: E0113 21:36:38.099019 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:36:38.099184 kubelet[2550]: E0113 21:36:38.099167 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:38.099184 kubelet[2550]: W0113 21:36:38.099175 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:38.099228 kubelet[2550]: E0113 21:36:38.099183 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:36:38.099385 kubelet[2550]: E0113 21:36:38.099369 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:38.099385 kubelet[2550]: W0113 21:36:38.099379 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:38.099769 kubelet[2550]: E0113 21:36:38.099388 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:36:38.099769 kubelet[2550]: E0113 21:36:38.099616 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:38.099769 kubelet[2550]: W0113 21:36:38.099622 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:38.099769 kubelet[2550]: E0113 21:36:38.099629 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:36:38.099860 kubelet[2550]: E0113 21:36:38.099779 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:38.099860 kubelet[2550]: W0113 21:36:38.099786 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:38.099860 kubelet[2550]: E0113 21:36:38.099793 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 13 21:36:38.099955 kubelet[2550]: E0113 21:36:38.099942 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.099955 kubelet[2550]: W0113 21:36:38.099952 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.100014 kubelet[2550]: E0113 21:36:38.099959 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.100120 kubelet[2550]: E0113 21:36:38.100108 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.100145 kubelet[2550]: W0113 21:36:38.100119 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.100145 kubelet[2550]: E0113 21:36:38.100128 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.100287 kubelet[2550]: E0113 21:36:38.100275 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.100321 kubelet[2550]: W0113 21:36:38.100286 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.100321 kubelet[2550]: E0113 21:36:38.100297 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.100445 kubelet[2550]: E0113 21:36:38.100434 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.100469 kubelet[2550]: W0113 21:36:38.100445 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.100469 kubelet[2550]: E0113 21:36:38.100452 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.100605 kubelet[2550]: E0113 21:36:38.100581 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.100605 kubelet[2550]: W0113 21:36:38.100591 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.100605 kubelet[2550]: E0113 21:36:38.100598 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:36:38.100858 kubelet[2550]: E0113 21:36:38.100775 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.100858 kubelet[2550]: W0113 21:36:38.100786 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.100858 kubelet[2550]: E0113 21:36:38.100794 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.101029 kubelet[2550]: E0113 21:36:38.101016 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.101029 kubelet[2550]: W0113 21:36:38.101026 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.101078 kubelet[2550]: E0113 21:36:38.101033 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.101191 kubelet[2550]: E0113 21:36:38.101179 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.101191 kubelet[2550]: W0113 21:36:38.101189 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.101263 kubelet[2550]: E0113 21:36:38.101196 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.101416 kubelet[2550]: E0113 21:36:38.101401 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.101447 kubelet[2550]: W0113 21:36:38.101418 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.101447 kubelet[2550]: E0113 21:36:38.101428 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.101623 kubelet[2550]: E0113 21:36:38.101610 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.101623 kubelet[2550]: W0113 21:36:38.101622 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.101711 kubelet[2550]: E0113 21:36:38.101630 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:36:38.101834 kubelet[2550]: E0113 21:36:38.101821 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.101834 kubelet[2550]: W0113 21:36:38.101833 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.101876 kubelet[2550]: E0113 21:36:38.101841 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.102009 kubelet[2550]: E0113 21:36:38.101996 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.102009 kubelet[2550]: W0113 21:36:38.102007 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.102081 kubelet[2550]: E0113 21:36:38.102014 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.102177 kubelet[2550]: E0113 21:36:38.102165 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.102177 kubelet[2550]: W0113 21:36:38.102175 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.102219 kubelet[2550]: E0113 21:36:38.102182 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.102369 kubelet[2550]: E0113 21:36:38.102355 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.102369 kubelet[2550]: W0113 21:36:38.102366 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.102445 kubelet[2550]: E0113 21:36:38.102374 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.106924 kubelet[2550]: E0113 21:36:38.106885 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.106924 kubelet[2550]: W0113 21:36:38.106921 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.107039 kubelet[2550]: E0113 21:36:38.106934 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:36:38.107039 kubelet[2550]: I0113 21:36:38.106962 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fqd\" (UniqueName: \"kubernetes.io/projected/c152f2aa-4163-46d5-8b4d-dd73349b1e5d-kube-api-access-75fqd\") pod \"csi-node-driver-k28zl\" (UID: \"c152f2aa-4163-46d5-8b4d-dd73349b1e5d\") " pod="calico-system/csi-node-driver-k28zl" Jan 13 21:36:38.107220 kubelet[2550]: E0113 21:36:38.107201 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.107311 kubelet[2550]: W0113 21:36:38.107222 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.107311 kubelet[2550]: E0113 21:36:38.107242 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.107311 kubelet[2550]: I0113 21:36:38.107276 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c152f2aa-4163-46d5-8b4d-dd73349b1e5d-varrun\") pod \"csi-node-driver-k28zl\" (UID: \"c152f2aa-4163-46d5-8b4d-dd73349b1e5d\") " pod="calico-system/csi-node-driver-k28zl" Jan 13 21:36:38.107500 kubelet[2550]: E0113 21:36:38.107485 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.107500 kubelet[2550]: W0113 21:36:38.107499 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.107554 kubelet[2550]: E0113 21:36:38.107513 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.107554 kubelet[2550]: I0113 21:36:38.107528 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c152f2aa-4163-46d5-8b4d-dd73349b1e5d-kubelet-dir\") pod \"csi-node-driver-k28zl\" (UID: \"c152f2aa-4163-46d5-8b4d-dd73349b1e5d\") " pod="calico-system/csi-node-driver-k28zl" Jan 13 21:36:38.107833 kubelet[2550]: E0113 21:36:38.107816 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.107833 kubelet[2550]: W0113 21:36:38.107830 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.107900 kubelet[2550]: E0113 21:36:38.107849 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:36:38.107900 kubelet[2550]: I0113 21:36:38.107867 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c152f2aa-4163-46d5-8b4d-dd73349b1e5d-socket-dir\") pod \"csi-node-driver-k28zl\" (UID: \"c152f2aa-4163-46d5-8b4d-dd73349b1e5d\") " pod="calico-system/csi-node-driver-k28zl" Jan 13 21:36:38.108121 kubelet[2550]: E0113 21:36:38.108100 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.108121 kubelet[2550]: W0113 21:36:38.108118 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.108181 kubelet[2550]: E0113 21:36:38.108132 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.108181 kubelet[2550]: I0113 21:36:38.108148 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c152f2aa-4163-46d5-8b4d-dd73349b1e5d-registration-dir\") pod \"csi-node-driver-k28zl\" (UID: \"c152f2aa-4163-46d5-8b4d-dd73349b1e5d\") " pod="calico-system/csi-node-driver-k28zl" Jan 13 21:36:38.108668 kubelet[2550]: E0113 21:36:38.108433 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.108668 kubelet[2550]: W0113 21:36:38.108449 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.108668 kubelet[2550]: E0113 21:36:38.108534 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.108668 kubelet[2550]: E0113 21:36:38.108666 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.108668 kubelet[2550]: W0113 21:36:38.108674 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.108832 kubelet[2550]: E0113 21:36:38.108761 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.108832 kubelet[2550]: E0113 21:36:38.108817 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.108884 kubelet[2550]: W0113 21:36:38.108834 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.110283 kubelet[2550]: E0113 21:36:38.108918 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:36:38.110283 kubelet[2550]: E0113 21:36:38.108968 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.110283 kubelet[2550]: W0113 21:36:38.108995 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.110283 kubelet[2550]: E0113 21:36:38.109045 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.110283 kubelet[2550]: E0113 21:36:38.109191 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.110283 kubelet[2550]: W0113 21:36:38.109199 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.110283 kubelet[2550]: E0113 21:36:38.109269 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.110283 kubelet[2550]: E0113 21:36:38.109481 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.110283 kubelet[2550]: W0113 21:36:38.109491 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.110283 kubelet[2550]: E0113 21:36:38.109502 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.110550 kubelet[2550]: E0113 21:36:38.109705 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.110550 kubelet[2550]: W0113 21:36:38.109713 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.110550 kubelet[2550]: E0113 21:36:38.109723 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.110550 kubelet[2550]: E0113 21:36:38.109876 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.110550 kubelet[2550]: W0113 21:36:38.109883 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.110550 kubelet[2550]: E0113 21:36:38.109891 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:36:38.110550 kubelet[2550]: E0113 21:36:38.110043 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.110550 kubelet[2550]: W0113 21:36:38.110050 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.110550 kubelet[2550]: E0113 21:36:38.110058 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.110550 kubelet[2550]: E0113 21:36:38.110189 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:38.111420 kubelet[2550]: W0113 21:36:38.110196 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:38.111420 kubelet[2550]: E0113 21:36:38.110203 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:38.119699 containerd[1434]: time="2025-01-13T21:36:38.119120799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:36:38.119699 containerd[1434]: time="2025-01-13T21:36:38.119550123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:36:38.120652 containerd[1434]: time="2025-01-13T21:36:38.119572727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:36:38.120652 containerd[1434]: time="2025-01-13T21:36:38.119658224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:36:38.149414 systemd[1]: Started cri-containerd-c8bc66d4a0e343666253002ee7019f0d8ed1c56cfb7236b16377bf4682568b94.scope - libcontainer container c8bc66d4a0e343666253002ee7019f0d8ed1c56cfb7236b16377bf4682568b94. 
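The driver-call.go:262 failures above follow directly from the missing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds binary: the "init" call produces no output at all, and decoding an empty byte slice as JSON fails before any driver status can be read. A minimal Go sketch of that failure mode (the DriverStatus struct here is a simplified, hypothetical stand-in for kubelet's internal type):

package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus stands in for the structure kubelet expects a FlexVolume
// driver to print on stdout (assumption: fields simplified for illustration).
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// The uds executable is not on $PATH, so the "init" call yields no bytes.
	var st DriverStatus
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // prints: unexpected end of JSON input
}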
Jan 13 21:36:38.175419 containerd[1434]: time="2025-01-13T21:36:38.175355995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bd94fdf8-8gnwf,Uid:0f2c1816-a914-407e-a25d-d5708ff95c0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8bc66d4a0e343666253002ee7019f0d8ed1c56cfb7236b16377bf4682568b94\""
Jan 13 21:36:38.176396 kubelet[2550]: E0113 21:36:38.176363 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:38.177404 containerd[1434]: time="2025-01-13T21:36:38.177371588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 21:36:38.203385 kubelet[2550]: E0113 21:36:38.203347 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:38.203796 containerd[1434]: time="2025-01-13T21:36:38.203720361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h89v9,Uid:d3a91aed-3793-424b-b53e-f7eff9842cb7,Namespace:calico-system,Attempt:0,}"
Jan 13 21:36:38.250613 containerd[1434]: time="2025-01-13T21:36:38.250468109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:36:38.250613 containerd[1434]: time="2025-01-13T21:36:38.250519959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:36:38.250613 containerd[1434]: time="2025-01-13T21:36:38.250538763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:38.250799 containerd[1434]: time="2025-01-13T21:36:38.250613858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:36:38.273397 systemd[1]: Started cri-containerd-93514347ed94e86c4fe9a1c6caf54b318a607c7222735a975e57dafb92e14e7b.scope - libcontainer container 93514347ed94e86c4fe9a1c6caf54b318a607c7222735a975e57dafb92e14e7b.
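The recurring dns.go:153 records come from kubelet trimming the node's resolver list: the classic resolv.conf limit is three nameservers, so everything past the first three is dropped and the applied line is logged. A sketch of that trimming, assuming the three-entry limit and a hypothetical four-entry host list (the fourth address is invented; the log only shows the applied line):

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the traditional glibc resolver limit that kubelet's
// dns.go enforces; treat the exact constant as an assumption here.
const maxNameservers = 3

// applyNameserverLimit keeps only the first maxNameservers entries,
// matching the "some nameservers have been omitted" behavior above.
func applyNameserverLimit(ns []string) []string {
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // hypothetical host resolv.conf
	fmt.Println("applied nameserver line:", strings.Join(applyNameserverLimit(host), " "))
	// prints: applied nameserver line: 1.1.1.1 1.0.0.1 8.8.8.8
}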
Jan 13 21:36:38.295095 containerd[1434]: time="2025-01-13T21:36:38.295031751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h89v9,Uid:d3a91aed-3793-424b-b53e-f7eff9842cb7,Namespace:calico-system,Attempt:0,} returns sandbox id \"93514347ed94e86c4fe9a1c6caf54b318a607c7222735a975e57dafb92e14e7b\""
Jan 13 21:36:38.295869 kubelet[2550]: E0113 21:36:38.295840 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:39.201515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount563648716.mount: Deactivated successfully.
Jan 13 21:36:39.795709 containerd[1434]: time="2025-01-13T21:36:39.795648957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:39.797061 containerd[1434]: time="2025-01-13T21:36:39.797014615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 13 21:36:39.797884 containerd[1434]: time="2025-01-13T21:36:39.797842251Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:39.800032 containerd[1434]: time="2025-01-13T21:36:39.799988536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:39.800918 containerd[1434]: time="2025-01-13T21:36:39.800790688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.623382373s"
Jan 13 21:36:39.800918 containerd[1434]: time="2025-01-13T21:36:39.800822174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 13 21:36:39.802531 containerd[1434]: time="2025-01-13T21:36:39.802366666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 21:36:39.815546 containerd[1434]: time="2025-01-13T21:36:39.815427973Z" level=info msg="CreateContainer within sandbox \"c8bc66d4a0e343666253002ee7019f0d8ed1c56cfb7236b16377bf4682568b94\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 21:36:39.883586 containerd[1434]: time="2025-01-13T21:36:39.883451222Z" level=info msg="CreateContainer within sandbox \"c8bc66d4a0e343666253002ee7019f0d8ed1c56cfb7236b16377bf4682568b94\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"09518e3e8e47a4e6c0aedfcad8771e01950153b8f6b4b96e90921c667addb197\""
Jan 13 21:36:39.885273 containerd[1434]: time="2025-01-13T21:36:39.884110707Z" level=info msg="StartContainer for \"09518e3e8e47a4e6c0aedfcad8771e01950153b8f6b4b96e90921c667addb197\""
Jan 13 21:36:39.911426 systemd[1]: Started cri-containerd-09518e3e8e47a4e6c0aedfcad8771e01950153b8f6b4b96e90921c667addb197.scope - libcontainer container 09518e3e8e47a4e6c0aedfcad8771e01950153b8f6b4b96e90921c667addb197.
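The typha image pull is bounded by the PullImage and Pulled records above; subtracting their event timestamps approximates the reported "in 1.623382373s" (containerd times the operation internally, so the two figures differ slightly). A quick Go check using the timestamps copied from those two records:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Event timestamps taken verbatim from the containerd records above.
	start, err := time.Parse(time.RFC3339Nano, "2025-01-13T21:36:38.177371588Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2025-01-13T21:36:39.800790688Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(done.Sub(start)) // 1.6234191s, close to the logged 1.623382373s
}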
Jan 13 21:36:39.929325 kubelet[2550]: E0113 21:36:39.928417 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k28zl" podUID="c152f2aa-4163-46d5-8b4d-dd73349b1e5d"
Jan 13 21:36:39.951478 containerd[1434]: time="2025-01-13T21:36:39.951423422Z" level=info msg="StartContainer for \"09518e3e8e47a4e6c0aedfcad8771e01950153b8f6b4b96e90921c667addb197\" returns successfully"
Jan 13 21:36:40.001207 kubelet[2550]: E0113 21:36:40.001180 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[... the same three FlexVolume probe-failure records resume at Jan 13 21:36:40.015404 and repeat; duplicate occurrences elided ...]
Jan 13 21:36:40.019891 kubelet[2550]: I0113 21:36:40.019072 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bd94fdf8-8gnwf" podStartSLOduration=1.394194363 podStartE2EDuration="3.019059218s" podCreationTimestamp="2025-01-13 21:36:37 +0000 UTC" firstStartedPulling="2025-01-13 21:36:38.177044084 +0000 UTC m=+27.321979568" lastFinishedPulling="2025-01-13 21:36:39.801908819 +0000 UTC m=+28.946844423" observedRunningTime="2025-01-13 21:36:40.018001544 +0000 UTC m=+29.162937068" watchObservedRunningTime="2025-01-13 21:36:40.019059218 +0000 UTC m=+29.163994742"
Error: unexpected end of JSON input" Jan 13 21:36:40.019891 kubelet[2550]: E0113 21:36:40.019475 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.019891 kubelet[2550]: W0113 21:36:40.019486 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.020849 kubelet[2550]: E0113 21:36:40.019496 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.020849 kubelet[2550]: E0113 21:36:40.019681 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.020849 kubelet[2550]: W0113 21:36:40.019690 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.020849 kubelet[2550]: E0113 21:36:40.019705 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.024290 kubelet[2550]: E0113 21:36:40.024075 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.024290 kubelet[2550]: W0113 21:36:40.024120 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.024290 kubelet[2550]: E0113 21:36:40.024137 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.024712 kubelet[2550]: E0113 21:36:40.024591 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.024712 kubelet[2550]: W0113 21:36:40.024604 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.024712 kubelet[2550]: E0113 21:36:40.024621 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.024947 kubelet[2550]: E0113 21:36:40.024927 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.024947 kubelet[2550]: W0113 21:36:40.024944 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.025028 kubelet[2550]: E0113 21:36:40.024961 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:36:40.025245 kubelet[2550]: E0113 21:36:40.025224 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.025297 kubelet[2550]: W0113 21:36:40.025281 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.025297 kubelet[2550]: E0113 21:36:40.025302 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.025618 kubelet[2550]: E0113 21:36:40.025604 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.025752 kubelet[2550]: W0113 21:36:40.025707 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.026007 kubelet[2550]: E0113 21:36:40.025941 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.026128 kubelet[2550]: E0113 21:36:40.026113 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.026344 kubelet[2550]: W0113 21:36:40.026170 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.026344 kubelet[2550]: E0113 21:36:40.026193 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.026508 kubelet[2550]: E0113 21:36:40.026493 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.026563 kubelet[2550]: W0113 21:36:40.026552 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.026682 kubelet[2550]: E0113 21:36:40.026625 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.026949 kubelet[2550]: E0113 21:36:40.026861 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.026949 kubelet[2550]: W0113 21:36:40.026873 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.026949 kubelet[2550]: E0113 21:36:40.026902 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:36:40.027185 kubelet[2550]: E0113 21:36:40.027127 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.027185 kubelet[2550]: W0113 21:36:40.027139 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.027293 kubelet[2550]: E0113 21:36:40.027186 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.027608 kubelet[2550]: E0113 21:36:40.027495 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.027608 kubelet[2550]: W0113 21:36:40.027509 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.027608 kubelet[2550]: E0113 21:36:40.027583 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.028215 kubelet[2550]: E0113 21:36:40.027963 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.028215 kubelet[2550]: W0113 21:36:40.027977 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.028215 kubelet[2550]: E0113 21:36:40.027993 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.028634 kubelet[2550]: E0113 21:36:40.028541 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.028634 kubelet[2550]: W0113 21:36:40.028556 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.028772 kubelet[2550]: E0113 21:36:40.028736 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.028891 kubelet[2550]: E0113 21:36:40.028845 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.028891 kubelet[2550]: W0113 21:36:40.028855 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.029146 kubelet[2550]: E0113 21:36:40.028990 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:36:40.029401 kubelet[2550]: E0113 21:36:40.029384 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.029558 kubelet[2550]: W0113 21:36:40.029456 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.029558 kubelet[2550]: E0113 21:36:40.029475 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.029944 kubelet[2550]: E0113 21:36:40.029856 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.029944 kubelet[2550]: W0113 21:36:40.029871 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.029944 kubelet[2550]: E0113 21:36:40.029882 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.030665 kubelet[2550]: E0113 21:36:40.030308 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.030665 kubelet[2550]: W0113 21:36:40.030333 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.030665 kubelet[2550]: E0113 21:36:40.030366 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.030665 kubelet[2550]: E0113 21:36:40.030650 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.030665 kubelet[2550]: W0113 21:36:40.030665 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.030838 kubelet[2550]: E0113 21:36:40.030695 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:36:40.031315 kubelet[2550]: E0113 21:36:40.031297 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:36:40.031365 kubelet[2550]: W0113 21:36:40.031315 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:36:40.031365 kubelet[2550]: E0113 21:36:40.031339 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
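
The repeating triplet above is the kubelet's FlexVolume prober: each probe cycle execs every driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument init and expects a JSON status object on stdout. The nodeagent~uds/uds binary is absent here, so the exec fails, stdout stays empty, and unmarshalling "" yields the "unexpected end of JSON input" error that then aborts plugin creation. A minimal Go sketch of that call convention (probeDriver and DriverStatus are illustrative names, not kubelet code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus mirrors the JSON a FlexVolume driver must print,
    // e.g. {"status":"Success","capabilities":{"attach":false}}.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    // probeDriver mimics the kubelet's init call: run the driver with
    // "init" and unmarshal whatever lands on stdout/stderr.
    func probeDriver(path string) (*DriverStatus, error) {
        out, execErr := exec.Command(path, "init").CombinedOutput()
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // A missing binary makes the exec fail (the log's "executable
            // file not found" line) and leaves out empty, so the unmarshal
            // fails with "unexpected end of JSON input" as in the log.
            return nil, fmt.Errorf("unmarshal %q: %v (exec: %v)", out, err, execErr)
        }
        return &st, nil
    }

    func main() {
        _, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println(err)
    }
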
Jan 13 21:36:41.003586 kubelet[2550]: I0113 21:36:41.003532 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:36:41.004145 kubelet[2550]: E0113 21:36:41.004111 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:41.028811 kubelet[2550]: E0113 21:36:41.028685 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:36:41.028811 kubelet[2550]: W0113 21:36:41.028707 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:36:41.028811 kubelet[2550]: E0113 21:36:41.028724 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same FlexVolume probe-failure triplet recurs roughly thirty more times between 21:36:41.028 and 21:36:41.042; only the sub-second timestamps differ, duplicates elided]
Jan 13 21:36:41.050909 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:52976.service - OpenSSH per-connection server daemon (10.0.0.1:52976).
Jan 13 21:36:41.098136 sshd[3255]: Accepted publickey for core from 10.0.0.1 port 52976 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:36:41.099777 sshd[3255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:36:41.103753 systemd-logind[1415]: New session 8 of user core.
Jan 13 21:36:41.112421 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:36:41.242644 sshd[3255]: pam_unix(sshd:session): session closed for user core
Jan 13 21:36:41.245611 systemd-logind[1415]: Session 8 logged out. Waiting for processes to exit.
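
The dns.go:153 events that recur through this log are the kubelet enforcing its nameserver cap when building pod DNS configuration: at most three servers are kept (matching the classic glibc MAXNS limit), so with more than three configured on the host only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied and the surplus is dropped. A sketch of that truncation, with a hypothetical fourth server (9.9.9.9) standing in for whatever extra entry this host's resolv.conf carries:

    package main

    import "fmt"

    // maxNameservers mirrors the kubelet's limit (3, the glibc MAXNS value).
    const maxNameservers = 3

    // capNameservers keeps the first three servers and reports whether
    // anything was omitted, which is what triggers the log event above.
    func capNameservers(ns []string) (applied []string, omitted bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        // 9.9.9.9 is an assumed stand-in for the host's surplus entry.
        applied, omitted := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
        fmt.Println(applied, omitted) // [1.1.1.1 1.0.0.1 8.8.8.8] true
    }
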
Jan 13 21:36:41.245888 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:52976.service: Deactivated successfully.
Jan 13 21:36:41.247427 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:36:41.248957 systemd-logind[1415]: Removed session 8.
Jan 13 21:36:41.845312 containerd[1434]: time="2025-01-13T21:36:41.845009890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:41.845878 containerd[1434]: time="2025-01-13T21:36:41.845849440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Jan 13 21:36:41.846570 containerd[1434]: time="2025-01-13T21:36:41.846539923Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:41.848681 containerd[1434]: time="2025-01-13T21:36:41.848652459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:41.849459 containerd[1434]: time="2025-01-13T21:36:41.849424637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 2.047028885s"
Jan 13 21:36:41.849527 containerd[1434]: time="2025-01-13T21:36:41.849460003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 13 21:36:41.852522 containerd[1434]: time="2025-01-13T21:36:41.852476100Z" level=info msg="CreateContainer within sandbox \"93514347ed94e86c4fe9a1c6caf54b318a607c7222735a975e57dafb92e14e7b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 21:36:41.867253 containerd[1434]: time="2025-01-13T21:36:41.867201723Z" level=info msg="CreateContainer within sandbox \"93514347ed94e86c4fe9a1c6caf54b318a607c7222735a975e57dafb92e14e7b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"408b0388a02da2a19be16475aaba0c630ee88279d0ea66afdea7d38ea2f26df7\""
Jan 13 21:36:41.867700 containerd[1434]: time="2025-01-13T21:36:41.867662445Z" level=info msg="StartContainer for \"408b0388a02da2a19be16475aaba0c630ee88279d0ea66afdea7d38ea2f26df7\""
Jan 13 21:36:41.894430 systemd[1]: Started cri-containerd-408b0388a02da2a19be16475aaba0c630ee88279d0ea66afdea7d38ea2f26df7.scope - libcontainer container 408b0388a02da2a19be16475aaba0c630ee88279d0ea66afdea7d38ea2f26df7.
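
The ImageCreate/Pulled/CreateContainer sequence above is containerd's CRI plugin fetching the flexvol-driver init-container image for the calico-node pod. For reference, pulling the same image by hand through containerd's public Go client looks roughly like this (a sketch, with error handling trimmed to log.Fatal):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Talk to the same containerd instance the kubelet uses.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI keeps its images and containers in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled:", img.Name()) // the reference the log reports
    }
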
Jan 13 21:36:41.923473 containerd[1434]: time="2025-01-13T21:36:41.923426658Z" level=info msg="StartContainer for \"408b0388a02da2a19be16475aaba0c630ee88279d0ea66afdea7d38ea2f26df7\" returns successfully"
Jan 13 21:36:41.928252 kubelet[2550]: E0113 21:36:41.928057 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k28zl" podUID="c152f2aa-4163-46d5-8b4d-dd73349b1e5d"
Jan 13 21:36:41.960009 systemd[1]: cri-containerd-408b0388a02da2a19be16475aaba0c630ee88279d0ea66afdea7d38ea2f26df7.scope: Deactivated successfully.
Jan 13 21:36:41.981216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-408b0388a02da2a19be16475aaba0c630ee88279d0ea66afdea7d38ea2f26df7-rootfs.mount: Deactivated successfully.
Jan 13 21:36:41.996583 containerd[1434]: time="2025-01-13T21:36:41.996518516Z" level=info msg="shim disconnected" id=408b0388a02da2a19be16475aaba0c630ee88279d0ea66afdea7d38ea2f26df7 namespace=k8s.io
Jan 13 21:36:41.996583 containerd[1434]: time="2025-01-13T21:36:41.996574526Z" level=warning msg="cleaning up after shim disconnected" id=408b0388a02da2a19be16475aaba0c630ee88279d0ea66afdea7d38ea2f26df7 namespace=k8s.io
Jan 13 21:36:41.996583 containerd[1434]: time="2025-01-13T21:36:41.996582688Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:36:42.005382 kubelet[2550]: E0113 21:36:42.005345 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:43.009175 kubelet[2550]: E0113 21:36:43.008285 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:43.009940 containerd[1434]: time="2025-01-13T21:36:43.009003165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 21:36:43.928322 kubelet[2550]: E0113 21:36:43.928273 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k28zl" podUID="c152f2aa-4163-46d5-8b4d-dd73349b1e5d"
Jan 13 21:36:45.928686 kubelet[2550]: E0113 21:36:45.928584 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k28zl" podUID="c152f2aa-4163-46d5-8b4d-dd73349b1e5d"
Jan 13 21:36:46.258552 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:39352.service - OpenSSH per-connection server daemon (10.0.0.1:39352).
Jan 13 21:36:46.301190 sshd[3346]: Accepted publickey for core from 10.0.0.1 port 39352 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:36:46.303759 sshd[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:36:46.310251 systemd-logind[1415]: New session 9 of user core.
Jan 13 21:36:46.318292 systemd[1]: Started session-9.scope - Session 9 of User core.
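
The recurring "cni plugin not initialized" failures for csi-node-driver-k28zl are expected at this stage: containerd reports NetworkReady=false until a CNI configuration file exists in its conf dir, and writing that file is exactly the job of the install-cni container whose image is being pulled here. A rough way to check the same condition from the node (assuming containerd's default conf dir, /etc/cni/net.d, and Calico's usual 10-calico.conflist file name):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // containerd's default CNI conf dir; install-cni drops a file like
        // 10-calico.conflist here once it has run.
        matches, err := filepath.Glob("/etc/cni/net.d/*.conf*")
        if err != nil || len(matches) == 0 {
            fmt.Println("network not ready: no CNI config found")
            os.Exit(1)
        }
        fmt.Println("CNI configs:", matches)
    }
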
Jan 13 21:36:46.471080 sshd[3346]: pam_unix(sshd:session): session closed for user core
Jan 13 21:36:46.474180 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:36:46.476414 systemd-logind[1415]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:36:46.476563 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:39352.service: Deactivated successfully.
Jan 13 21:36:46.479774 systemd-logind[1415]: Removed session 9.
Jan 13 21:36:46.878561 containerd[1434]: time="2025-01-13T21:36:46.878519526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:46.879849 containerd[1434]: time="2025-01-13T21:36:46.879713673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Jan 13 21:36:46.881706 containerd[1434]: time="2025-01-13T21:36:46.880727871Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:46.882875 containerd[1434]: time="2025-01-13T21:36:46.882839722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:46.883764 containerd[1434]: time="2025-01-13T21:36:46.883707458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.874670007s"
Jan 13 21:36:46.883873 containerd[1434]: time="2025-01-13T21:36:46.883856121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Jan 13 21:36:46.886093 containerd[1434]: time="2025-01-13T21:36:46.886067467Z" level=info msg="CreateContainer within sandbox \"93514347ed94e86c4fe9a1c6caf54b318a607c7222735a975e57dafb92e14e7b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 21:36:46.898508 containerd[1434]: time="2025-01-13T21:36:46.898475169Z" level=info msg="CreateContainer within sandbox \"93514347ed94e86c4fe9a1c6caf54b318a607c7222735a975e57dafb92e14e7b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"353813bc3dac4df8f0eeb2290576bf01f528797052f929c9d54075cac2d56a8d\""
Jan 13 21:36:46.899110 containerd[1434]: time="2025-01-13T21:36:46.898847588Z" level=info msg="StartContainer for \"353813bc3dac4df8f0eeb2290576bf01f528797052f929c9d54075cac2d56a8d\""
Jan 13 21:36:46.933405 systemd[1]: Started cri-containerd-353813bc3dac4df8f0eeb2290576bf01f528797052f929c9d54075cac2d56a8d.scope - libcontainer container 353813bc3dac4df8f0eeb2290576bf01f528797052f929c9d54075cac2d56a8d.
Jan 13 21:36:46.961357 containerd[1434]: time="2025-01-13T21:36:46.961314166Z" level=info msg="StartContainer for \"353813bc3dac4df8f0eeb2290576bf01f528797052f929c9d54075cac2d56a8d\" returns successfully"
Jan 13 21:36:47.016614 kubelet[2550]: E0113 21:36:47.016521 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:47.545785 systemd[1]: cri-containerd-353813bc3dac4df8f0eeb2290576bf01f528797052f929c9d54075cac2d56a8d.scope: Deactivated successfully.
Jan 13 21:36:47.564028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-353813bc3dac4df8f0eeb2290576bf01f528797052f929c9d54075cac2d56a8d-rootfs.mount: Deactivated successfully.
Jan 13 21:36:47.592445 containerd[1434]: time="2025-01-13T21:36:47.592382018Z" level=info msg="shim disconnected" id=353813bc3dac4df8f0eeb2290576bf01f528797052f929c9d54075cac2d56a8d namespace=k8s.io
Jan 13 21:36:47.592445 containerd[1434]: time="2025-01-13T21:36:47.592439187Z" level=warning msg="cleaning up after shim disconnected" id=353813bc3dac4df8f0eeb2290576bf01f528797052f929c9d54075cac2d56a8d namespace=k8s.io
Jan 13 21:36:47.592445 containerd[1434]: time="2025-01-13T21:36:47.592448508Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:36:47.623425 kubelet[2550]: I0113 21:36:47.623230 2550 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:36:47.643877 kubelet[2550]: I0113 21:36:47.643091 2550 topology_manager.go:215] "Topology Admit Handler" podUID="7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4z986"
Jan 13 21:36:47.643877 kubelet[2550]: I0113 21:36:47.643561 2550 topology_manager.go:215] "Topology Admit Handler" podUID="6031707c-fbd2-45fb-819f-7634d8a3b502" podNamespace="calico-system" podName="calico-kube-controllers-c685fc75-cgwpm"
Jan 13 21:36:47.647002 kubelet[2550]: I0113 21:36:47.646969 2550 topology_manager.go:215] "Topology Admit Handler" podUID="56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zvxj4"
Jan 13 21:36:47.648294 kubelet[2550]: I0113 21:36:47.647815 2550 topology_manager.go:215] "Topology Admit Handler" podUID="52f5badb-75da-429f-ac03-b7fa7b564ae8" podNamespace="calico-apiserver" podName="calico-apiserver-9789b5cdc-2zzjl"
Jan 13 21:36:47.650725 kubelet[2550]: I0113 21:36:47.650676 2550 topology_manager.go:215] "Topology Admit Handler" podUID="df727e07-6bc8-419a-bee3-8ba5a16f82f7" podNamespace="calico-apiserver" podName="calico-apiserver-9789b5cdc-2mfqn"
Jan 13 21:36:47.656366 systemd[1]: Created slice kubepods-burstable-pod7b2fa236_b9af_4d0f_a29e_6bc43e2ce6d1.slice - libcontainer container kubepods-burstable-pod7b2fa236_b9af_4d0f_a29e_6bc43e2ce6d1.slice.
Jan 13 21:36:47.664077 systemd[1]: Created slice kubepods-besteffort-pod6031707c_fbd2_45fb_819f_7634d8a3b502.slice - libcontainer container kubepods-besteffort-pod6031707c_fbd2_45fb_819f_7634d8a3b502.slice.
Jan 13 21:36:47.668607 systemd[1]: Created slice kubepods-besteffort-pod52f5badb_75da_429f_ac03_b7fa7b564ae8.slice - libcontainer container kubepods-besteffort-pod52f5badb_75da_429f_ac03_b7fa7b564ae8.slice.
Jan 13 21:36:47.674954 systemd[1]: Created slice kubepods-burstable-pod56c848b9_0e6b_4ed8_a9ca_fc40c4a3cd84.slice - libcontainer container kubepods-burstable-pod56c848b9_0e6b_4ed8_a9ca_fc40c4a3cd84.slice.
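
Each "Created slice" record here (and the one that follows) is the kubelet's systemd cgroup driver creating a per-pod cgroup. The unit name encodes the pod's QoS class and its UID, with the UID's dashes mapped to underscores because "-" is systemd's slice-hierarchy separator. A small sketch that reproduces the names visible in the log (for burstable and best-effort pods; guaranteed pods are laid out differently):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName rebuilds the systemd unit name as the cgroup driver
    // does: dashes in the UID become underscores so they are not read
    // as slice separators.
    func podSliceName(qosClass, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
    }

    func main() {
        fmt.Println(podSliceName("burstable", "7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1"))
        // kubepods-burstable-pod7b2fa236_b9af_4d0f_a29e_6bc43e2ce6d1.slice
    }
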
Jan 13 21:36:47.792317 kubelet[2550]: I0113 21:36:47.792278 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dksgd\" (UniqueName: \"kubernetes.io/projected/52f5badb-75da-429f-ac03-b7fa7b564ae8-kube-api-access-dksgd\") pod \"calico-apiserver-9789b5cdc-2zzjl\" (UID: \"52f5badb-75da-429f-ac03-b7fa7b564ae8\") " pod="calico-apiserver/calico-apiserver-9789b5cdc-2zzjl"
Jan 13 21:36:47.792317 kubelet[2550]: I0113 21:36:47.792320 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/df727e07-6bc8-419a-bee3-8ba5a16f82f7-calico-apiserver-certs\") pod \"calico-apiserver-9789b5cdc-2mfqn\" (UID: \"df727e07-6bc8-419a-bee3-8ba5a16f82f7\") " pod="calico-apiserver/calico-apiserver-9789b5cdc-2mfqn"
Jan 13 21:36:47.792478 kubelet[2550]: I0113 21:36:47.792345 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfw69\" (UniqueName: \"kubernetes.io/projected/56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84-kube-api-access-tfw69\") pod \"coredns-7db6d8ff4d-zvxj4\" (UID: \"56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84\") " pod="kube-system/coredns-7db6d8ff4d-zvxj4"
Jan 13 21:36:47.792478 kubelet[2550]: I0113 21:36:47.792380 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1-config-volume\") pod \"coredns-7db6d8ff4d-4z986\" (UID: \"7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1\") " pod="kube-system/coredns-7db6d8ff4d-4z986"
Jan 13 21:36:47.792478 kubelet[2550]: I0113 21:36:47.792400 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/52f5badb-75da-429f-ac03-b7fa7b564ae8-calico-apiserver-certs\") pod \"calico-apiserver-9789b5cdc-2zzjl\" (UID: \"52f5badb-75da-429f-ac03-b7fa7b564ae8\") " pod="calico-apiserver/calico-apiserver-9789b5cdc-2zzjl"
Jan 13 21:36:47.792478 kubelet[2550]: I0113 21:36:47.792419 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf2bb\" (UniqueName: \"kubernetes.io/projected/df727e07-6bc8-419a-bee3-8ba5a16f82f7-kube-api-access-pf2bb\") pod \"calico-apiserver-9789b5cdc-2mfqn\" (UID: \"df727e07-6bc8-419a-bee3-8ba5a16f82f7\") " pod="calico-apiserver/calico-apiserver-9789b5cdc-2mfqn"
Jan 13 21:36:47.792478 kubelet[2550]: I0113 21:36:47.792460 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26mlv\" (UniqueName: \"kubernetes.io/projected/7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1-kube-api-access-26mlv\") pod \"coredns-7db6d8ff4d-4z986\" (UID: \"7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1\") " pod="kube-system/coredns-7db6d8ff4d-4z986"
Jan 13 21:36:47.792664 kubelet[2550]: I0113 21:36:47.792509 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr8bs\" (UniqueName: \"kubernetes.io/projected/6031707c-fbd2-45fb-819f-7634d8a3b502-kube-api-access-rr8bs\") pod \"calico-kube-controllers-c685fc75-cgwpm\" (UID: \"6031707c-fbd2-45fb-819f-7634d8a3b502\") " pod="calico-system/calico-kube-controllers-c685fc75-cgwpm"
Jan 13 21:36:47.792664 kubelet[2550]: I0113 21:36:47.792533 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84-config-volume\") pod \"coredns-7db6d8ff4d-zvxj4\" (UID: \"56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84\") " pod="kube-system/coredns-7db6d8ff4d-zvxj4"
Jan 13 21:36:47.792664 kubelet[2550]: I0113 21:36:47.792581 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6031707c-fbd2-45fb-819f-7634d8a3b502-tigera-ca-bundle\") pod \"calico-kube-controllers-c685fc75-cgwpm\" (UID: \"6031707c-fbd2-45fb-819f-7634d8a3b502\") " pod="calico-system/calico-kube-controllers-c685fc75-cgwpm"
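The kube-api-access-* volumes being attached above are the service-account volumes kubelet injects into every pod: a projected volume combining a bound token, the cluster CA bundle, and the pod's namespace. A sketch of that shape using the public k8s.io/api types; the values (token path, 3607s expiry, kube-root-ca.crt) are the conventional upstream defaults, shown for illustration:

```go
// Sketch of a kubelet-injected "kube-api-access-*" projected volume,
// reconstructed with the public k8s.io/api types. Illustration only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	exp := int64(3607) // conventional default expiry (assumption)
	vol := corev1.Volume{
		Name: "kube-api-access-dksgd", // name taken from the log above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// bound service-account token
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &exp,
						Path:              "token",
					}},
					// cluster CA bundle
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// pod namespace via the downward API
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```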
Jan 13 21:36:47.933850 systemd[1]: Created slice kubepods-besteffort-podc152f2aa_4163_46d5_8b4d_dd73349b1e5d.slice - libcontainer container kubepods-besteffort-podc152f2aa_4163_46d5_8b4d_dd73349b1e5d.slice.
Jan 13 21:36:47.936212 containerd[1434]: time="2025-01-13T21:36:47.936170216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k28zl,Uid:c152f2aa-4163-46d5-8b4d-dd73349b1e5d,Namespace:calico-system,Attempt:0,}"
Jan 13 21:36:47.960806 kubelet[2550]: E0113 21:36:47.960695 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:47.961215 containerd[1434]: time="2025-01-13T21:36:47.961148638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4z986,Uid:7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1,Namespace:kube-system,Attempt:0,}"
Jan 13 21:36:47.967816 containerd[1434]: time="2025-01-13T21:36:47.967781973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c685fc75-cgwpm,Uid:6031707c-fbd2-45fb-819f-7634d8a3b502,Namespace:calico-system,Attempt:0,}"
Jan 13 21:36:47.978024 kubelet[2550]: E0113 21:36:47.977547 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:47.985372 containerd[1434]: time="2025-01-13T21:36:47.982509066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9789b5cdc-2zzjl,Uid:52f5badb-75da-429f-ac03-b7fa7b564ae8,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:36:47.985372 containerd[1434]: time="2025-01-13T21:36:47.982757184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zvxj4,Uid:56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84,Namespace:kube-system,Attempt:0,}"
Jan 13 21:36:47.989220 containerd[1434]: time="2025-01-13T21:36:47.986044527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9789b5cdc-2mfqn,Uid:df727e07-6bc8-419a-bee3-8ba5a16f82f7,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:36:48.028295 kubelet[2550]: E0113 21:36:48.028052 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:48.033691 containerd[1434]: time="2025-01-13T21:36:48.033428753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 21:36:48.259953 containerd[1434]: time="2025-01-13T21:36:48.259815278Z" level=error msg="Failed to destroy network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.260734 containerd[1434]: time="2025-01-13T21:36:48.260415848Z" level=error msg="Failed to destroy network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.262594 containerd[1434]: time="2025-01-13T21:36:48.262549407Z" level=error msg="encountered an error cleaning up failed sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.262686 containerd[1434]: time="2025-01-13T21:36:48.262621258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9789b5cdc-2zzjl,Uid:52f5badb-75da-429f-ac03-b7fa7b564ae8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.263363 containerd[1434]: time="2025-01-13T21:36:48.263320363Z" level=error msg="encountered an error cleaning up failed sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.263432 containerd[1434]: time="2025-01-13T21:36:48.263383692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9789b5cdc-2mfqn,Uid:df727e07-6bc8-419a-bee3-8ba5a16f82f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.266538 kubelet[2550]: E0113 21:36:48.266333 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.266538 kubelet[2550]: E0113 21:36:48.266418 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9789b5cdc-2mfqn"
Jan 13 21:36:48.266538 kubelet[2550]: E0113 21:36:48.266437 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9789b5cdc-2mfqn"
Jan 13 21:36:48.266689 kubelet[2550]: E0113 21:36:48.266484 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9789b5cdc-2mfqn_calico-apiserver(df727e07-6bc8-419a-bee3-8ba5a16f82f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9789b5cdc-2mfqn_calico-apiserver(df727e07-6bc8-419a-bee3-8ba5a16f82f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9789b5cdc-2mfqn" podUID="df727e07-6bc8-419a-bee3-8ba5a16f82f7"
Jan 13 21:36:48.266991 kubelet[2550]: E0113 21:36:48.266770 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.266991 kubelet[2550]: E0113 21:36:48.266819 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9789b5cdc-2zzjl"
Jan 13 21:36:48.266991 kubelet[2550]: E0113 21:36:48.266836 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9789b5cdc-2zzjl"
Jan 13 21:36:48.267086 kubelet[2550]: E0113 21:36:48.266874 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9789b5cdc-2zzjl_calico-apiserver(52f5badb-75da-429f-ac03-b7fa7b564ae8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9789b5cdc-2zzjl_calico-apiserver(52f5badb-75da-429f-ac03-b7fa7b564ae8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9789b5cdc-2zzjl" podUID="52f5badb-75da-429f-ac03-b7fa7b564ae8"
Jan 13 21:36:48.271375 containerd[1434]: time="2025-01-13T21:36:48.271334282Z" level=error msg="Failed to destroy network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.271705 containerd[1434]: time="2025-01-13T21:36:48.271675213Z" level=error msg="encountered an error cleaning up failed sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.271764 containerd[1434]: time="2025-01-13T21:36:48.271740823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4z986,Uid:7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.271978 kubelet[2550]: E0113 21:36:48.271938 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.272027 kubelet[2550]: E0113 21:36:48.271988 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4z986"
Jan 13 21:36:48.272027 kubelet[2550]: E0113 21:36:48.272004 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4z986"
Jan 13 21:36:48.272081 kubelet[2550]: E0113 21:36:48.272037 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4z986_kube-system(7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4z986_kube-system(7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4z986" podUID="7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1"
Jan 13 21:36:48.274557 containerd[1434]: time="2025-01-13T21:36:48.274522159Z" level=error msg="Failed to destroy network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.274869 containerd[1434]: time="2025-01-13T21:36:48.274836286Z" level=error msg="encountered an error cleaning up failed sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.274926 containerd[1434]: time="2025-01-13T21:36:48.274881533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c685fc75-cgwpm,Uid:6031707c-fbd2-45fb-819f-7634d8a3b502,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.275265 kubelet[2550]: E0113 21:36:48.275039 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.275265 kubelet[2550]: E0113 21:36:48.275085 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c685fc75-cgwpm"
Jan 13 21:36:48.275265 kubelet[2550]: E0113 21:36:48.275100 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c685fc75-cgwpm"
Jan 13 21:36:48.276435 kubelet[2550]: E0113 21:36:48.275141 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c685fc75-cgwpm_calico-system(6031707c-fbd2-45fb-819f-7634d8a3b502)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c685fc75-cgwpm_calico-system(6031707c-fbd2-45fb-819f-7634d8a3b502)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c685fc75-cgwpm" podUID="6031707c-fbd2-45fb-819f-7634d8a3b502"
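Every sandbox here (and the two failures that follow) dies with the same root cause: the Calico CNI plugin refuses to run any ADD or DEL until calico/node has started and written the node's name to /var/lib/calico/nodename, and the calico/node image is still being pulled at this point. A minimal sketch of that gating check, for illustration only (not Calico's actual source):

```go
// Sketch of the check behind the repeated CNI failures above: until
// /var/lib/calico/nodename exists, every CNI operation bails out with the
// error that kubelet then surfaces as CreatePodSandboxError.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container "+
			"is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // the text echoed through containerd and kubelet
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```

Once calico/node starts (further down in this log), the same operations succeed without any other change.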
Jan 13 21:36:48.281578 containerd[1434]: time="2025-01-13T21:36:48.281535969Z" level=error msg="Failed to destroy network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.281857 containerd[1434]: time="2025-01-13T21:36:48.281831293Z" level=error msg="encountered an error cleaning up failed sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.281897 containerd[1434]: time="2025-01-13T21:36:48.281878100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k28zl,Uid:c152f2aa-4163-46d5-8b4d-dd73349b1e5d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.282079 kubelet[2550]: E0113 21:36:48.282049 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.282124 kubelet[2550]: E0113 21:36:48.282098 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k28zl"
Jan 13 21:36:48.282124 kubelet[2550]: E0113 21:36:48.282117 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k28zl"
Jan 13 21:36:48.282188 kubelet[2550]: E0113 21:36:48.282153 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k28zl_calico-system(c152f2aa-4163-46d5-8b4d-dd73349b1e5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k28zl_calico-system(c152f2aa-4163-46d5-8b4d-dd73349b1e5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k28zl" podUID="c152f2aa-4163-46d5-8b4d-dd73349b1e5d"
Jan 13 21:36:48.285036 containerd[1434]: time="2025-01-13T21:36:48.284993726Z" level=error msg="Failed to destroy network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.285341 containerd[1434]: time="2025-01-13T21:36:48.285313374Z" level=error msg="encountered an error cleaning up failed sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.285395 containerd[1434]: time="2025-01-13T21:36:48.285371743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zvxj4,Uid:56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.285575 kubelet[2550]: E0113 21:36:48.285544 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:48.285627 kubelet[2550]: E0113 21:36:48.285590 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zvxj4"
Jan 13 21:36:48.285627 kubelet[2550]: E0113 21:36:48.285607 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zvxj4"
Jan 13 21:36:48.285681 kubelet[2550]: E0113 21:36:48.285652 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zvxj4_kube-system(56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zvxj4_kube-system(56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zvxj4" podUID="56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84"
Jan 13 21:36:48.900145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b-shm.mount: Deactivated successfully.
Jan 13 21:36:49.030356 kubelet[2550]: I0113 21:36:49.030317 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"
Jan 13 21:36:49.031137 containerd[1434]: time="2025-01-13T21:36:49.031086745Z" level=info msg="StopPodSandbox for \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\""
Jan 13 21:36:49.031382 containerd[1434]: time="2025-01-13T21:36:49.031270772Z" level=info msg="Ensure that sandbox dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb in task-service has been cleanup successfully"
Jan 13 21:36:49.033370 kubelet[2550]: I0113 21:36:49.033265 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:36:49.034100 containerd[1434]: time="2025-01-13T21:36:49.034065022Z" level=info msg="StopPodSandbox for \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\""
Jan 13 21:36:49.034667 containerd[1434]: time="2025-01-13T21:36:49.034287334Z" level=info msg="Ensure that sandbox 6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783 in task-service has been cleanup successfully"
Jan 13 21:36:49.035938 kubelet[2550]: I0113 21:36:49.035911 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:36:49.036897 containerd[1434]: time="2025-01-13T21:36:49.036870513Z" level=info msg="StopPodSandbox for \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\""
Jan 13 21:36:49.037347 containerd[1434]: time="2025-01-13T21:36:49.037024896Z" level=info msg="Ensure that sandbox 5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21 in task-service has been cleanup successfully"
Jan 13 21:36:49.038867 kubelet[2550]: I0113 21:36:49.037880 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:36:49.038952 containerd[1434]: time="2025-01-13T21:36:49.038568242Z" level=info msg="StopPodSandbox for \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\""
Jan 13 21:36:49.038952 containerd[1434]: time="2025-01-13T21:36:49.038812638Z" level=info msg="Ensure that sandbox 2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b in task-service has been cleanup successfully"
Jan 13 21:36:49.042039 kubelet[2550]: I0113 21:36:49.041504 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:36:49.042202 containerd[1434]: time="2025-01-13T21:36:49.042172210Z" level=info msg="StopPodSandbox for \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\""
Jan 13 21:36:49.043050 containerd[1434]: time="2025-01-13T21:36:49.043009853Z" level=info msg="Ensure that sandbox 891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9 in task-service has been cleanup successfully"
Jan 13 21:36:49.044703 kubelet[2550]: I0113 21:36:49.044648 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7"
Jan 13 21:36:49.046691 containerd[1434]: time="2025-01-13T21:36:49.046372746Z" level=info msg="StopPodSandbox for \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\""
Jan 13 21:36:49.046801 containerd[1434]: time="2025-01-13T21:36:49.046774845Z" level=info msg="Ensure that sandbox 42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7 in task-service has been cleanup successfully"
Jan 13 21:36:49.090991 containerd[1434]: time="2025-01-13T21:36:49.090757131Z" level=error msg="StopPodSandbox for \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\" failed" error="failed to destroy network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:49.091444 kubelet[2550]: E0113 21:36:49.091387 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7"
Jan 13 21:36:49.091525 kubelet[2550]: E0113 21:36:49.091459 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7"}
Jan 13 21:36:49.091572 kubelet[2550]: E0113 21:36:49.091539 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6031707c-fbd2-45fb-819f-7634d8a3b502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:36:49.091637 kubelet[2550]: E0113 21:36:49.091572 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6031707c-fbd2-45fb-819f-7634d8a3b502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c685fc75-cgwpm" podUID="6031707c-fbd2-45fb-819f-7634d8a3b502"
Jan 13 21:36:49.094842 containerd[1434]: time="2025-01-13T21:36:49.094796123Z" level=error msg="StopPodSandbox for \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\" failed" error="failed to destroy network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:49.095050 kubelet[2550]: E0113 21:36:49.095013 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:36:49.095097 kubelet[2550]: E0113 21:36:49.095061 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"}
Jan 13 21:36:49.095123 kubelet[2550]: E0113 21:36:49.095098 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c152f2aa-4163-46d5-8b4d-dd73349b1e5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:36:49.095181 kubelet[2550]: E0113 21:36:49.095121 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c152f2aa-4163-46d5-8b4d-dd73349b1e5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k28zl" podUID="c152f2aa-4163-46d5-8b4d-dd73349b1e5d"
Jan 13 21:36:49.096645 containerd[1434]: time="2025-01-13T21:36:49.096568623Z" level=error msg="StopPodSandbox for \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\" failed" error="failed to destroy network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:49.096889 kubelet[2550]: E0113 21:36:49.096853 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:36:49.096953 kubelet[2550]: E0113 21:36:49.096898 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"}
Jan 13 21:36:49.096953 kubelet[2550]: E0113 21:36:49.096937 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df727e07-6bc8-419a-bee3-8ba5a16f82f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:36:49.097026 kubelet[2550]: E0113 21:36:49.096956 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df727e07-6bc8-419a-bee3-8ba5a16f82f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9789b5cdc-2mfqn" podUID="df727e07-6bc8-419a-bee3-8ba5a16f82f7"
Jan 13 21:36:49.104539 containerd[1434]: time="2025-01-13T21:36:49.103535684Z" level=error msg="StopPodSandbox for \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\" failed" error="failed to destroy network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:49.104646 kubelet[2550]: E0113 21:36:49.103797 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"
Jan 13 21:36:49.104646 kubelet[2550]: E0113 21:36:49.103840 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"}
Jan 13 21:36:49.104646 kubelet[2550]: E0113 21:36:49.103870 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:36:49.104646 kubelet[2550]: E0113 21:36:49.103894 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4z986" podUID="7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1"
Jan 13 21:36:49.105221 containerd[1434]: time="2025-01-13T21:36:49.105163082Z" level=error msg="StopPodSandbox for \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\" failed" error="failed to destroy network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:49.105302 containerd[1434]: time="2025-01-13T21:36:49.105163882Z" level=error msg="StopPodSandbox for \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\" failed" error="failed to destroy network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:36:49.105610 kubelet[2550]: E0113 21:36:49.105436 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:36:49.105610 kubelet[2550]: E0113 21:36:49.105470 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"}
Jan 13 21:36:49.105610 kubelet[2550]: E0113 21:36:49.105496 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:36:49.105610 kubelet[2550]: E0113 21:36:49.105517 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zvxj4" podUID="56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84"
Jan 13 21:36:49.105782 kubelet[2550]: E0113 21:36:49.105541 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:36:49.105782 kubelet[2550]: E0113 21:36:49.105555 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"}
Jan 13 21:36:49.105922 kubelet[2550]: E0113 21:36:49.105866 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52f5badb-75da-429f-ac03-b7fa7b564ae8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:36:49.105922 kubelet[2550]: E0113 21:36:49.105899 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52f5badb-75da-429f-ac03-b7fa7b564ae8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9789b5cdc-2zzjl" podUID="52f5badb-75da-429f-ac03-b7fa7b564ae8"
Jan 13 21:36:50.762058 kubelet[2550]: I0113 21:36:50.761936 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:36:50.763263 kubelet[2550]: E0113 21:36:50.762792 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:51.050755 kubelet[2550]: E0113 21:36:51.050362 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:51.500537 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:39360.service - OpenSSH per-connection server daemon (10.0.0.1:39360).
Jan 13 21:36:51.543641 sshd[3800]: Accepted publickey for core from 10.0.0.1 port 39360 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:36:51.544545 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:36:51.548730 systemd-logind[1415]: New session 10 of user core.
Jan 13 21:36:51.557394 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:36:51.691690 sshd[3800]: pam_unix(sshd:session): session closed for user core
Jan 13 21:36:51.701757 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:39360.service: Deactivated successfully.
Jan 13 21:36:51.703676 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:36:51.707435 systemd-logind[1415]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:36:51.709524 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:39376.service - OpenSSH per-connection server daemon (10.0.0.1:39376).
Jan 13 21:36:51.710536 systemd-logind[1415]: Removed session 10.
Jan 13 21:36:51.762811 sshd[3815]: Accepted publickey for core from 10.0.0.1 port 39376 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:36:51.764878 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:36:51.772191 systemd-logind[1415]: New session 11 of user core.
Jan 13 21:36:51.778421 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:36:51.950283 sshd[3815]: pam_unix(sshd:session): session closed for user core
Jan 13 21:36:51.960516 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:39376.service: Deactivated successfully.
Jan 13 21:36:51.965884 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:36:51.969166 systemd-logind[1415]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:36:51.979888 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:39388.service - OpenSSH per-connection server daemon (10.0.0.1:39388).
Jan 13 21:36:51.982550 systemd-logind[1415]: Removed session 11.
Jan 13 21:36:52.018297 sshd[3831]: Accepted publickey for core from 10.0.0.1 port 39388 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:36:52.020309 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:36:52.026062 systemd-logind[1415]: New session 12 of user core.
Jan 13 21:36:52.032430 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:36:52.046566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977108434.mount: Deactivated successfully.
Jan 13 21:36:52.386053 sshd[3831]: pam_unix(sshd:session): session closed for user core
Jan 13 21:36:52.389534 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:39388.service: Deactivated successfully.
Jan 13 21:36:52.391477 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:36:52.392741 systemd-logind[1415]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:36:52.395259 systemd-logind[1415]: Removed session 12.
Jan 13 21:36:52.514985 containerd[1434]: time="2025-01-13T21:36:52.514895472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:52.526639 containerd[1434]: time="2025-01-13T21:36:52.526590650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Jan 13 21:36:52.527568 containerd[1434]: time="2025-01-13T21:36:52.527538301Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:52.531218 containerd[1434]: time="2025-01-13T21:36:52.531177405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:36:52.532000 containerd[1434]: time="2025-01-13T21:36:52.531772767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.498154426s"
Jan 13 21:36:52.532000 containerd[1434]: time="2025-01-13T21:36:52.531811332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Jan 13 21:36:52.541272 containerd[1434]: time="2025-01-13T21:36:52.540340192Z" level=info msg="CreateContainer within sandbox \"93514347ed94e86c4fe9a1c6caf54b318a607c7222735a975e57dafb92e14e7b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 13 21:36:52.571504 containerd[1434]: time="2025-01-13T21:36:52.571412131Z" level=info msg="CreateContainer within sandbox \"93514347ed94e86c4fe9a1c6caf54b318a607c7222735a975e57dafb92e14e7b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8b51c4bf13dc469d711cf27da7e7ca46813cccf3fdc0e48c44ab498a278e248c\""
Jan 13 21:36:52.572039 containerd[1434]: time="2025-01-13T21:36:52.572015574Z" level=info msg="StartContainer for \"8b51c4bf13dc469d711cf27da7e7ca46813cccf3fdc0e48c44ab498a278e248c\""
Jan 13 21:36:52.636435 systemd[1]: Started cri-containerd-8b51c4bf13dc469d711cf27da7e7ca46813cccf3fdc0e48c44ab498a278e248c.scope - libcontainer container 8b51c4bf13dc469d711cf27da7e7ca46813cccf3fdc0e48c44ab498a278e248c.
Jan 13 21:36:52.679233 containerd[1434]: time="2025-01-13T21:36:52.679143315Z" level=info msg="StartContainer for \"8b51c4bf13dc469d711cf27da7e7ca46813cccf3fdc0e48c44ab498a278e248c\" returns successfully"
Jan 13 21:36:52.835616 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 13 21:36:52.835723 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
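The pull above resolves the tag ghcr.io/flatcar/calico/node:v3.29.1 to a repo digest. An OCI digest is simply the SHA-256 of the raw manifest bytes, which is what makes the reference content-addressed and immutable: the same digest can only ever name the same content. A stdlib-only sketch of that verification, with manifestBytes standing in for bytes actually fetched from the registry:

```go
// Sketch: an OCI repo digest such as "sha256:99c39175..." is the SHA-256 of
// the raw manifest bytes, so a client can re-verify what it pulled.
// Illustration only; manifestBytes is a placeholder, not a real manifest.
package main

import (
	"crypto/sha256"
	"fmt"
)

func digestOf(manifestBytes []byte) string {
	return fmt.Sprintf("sha256:%x", sha256.Sum256(manifestBytes))
}

func main() {
	manifestBytes := []byte(`{"schemaVersion":2}`) // placeholder content
	fmt.Println(digestOf(manifestBytes))
}
```

Pulling by digest instead of by tag pins exactly the bytes this log records.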
Jan 13 21:36:53.058443 kubelet[2550]: E0113 21:36:53.058401 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:54.057956 kubelet[2550]: E0113 21:36:54.057919 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:36:54.296272 kernel: bpftool[4082]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 13 21:36:54.475588 systemd-networkd[1368]: vxlan.calico: Link UP
Jan 13 21:36:54.475598 systemd-networkd[1368]: vxlan.calico: Gained carrier
Jan 13 21:36:56.257488 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL
Jan 13 21:36:57.401751 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:36998.service - OpenSSH per-connection server daemon (10.0.0.1:36998).
Jan 13 21:36:57.446807 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 36998 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:36:57.448482 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:36:57.452704 systemd-logind[1415]: New session 13 of user core.
Jan 13 21:36:57.462424 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:36:57.625403 sshd[4163]: pam_unix(sshd:session): session closed for user core
Jan 13 21:36:57.628539 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:36998.service: Deactivated successfully.
Jan 13 21:36:57.631850 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:36:57.632412 systemd-logind[1415]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:36:57.633292 systemd-logind[1415]: Removed session 13.
Jan 13 21:36:59.929910 containerd[1434]: time="2025-01-13T21:36:59.929778473Z" level=info msg="StopPodSandbox for \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\""
Jan 13 21:37:00.020420 kubelet[2550]: I0113 21:37:00.020361 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h89v9" podStartSLOduration=8.784128076 podStartE2EDuration="23.020345787s" podCreationTimestamp="2025-01-13 21:36:37 +0000 UTC" firstStartedPulling="2025-01-13 21:36:38.296318722 +0000 UTC m=+27.441254246" lastFinishedPulling="2025-01-13 21:36:52.532536473 +0000 UTC m=+41.677471957" observedRunningTime="2025-01-13 21:36:53.073532743 +0000 UTC m=+42.218468267" watchObservedRunningTime="2025-01-13 21:37:00.020345787 +0000 UTC m=+49.165281311"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.019 [INFO][4200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.020 [INFO][4200] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" iface="eth0" netns="/var/run/netns/cni-7ee7c126-52cc-3ba0-ecd1-73a999c7ead0"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.021 [INFO][4200] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" iface="eth0" netns="/var/run/netns/cni-7ee7c126-52cc-3ba0-ecd1-73a999c7ead0"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.021 [INFO][4200] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" iface="eth0" netns="/var/run/netns/cni-7ee7c126-52cc-3ba0-ecd1-73a999c7ead0"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.021 [INFO][4200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.021 [INFO][4200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.104 [INFO][4207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" HandleID="k8s-pod-network.42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.104 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.104 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.114 [WARNING][4207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" HandleID="k8s-pod-network.42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.114 [INFO][4207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" HandleID="k8s-pod-network.42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0"
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.116 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:00.121185 containerd[1434]: 2025-01-13 21:37:00.117 [INFO][4200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7"
Jan 13 21:37:00.121754 containerd[1434]: time="2025-01-13T21:37:00.121557219Z" level=info msg="TearDown network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\" successfully"
Jan 13 21:37:00.121754 containerd[1434]: time="2025-01-13T21:37:00.121589343Z" level=info msg="StopPodSandbox for \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\" returns successfully"
Jan 13 21:37:00.122442 containerd[1434]: time="2025-01-13T21:37:00.122406963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c685fc75-cgwpm,Uid:6031707c-fbd2-45fb-819f-7634d8a3b502,Namespace:calico-system,Attempt:1,}"
Jan 13 21:37:00.123961 systemd[1]: run-netns-cni\x2d7ee7c126\x2d52cc\x2d3ba0\x2decd1\x2d73a999c7ead0.mount: Deactivated successfully.
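The WARNING in the teardown above ("Asked to release address but it doesn't exist. Ignoring") reflects a deliberate design: the DEL path is idempotent, so a retry against a sandbox whose ADD never completed (as here, where the nodename check failed before any IP was assigned) logs the anomaly and still succeeds, letting kubelet's retries converge instead of failing forever. A sketch of that pattern, with a plain map standing in for Calico's real IPAM datastore:

```go
// Sketch of the idempotent-release pattern visible in the teardown above.
// Illustration only, not Calico's actual code.
package main

import "fmt"

type ipam struct {
	byHandle map[string][]string // handleID -> assigned addresses
}

// ReleaseByHandle is safe to call any number of times for the same handle.
func (s *ipam) ReleaseByHandle(handleID string) error {
	addrs, ok := s.byHandle[handleID]
	if !ok {
		// Nothing was ever assigned: warn and report success anyway.
		fmt.Printf("WARNING: asked to release %q but it doesn't exist. Ignoring\n", handleID)
		return nil
	}
	delete(s.byHandle, handleID)
	fmt.Printf("released %v for %q\n", addrs, handleID)
	return nil
}

func main() {
	s := &ipam{byHandle: map[string][]string{}}
	h := "k8s-pod-network.42a98326c115" // truncated for the example
	_ = s.ReleaseByHandle(h)            // first DEL: warns, still succeeds
	_ = s.ReleaseByHandle(h)            // retry: same result, no error
}
```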
Jan 13 21:37:00.249535 systemd-networkd[1368]: cali034153ba86e: Link UP Jan 13 21:37:00.249725 systemd-networkd[1368]: cali034153ba86e: Gained carrier Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.171 [INFO][4216] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0 calico-kube-controllers-c685fc75- calico-system 6031707c-fbd2-45fb-819f-7634d8a3b502 892 0 2025-01-13 21:36:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c685fc75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c685fc75-cgwpm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali034153ba86e [] []}} ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Namespace="calico-system" Pod="calico-kube-controllers-c685fc75-cgwpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.171 [INFO][4216] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Namespace="calico-system" Pod="calico-kube-controllers-c685fc75-cgwpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.198 [INFO][4230] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" HandleID="k8s-pod-network.1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.209 [INFO][4230] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" HandleID="k8s-pod-network.1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004015c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c685fc75-cgwpm", "timestamp":"2025-01-13 21:37:00.19833969 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.209 [INFO][4230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.209 [INFO][4230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.209 [INFO][4230] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.212 [INFO][4230] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" host="localhost" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.223 [INFO][4230] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.227 [INFO][4230] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.229 [INFO][4230] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.231 [INFO][4230] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.231 [INFO][4230] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" host="localhost" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.232 [INFO][4230] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.235 [INFO][4230] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" host="localhost" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.241 [INFO][4230] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" host="localhost" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.241 [INFO][4230] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" host="localhost" Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.241 [INFO][4230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:37:00.263710 containerd[1434]: 2025-01-13 21:37:00.241 [INFO][4230] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" HandleID="k8s-pod-network.1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:00.264267 containerd[1434]: 2025-01-13 21:37:00.243 [INFO][4216] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Namespace="calico-system" Pod="calico-kube-controllers-c685fc75-cgwpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0", GenerateName:"calico-kube-controllers-c685fc75-", Namespace:"calico-system", SelfLink:"", UID:"6031707c-fbd2-45fb-819f-7634d8a3b502", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c685fc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c685fc75-cgwpm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali034153ba86e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:00.264267 containerd[1434]: 2025-01-13 21:37:00.243 [INFO][4216] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Namespace="calico-system" Pod="calico-kube-controllers-c685fc75-cgwpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:00.264267 containerd[1434]: 2025-01-13 21:37:00.245 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali034153ba86e ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Namespace="calico-system" Pod="calico-kube-controllers-c685fc75-cgwpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:00.264267 containerd[1434]: 2025-01-13 21:37:00.250 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Namespace="calico-system" Pod="calico-kube-controllers-c685fc75-cgwpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:00.264267 containerd[1434]: 2025-01-13 21:37:00.250 [INFO][4216] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Namespace="calico-system" Pod="calico-kube-controllers-c685fc75-cgwpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0", GenerateName:"calico-kube-controllers-c685fc75-", Namespace:"calico-system", SelfLink:"", UID:"6031707c-fbd2-45fb-819f-7634d8a3b502", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c685fc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec", Pod:"calico-kube-controllers-c685fc75-cgwpm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali034153ba86e", MAC:"26:04:14:51:6e:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:00.264267 containerd[1434]: 2025-01-13 21:37:00.260 [INFO][4216] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec" Namespace="calico-system" Pod="calico-kube-controllers-c685fc75-cgwpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:00.283320 containerd[1434]: time="2025-01-13T21:37:00.283199834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:37:00.283320 containerd[1434]: time="2025-01-13T21:37:00.283272363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:37:00.283320 containerd[1434]: time="2025-01-13T21:37:00.283296726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:00.283320 containerd[1434]: time="2025-01-13T21:37:00.283388497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:00.303428 systemd[1]: Started cri-containerd-1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec.scope - libcontainer container 1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec.
Jan 13 21:37:00.313837 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:37:00.329908 containerd[1434]: time="2025-01-13T21:37:00.329786196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c685fc75-cgwpm,Uid:6031707c-fbd2-45fb-819f-7634d8a3b502,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec\"" Jan 13 21:37:00.334769 containerd[1434]: time="2025-01-13T21:37:00.334640752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:37:00.928991 containerd[1434]: time="2025-01-13T21:37:00.928665997Z" level=info msg="StopPodSandbox for \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\"" Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:00.971 [INFO][4310] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:00.971 [INFO][4310] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" iface="eth0" netns="/var/run/netns/cni-13470dd6-c8c0-b6a3-1df0-c8e6ea58fab7" Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:00.971 [INFO][4310] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" iface="eth0" netns="/var/run/netns/cni-13470dd6-c8c0-b6a3-1df0-c8e6ea58fab7" Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:00.971 [INFO][4310] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" iface="eth0" netns="/var/run/netns/cni-13470dd6-c8c0-b6a3-1df0-c8e6ea58fab7" Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:00.971 [INFO][4310] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:00.971 [INFO][4310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:00.991 [INFO][4317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" HandleID="k8s-pod-network.5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:00.992 [INFO][4317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:00.992 [INFO][4317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:01.000 [WARNING][4317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" HandleID="k8s-pod-network.5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0"
Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:01.000 [INFO][4317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" HandleID="k8s-pod-network.5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:01.002 [INFO][4317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:37:01.005786 containerd[1434]: 2025-01-13 21:37:01.004 [INFO][4310] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Jan 13 21:37:01.006594 containerd[1434]: time="2025-01-13T21:37:01.005924360Z" level=info msg="TearDown network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\" successfully" Jan 13 21:37:01.006594 containerd[1434]: time="2025-01-13T21:37:01.005949403Z" level=info msg="StopPodSandbox for \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\" returns successfully" Jan 13 21:37:01.007180 containerd[1434]: time="2025-01-13T21:37:01.006818869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9789b5cdc-2zzjl,Uid:52f5badb-75da-429f-ac03-b7fa7b564ae8,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:37:01.117267 systemd-networkd[1368]: cali6cb0e78c8c6: Link UP Jan 13 21:37:01.117845 systemd-networkd[1368]: cali6cb0e78c8c6: Gained carrier Jan 13 21:37:01.126093 systemd[1]: run-netns-cni\x2d13470dd6\x2dc8c0\x2db6a3\x2d1df0\x2dc8e6ea58fab7.mount: Deactivated successfully.
Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.045 [INFO][4326] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0 calico-apiserver-9789b5cdc- calico-apiserver 52f5badb-75da-429f-ac03-b7fa7b564ae8 901 0 2025-01-13 21:36:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9789b5cdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9789b5cdc-2zzjl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cb0e78c8c6 [] []}} ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2zzjl" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.045 [INFO][4326] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2zzjl" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.073 [INFO][4339] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" HandleID="k8s-pod-network.cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.087 [INFO][4339] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" HandleID="k8s-pod-network.cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000364e00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9789b5cdc-2zzjl", "timestamp":"2025-01-13 21:37:01.073735872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.087 [INFO][4339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.087 [INFO][4339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.087 [INFO][4339] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.091 [INFO][4339] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" host="localhost" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.094 [INFO][4339] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.098 [INFO][4339] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.100 [INFO][4339] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.102 [INFO][4339] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.102 [INFO][4339] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" host="localhost" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.103 [INFO][4339] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.106 [INFO][4339] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" host="localhost" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.112 [INFO][4339] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" host="localhost" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.112 [INFO][4339] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" host="localhost" Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.112 [INFO][4339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:37:01.132823 containerd[1434]: 2025-01-13 21:37:01.112 [INFO][4339] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" HandleID="k8s-pod-network.cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" Jan 13 21:37:01.133644 containerd[1434]: 2025-01-13 21:37:01.114 [INFO][4326] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2zzjl" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0", GenerateName:"calico-apiserver-9789b5cdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"52f5badb-75da-429f-ac03-b7fa7b564ae8", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9789b5cdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9789b5cdc-2zzjl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb0e78c8c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:01.133644 containerd[1434]: 2025-01-13 21:37:01.114 [INFO][4326] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2zzjl" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" Jan 13 21:37:01.133644 containerd[1434]: 2025-01-13 21:37:01.114 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cb0e78c8c6 ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2zzjl" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" Jan 13 21:37:01.133644 containerd[1434]: 2025-01-13 21:37:01.116 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2zzjl" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" Jan 13 21:37:01.133644 containerd[1434]: 2025-01-13 21:37:01.117 [INFO][4326] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2zzjl" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0", GenerateName:"calico-apiserver-9789b5cdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"52f5badb-75da-429f-ac03-b7fa7b564ae8", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9789b5cdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a", Pod:"calico-apiserver-9789b5cdc-2zzjl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb0e78c8c6", MAC:"9e:68:2a:0f:bc:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:01.133644 containerd[1434]: 2025-01-13 21:37:01.131 [INFO][4326] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2zzjl" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0" Jan 13 21:37:01.150312 containerd[1434]: time="2025-01-13T21:37:01.150194034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:37:01.150312 containerd[1434]: time="2025-01-13T21:37:01.150281204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:37:01.150312 containerd[1434]: time="2025-01-13T21:37:01.150292926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:01.150519 containerd[1434]: time="2025-01-13T21:37:01.150366575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:01.168409 systemd[1]: Started cri-containerd-cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a.scope - libcontainer container cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a.
Jan 13 21:37:01.179174 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:37:01.202595 containerd[1434]: time="2025-01-13T21:37:01.202551990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9789b5cdc-2zzjl,Uid:52f5badb-75da-429f-ac03-b7fa7b564ae8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a\"" Jan 13 21:37:01.741777 containerd[1434]: time="2025-01-13T21:37:01.741680158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:01.742250 containerd[1434]: time="2025-01-13T21:37:01.742183299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 13 21:37:01.742956 containerd[1434]: time="2025-01-13T21:37:01.742920389Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:01.744790 containerd[1434]: time="2025-01-13T21:37:01.744748491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:01.745410 containerd[1434]: time="2025-01-13T21:37:01.745374447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.410604798s" Jan 13 21:37:01.745458 containerd[1434]: time="2025-01-13T21:37:01.745412891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 13 21:37:01.747468 containerd[1434]: time="2025-01-13T21:37:01.747198788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:37:01.758746 containerd[1434]: time="2025-01-13T21:37:01.755989175Z" level=info msg="CreateContainer within sandbox \"1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:37:01.772935 containerd[1434]: time="2025-01-13T21:37:01.772856943Z" level=info msg="CreateContainer within sandbox \"1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"df8683f9d7445bdd6e27c237f6f4b7bfd047b505a73ba8f974fd033363c91b17\"" Jan 13 21:37:01.773534 containerd[1434]: time="2025-01-13T21:37:01.773497661Z" level=info msg="StartContainer for \"df8683f9d7445bdd6e27c237f6f4b7bfd047b505a73ba8f974fd033363c91b17\"" Jan 13 21:37:01.810415 systemd[1]: Started cri-containerd-df8683f9d7445bdd6e27c237f6f4b7bfd047b505a73ba8f974fd033363c91b17.scope - libcontainer container df8683f9d7445bdd6e27c237f6f4b7bfd047b505a73ba8f974fd033363c91b17. 
Jan 13 21:37:01.837471 containerd[1434]: time="2025-01-13T21:37:01.837399338Z" level=info msg="StartContainer for \"df8683f9d7445bdd6e27c237f6f4b7bfd047b505a73ba8f974fd033363c91b17\" returns successfully" Jan 13 21:37:02.136519 kubelet[2550]: I0113 21:37:02.136385 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c685fc75-cgwpm" podStartSLOduration=22.724147455 podStartE2EDuration="24.136367811s" podCreationTimestamp="2025-01-13 21:36:38 +0000 UTC" firstStartedPulling="2025-01-13 21:37:00.334356237 +0000 UTC m=+49.479291761" lastFinishedPulling="2025-01-13 21:37:01.746576633 +0000 UTC m=+50.891512117" observedRunningTime="2025-01-13 21:37:02.091529028 +0000 UTC m=+51.236464512" watchObservedRunningTime="2025-01-13 21:37:02.136367811 +0000 UTC m=+51.281303335" Jan 13 21:37:02.209462 systemd-networkd[1368]: cali6cb0e78c8c6: Gained IPv6LL Jan 13 21:37:02.273417 systemd-networkd[1368]: cali034153ba86e: Gained IPv6LL Jan 13 21:37:02.644031 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:40186.service - OpenSSH per-connection server daemon (10.0.0.1:40186). Jan 13 21:37:02.694430 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 40186 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:37:02.696086 sshd[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:37:02.699952 systemd-logind[1415]: New session 14 of user core. Jan 13 21:37:02.710410 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:37:02.865162 sshd[4465]: pam_unix(sshd:session): session closed for user core Jan 13 21:37:02.868373 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:40186.service: Deactivated successfully. Jan 13 21:37:02.871745 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:37:02.872395 systemd-logind[1415]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:37:02.873174 systemd-logind[1415]: Removed session 14. Jan 13 21:37:02.929396 containerd[1434]: time="2025-01-13T21:37:02.929067056Z" level=info msg="StopPodSandbox for \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\"" Jan 13 21:37:02.929396 containerd[1434]: time="2025-01-13T21:37:02.929113381Z" level=info msg="StopPodSandbox for \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\"" Jan 13 21:37:02.929760 containerd[1434]: time="2025-01-13T21:37:02.929500708Z" level=info msg="StopPodSandbox for \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\"" Jan 13 21:37:02.930198 containerd[1434]: time="2025-01-13T21:37:02.929081577Z" level=info msg="StopPodSandbox for \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\"" Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.017 [INFO][4540] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.018 [INFO][4540] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" iface="eth0" netns="/var/run/netns/cni-8ef08e62-46b0-df96-7360-5b8e9c78f247" Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.018 [INFO][4540] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" iface="eth0" netns="/var/run/netns/cni-8ef08e62-46b0-df96-7360-5b8e9c78f247"
Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.018 [INFO][4540] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" iface="eth0" netns="/var/run/netns/cni-8ef08e62-46b0-df96-7360-5b8e9c78f247" Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.018 [INFO][4540] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.018 [INFO][4540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.075 [INFO][4573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" HandleID="k8s-pod-network.6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.075 [INFO][4573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.075 [INFO][4573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.084 [WARNING][4573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" HandleID="k8s-pod-network.6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.084 [INFO][4573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" HandleID="k8s-pod-network.6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.086 [INFO][4573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:37:03.097298 containerd[1434]: 2025-01-13 21:37:03.093 [INFO][4540] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Jan 13 21:37:03.100351 containerd[1434]: time="2025-01-13T21:37:03.097972690Z" level=info msg="TearDown network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\" successfully" Jan 13 21:37:03.100351 containerd[1434]: time="2025-01-13T21:37:03.098005694Z" level=info msg="StopPodSandbox for \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\" returns successfully" Jan 13 21:37:03.099922 systemd[1]: run-netns-cni\x2d8ef08e62\x2d46b0\x2ddf96\x2d7360\x2d5b8e9c78f247.mount: Deactivated successfully.
Jan 13 21:37:03.103392 kubelet[2550]: E0113 21:37:03.103184 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:37:03.103820 containerd[1434]: time="2025-01-13T21:37:03.103792261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zvxj4,Uid:56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84,Namespace:kube-system,Attempt:1,}" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.033 [INFO][4536] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.034 [INFO][4536] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" iface="eth0" netns="/var/run/netns/cni-9d77ff39-7cbb-4dbe-8348-8e576aa8c274" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.035 [INFO][4536] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" iface="eth0" netns="/var/run/netns/cni-9d77ff39-7cbb-4dbe-8348-8e576aa8c274" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.035 [INFO][4536] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" iface="eth0" netns="/var/run/netns/cni-9d77ff39-7cbb-4dbe-8348-8e576aa8c274" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.035 [INFO][4536] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.035 [INFO][4536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.080 [INFO][4581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" HandleID="k8s-pod-network.dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.080 [INFO][4581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.087 [INFO][4581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.096 [WARNING][4581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" HandleID="k8s-pod-network.dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.096 [INFO][4581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" HandleID="k8s-pod-network.dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.100 [INFO][4581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:37:03.108270 containerd[1434]: 2025-01-13 21:37:03.105 [INFO][4536] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Jan 13 21:37:03.110679 systemd[1]: run-netns-cni\x2d9d77ff39\x2d7cbb\x2d4dbe\x2d8348\x2d8e576aa8c274.mount: Deactivated successfully. Jan 13 21:37:03.111380 containerd[1434]: time="2025-01-13T21:37:03.111346359Z" level=info msg="TearDown network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\" successfully" Jan 13 21:37:03.111380 containerd[1434]: time="2025-01-13T21:37:03.111378242Z" level=info msg="StopPodSandbox for \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\" returns successfully" Jan 13 21:37:03.111891 kubelet[2550]: E0113 21:37:03.111868 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:37:03.112233 containerd[1434]: time="2025-01-13T21:37:03.112205301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4z986,Uid:7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1,Namespace:kube-system,Attempt:1,}" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.030 [INFO][4549] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.031 [INFO][4549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" iface="eth0" netns="/var/run/netns/cni-eaa2f7fa-4267-82d0-6ee8-634a691f79e1" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.031 [INFO][4549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" iface="eth0" netns="/var/run/netns/cni-eaa2f7fa-4267-82d0-6ee8-634a691f79e1" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.032 [INFO][4549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" iface="eth0" netns="/var/run/netns/cni-eaa2f7fa-4267-82d0-6ee8-634a691f79e1" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.034 [INFO][4549] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.034 [INFO][4549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.089 [INFO][4580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" HandleID="k8s-pod-network.2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Workload="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.089 [INFO][4580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.100 [INFO][4580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.109 [WARNING][4580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" HandleID="k8s-pod-network.2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Workload="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.109 [INFO][4580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" HandleID="k8s-pod-network.2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Workload="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.110 [INFO][4580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:37:03.119490 containerd[1434]: 2025-01-13 21:37:03.115 [INFO][4549] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Jan 13 21:37:03.119953 containerd[1434]: time="2025-01-13T21:37:03.119928698Z" level=info msg="TearDown network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\" successfully" Jan 13 21:37:03.120039 containerd[1434]: time="2025-01-13T21:37:03.120023589Z" level=info msg="StopPodSandbox for \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\" returns successfully" Jan 13 21:37:03.121719 systemd[1]: run-netns-cni\x2deaa2f7fa\x2d4267\x2d82d0\x2d6ee8\x2d634a691f79e1.mount: Deactivated successfully. Jan 13 21:37:03.122902 containerd[1434]: time="2025-01-13T21:37:03.122493723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k28zl,Uid:c152f2aa-4163-46d5-8b4d-dd73349b1e5d,Namespace:calico-system,Attempt:1,}" Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.036 [INFO][4539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.036 [INFO][4539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" iface="eth0" netns="/var/run/netns/cni-d5892de2-9236-3245-7922-6bceb76c0ca3" Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.036 [INFO][4539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" iface="eth0" netns="/var/run/netns/cni-d5892de2-9236-3245-7922-6bceb76c0ca3" Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.036 [INFO][4539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" iface="eth0" netns="/var/run/netns/cni-d5892de2-9236-3245-7922-6bceb76c0ca3"
Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.036 [INFO][4539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.036 [INFO][4539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.096 [INFO][4582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" HandleID="k8s-pod-network.891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.096 [INFO][4582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.110 [INFO][4582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.124 [WARNING][4582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" HandleID="k8s-pod-network.891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.124 [INFO][4582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" HandleID="k8s-pod-network.891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.126 [INFO][4582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:37:03.131684 containerd[1434]: 2025-01-13 21:37:03.129 [INFO][4539] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Jan 13 21:37:03.132144 containerd[1434]: time="2025-01-13T21:37:03.131852434Z" level=info msg="TearDown network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\" successfully" Jan 13 21:37:03.132144 containerd[1434]: time="2025-01-13T21:37:03.131875837Z" level=info msg="StopPodSandbox for \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\" returns successfully" Jan 13 21:37:03.135456 containerd[1434]: time="2025-01-13T21:37:03.134607242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9789b5cdc-2mfqn,Uid:df727e07-6bc8-419a-bee3-8ba5a16f82f7,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:37:03.134806 systemd[1]: run-netns-cni\x2dd5892de2\x2d9236\x2d3245\x2d7922\x2d6bceb76c0ca3.mount: Deactivated successfully.
Jan 13 21:37:03.318578 systemd-networkd[1368]: calic86e44ddb41: Link UP Jan 13 21:37:03.320146 systemd-networkd[1368]: calic86e44ddb41: Gained carrier Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.180 [INFO][4606] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0 coredns-7db6d8ff4d- kube-system 56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84 924 0 2025-01-13 21:36:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-zvxj4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic86e44ddb41 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zvxj4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zvxj4-" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.181 [INFO][4606] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zvxj4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.251 [INFO][4661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" HandleID="k8s-pod-network.48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.267 [INFO][4661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" HandleID="k8s-pod-network.48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e1c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-zvxj4", "timestamp":"2025-01-13 21:37:03.251422998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.267 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.267 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.267 [INFO][4661] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.269 [INFO][4661] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" host="localhost" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.274 [INFO][4661] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.285 [INFO][4661] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.287 [INFO][4661] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.289 [INFO][4661] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.289 [INFO][4661] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" host="localhost" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.291 [INFO][4661] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7 Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.300 [INFO][4661] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" host="localhost" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.309 [INFO][4661] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" host="localhost" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.310 [INFO][4661] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" host="localhost" Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.310 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:37:03.343020 containerd[1434]: 2025-01-13 21:37:03.310 [INFO][4661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" HandleID="k8s-pod-network.48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.343803 containerd[1434]: 2025-01-13 21:37:03.314 [INFO][4606] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zvxj4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-zvxj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86e44ddb41", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:03.343803 containerd[1434]: 2025-01-13 21:37:03.314 [INFO][4606] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zvxj4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.343803 containerd[1434]: 2025-01-13 21:37:03.314 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic86e44ddb41 ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zvxj4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.343803 containerd[1434]: 2025-01-13 21:37:03.319 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zvxj4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.343803 containerd[1434]: 2025-01-13 21:37:03.320 
[INFO][4606] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zvxj4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7", Pod:"coredns-7db6d8ff4d-zvxj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86e44ddb41", MAC:"9a:ba:42:30:98:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:03.343803 containerd[1434]: 2025-01-13 21:37:03.334 [INFO][4606] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zvxj4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0" Jan 13 21:37:03.365538 systemd-networkd[1368]: calid4b1cc66193: Link UP Jan 13 21:37:03.365725 systemd-networkd[1368]: calid4b1cc66193: Gained carrier Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.204 [INFO][4617] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--4z986-eth0 coredns-7db6d8ff4d- kube-system 7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1 926 0 2025-01-13 21:36:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-4z986 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid4b1cc66193 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4z986" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4z986-" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.204 
[INFO][4617] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4z986" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.255 [INFO][4667] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" HandleID="k8s-pod-network.f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.278 [INFO][4667] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" HandleID="k8s-pod-network.f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003047e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-4z986", "timestamp":"2025-01-13 21:37:03.255637939 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.279 [INFO][4667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.310 [INFO][4667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.310 [INFO][4667] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.314 [INFO][4667] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" host="localhost" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.323 [INFO][4667] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.328 [INFO][4667] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.332 [INFO][4667] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.337 [INFO][4667] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.337 [INFO][4667] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" host="localhost" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.341 [INFO][4667] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85 Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.348 [INFO][4667] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" host="localhost" Jan 13 21:37:03.382907 containerd[1434]: 
2025-01-13 21:37:03.355 [INFO][4667] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" host="localhost" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.355 [INFO][4667] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" host="localhost" Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.355 [INFO][4667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:37:03.382907 containerd[1434]: 2025-01-13 21:37:03.355 [INFO][4667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" HandleID="k8s-pod-network.f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.383952 containerd[1434]: 2025-01-13 21:37:03.360 [INFO][4617] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4z986" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4z986-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-4z986", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4b1cc66193", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:03.383952 containerd[1434]: 2025-01-13 21:37:03.361 [INFO][4617] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4z986" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.383952 containerd[1434]: 2025-01-13 21:37:03.361 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name 
to calid4b1cc66193 ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4z986" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.383952 containerd[1434]: 2025-01-13 21:37:03.367 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4z986" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.383952 containerd[1434]: 2025-01-13 21:37:03.367 [INFO][4617] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4z986" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4z986-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85", Pod:"coredns-7db6d8ff4d-4z986", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4b1cc66193", MAC:"ba:52:b4:a4:36:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:03.383952 containerd[1434]: 2025-01-13 21:37:03.377 [INFO][4617] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4z986" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:03.399017 containerd[1434]: time="2025-01-13T21:37:03.398670410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:37:03.399017 containerd[1434]: time="2025-01-13T21:37:03.398743899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:37:03.399017 containerd[1434]: time="2025-01-13T21:37:03.398789944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:03.400116 containerd[1434]: time="2025-01-13T21:37:03.399958363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:03.405401 systemd-networkd[1368]: calid99404d51e3: Link UP Jan 13 21:37:03.405981 systemd-networkd[1368]: calid99404d51e3: Gained carrier Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.224 [INFO][4642] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--k28zl-eth0 csi-node-driver- calico-system c152f2aa-4163-46d5-8b4d-dd73349b1e5d 925 0 2025-01-13 21:36:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-k28zl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid99404d51e3 [] []}} ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Namespace="calico-system" Pod="csi-node-driver-k28zl" WorkloadEndpoint="localhost-k8s-csi--node--driver--k28zl-" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.224 [INFO][4642] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Namespace="calico-system" Pod="csi-node-driver-k28zl" WorkloadEndpoint="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.289 [INFO][4675] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" HandleID="k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Workload="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.303 [INFO][4675] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" HandleID="k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Workload="localhost-k8s-csi--node--driver--k28zl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000132020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-k28zl", "timestamp":"2025-01-13 21:37:03.289663821 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.303 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.355 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
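Each endpoint Calico wires up shows twice in systemd-networkd's output, once as "Link UP" and once as "Gained carrier", because the host side of the pod's veth pair (calic86e44ddb41, calid4b1cc66193, calid99404d51e3, ...) is created and brought up by the CNI plugin. A stdlib-only sketch to enumerate those host-side interfaces on such a node:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            // Calico names host-side veths with a "cali" prefix plus a hash.
            if strings.HasPrefix(ifc.Name, "cali") {
                fmt.Printf("%-16s up=%v\n", ifc.Name, ifc.Flags&net.FlagUp != 0)
            }
        }
    }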
Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.355 [INFO][4675] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.359 [INFO][4675] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" host="localhost" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.363 [INFO][4675] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.371 [INFO][4675] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.374 [INFO][4675] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.378 [INFO][4675] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.378 [INFO][4675] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" host="localhost" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.381 [INFO][4675] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.387 [INFO][4675] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" host="localhost" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.395 [INFO][4675] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" host="localhost" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.395 [INFO][4675] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" host="localhost" Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.395 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
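Every IPAM call above carries the same handle string: "k8s-pod-network." followed by the 64-hex-character containerd sandbox ID. That convention (inferred here from the handles in the log, not quoted from Calico's source) is what lets a later CNI DEL release exactly the addresses this sandbox claimed:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Pattern inferred from the handle strings printed in the log above.
    var handleRE = regexp.MustCompile(`^k8s-pod-network\.[0-9a-f]{64}$`)

    func main() {
        h := "k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c"
        fmt.Println(handleRE.MatchString(h)) // true
    }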
Jan 13 21:37:03.432945 containerd[1434]: 2025-01-13 21:37:03.395 [INFO][4675] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" HandleID="k8s-pod-network.7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Workload="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.434969 containerd[1434]: 2025-01-13 21:37:03.401 [INFO][4642] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Namespace="calico-system" Pod="csi-node-driver-k28zl" WorkloadEndpoint="localhost-k8s-csi--node--driver--k28zl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k28zl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c152f2aa-4163-46d5-8b4d-dd73349b1e5d", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k28zl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid99404d51e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:03.434969 containerd[1434]: 2025-01-13 21:37:03.401 [INFO][4642] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Namespace="calico-system" Pod="csi-node-driver-k28zl" WorkloadEndpoint="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.434969 containerd[1434]: 2025-01-13 21:37:03.401 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid99404d51e3 ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Namespace="calico-system" Pod="csi-node-driver-k28zl" WorkloadEndpoint="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.434969 containerd[1434]: 2025-01-13 21:37:03.406 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Namespace="calico-system" Pod="csi-node-driver-k28zl" WorkloadEndpoint="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.434969 containerd[1434]: 2025-01-13 21:37:03.408 [INFO][4642] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Namespace="calico-system" Pod="csi-node-driver-k28zl" WorkloadEndpoint="localhost-k8s-csi--node--driver--k28zl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k28zl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c152f2aa-4163-46d5-8b4d-dd73349b1e5d", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c", Pod:"csi-node-driver-k28zl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid99404d51e3", MAC:"2e:cc:fe:5d:a5:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:03.434969 containerd[1434]: 2025-01-13 21:37:03.420 [INFO][4642] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c" Namespace="calico-system" Pod="csi-node-driver-k28zl" WorkloadEndpoint="localhost-k8s-csi--node--driver--k28zl-eth0" Jan 13 21:37:03.439419 containerd[1434]: time="2025-01-13T21:37:03.438871425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:37:03.439419 containerd[1434]: time="2025-01-13T21:37:03.439365444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:37:03.439878 containerd[1434]: time="2025-01-13T21:37:03.439834660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:03.441538 containerd[1434]: time="2025-01-13T21:37:03.440195263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:03.444435 systemd[1]: Started cri-containerd-48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7.scope - libcontainer container 48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7. Jan 13 21:37:03.471120 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:37:03.475369 systemd-networkd[1368]: cali7443d196830: Link UP Jan 13 21:37:03.475429 systemd[1]: Started cri-containerd-f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85.scope - libcontainer container f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85. 
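The "Started cri-containerd-<id>.scope - libcontainer container <id>" lines show systemd creating one transient scope unit per container; with the systemd cgroup driver, that scope is where the container's cgroup lives. The probe below is a sketch only: the scope name comes straight from the log, but the mount point and slice layout are assumptions about a typical cgroup-v2 node, and kubelet-managed pods usually sit under kubepods.slice rather than system.slice:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        id := "48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7"
        // Assumed location; adjust the slice path for your cgroup driver/layout.
        path := fmt.Sprintf("/sys/fs/cgroup/system.slice/cri-containerd-%s.scope", id)
        if _, err := os.Stat(path); err == nil {
            fmt.Println("scope cgroup present at", path)
        } else {
            fmt.Println("not here; likely under kubepods.slice instead:", err)
        }
    }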
Jan 13 21:37:03.476296 systemd-networkd[1368]: cali7443d196830: Gained carrier Jan 13 21:37:03.496550 containerd[1434]: time="2025-01-13T21:37:03.496112505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:37:03.496550 containerd[1434]: time="2025-01-13T21:37:03.496165711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:37:03.496550 containerd[1434]: time="2025-01-13T21:37:03.496176793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:03.496550 containerd[1434]: time="2025-01-13T21:37:03.496414541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.243 [INFO][4632] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0 calico-apiserver-9789b5cdc- calico-apiserver df727e07-6bc8-419a-bee3-8ba5a16f82f7 927 0 2025-01-13 21:36:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9789b5cdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9789b5cdc-2mfqn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7443d196830 [] []}} ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2mfqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.243 [INFO][4632] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2mfqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.306 [INFO][4687] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" HandleID="k8s-pod-network.7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.325 [INFO][4687] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" HandleID="k8s-pod-network.7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9789b5cdc-2mfqn", "timestamp":"2025-01-13 21:37:03.306224508 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.325 
[INFO][4687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.396 [INFO][4687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.396 [INFO][4687] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.406 [INFO][4687] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" host="localhost" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.413 [INFO][4687] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.429 [INFO][4687] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.432 [INFO][4687] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.436 [INFO][4687] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.436 [INFO][4687] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" host="localhost" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.445 [INFO][4687] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.451 [INFO][4687] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" host="localhost" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.463 [INFO][4687] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" host="localhost" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.464 [INFO][4687] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" host="localhost" Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.464 [INFO][4687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
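Because the host-wide lock serializes the four concurrent CNI ADDs, each transaction queues behind the previous one: the calico-apiserver request above was stamped 21:37:03.306 but only acquired the lock at 21:37:03.396. The gap is easy to recover from the logged timestamps (the bracketed containerd times omit a zone, but the journal is UTC, so a +0000 suffix is assumed here):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Fractional seconds in the input are accepted even though the
        // layout string carries none.
        const layout = "2006-01-02 15:04:05 -0700 MST"
        requested, _ := time.Parse(layout, "2025-01-13 21:37:03.306224508 +0000 UTC")
        acquired, _ := time.Parse(layout, "2025-01-13 21:37:03.396 +0000 UTC")
        fmt.Println("lock wait:", acquired.Sub(requested)) // ≈ 90ms
    }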
Jan 13 21:37:03.501630 containerd[1434]: 2025-01-13 21:37:03.464 [INFO][4687] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" HandleID="k8s-pod-network.7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.502542 containerd[1434]: 2025-01-13 21:37:03.472 [INFO][4632] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2mfqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0", GenerateName:"calico-apiserver-9789b5cdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"df727e07-6bc8-419a-bee3-8ba5a16f82f7", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9789b5cdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9789b5cdc-2mfqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7443d196830", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:03.502542 containerd[1434]: 2025-01-13 21:37:03.472 [INFO][4632] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2mfqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.502542 containerd[1434]: 2025-01-13 21:37:03.472 [INFO][4632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7443d196830 ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2mfqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.502542 containerd[1434]: 2025-01-13 21:37:03.475 [INFO][4632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2mfqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.502542 containerd[1434]: 2025-01-13 21:37:03.477 [INFO][4632] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" 
Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2mfqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0", GenerateName:"calico-apiserver-9789b5cdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"df727e07-6bc8-419a-bee3-8ba5a16f82f7", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9789b5cdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d", Pod:"calico-apiserver-9789b5cdc-2mfqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7443d196830", MAC:"0e:e3:61:d2:ab:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:03.502542 containerd[1434]: 2025-01-13 21:37:03.486 [INFO][4632] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d" Namespace="calico-apiserver" Pod="calico-apiserver-9789b5cdc-2mfqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0" Jan 13 21:37:03.504130 containerd[1434]: time="2025-01-13T21:37:03.504091573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zvxj4,Uid:56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84,Namespace:kube-system,Attempt:1,} returns sandbox id \"48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7\"" Jan 13 21:37:03.505762 kubelet[2550]: E0113 21:37:03.505677 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:37:03.510670 containerd[1434]: time="2025-01-13T21:37:03.510589345Z" level=info msg="CreateContainer within sandbox \"48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:37:03.520315 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:37:03.525422 systemd[1]: Started cri-containerd-7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c.scope - libcontainer container 7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c. Jan 13 21:37:03.539385 containerd[1434]: time="2025-01-13T21:37:03.538958515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:37:03.539849 containerd[1434]: time="2025-01-13T21:37:03.539763370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:37:03.539849 containerd[1434]: time="2025-01-13T21:37:03.539791414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:03.540671 containerd[1434]: time="2025-01-13T21:37:03.539886745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:37:03.542291 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:37:03.545643 containerd[1434]: time="2025-01-13T21:37:03.545528735Z" level=info msg="CreateContainer within sandbox \"48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c845e37126a718be2b4198f7a34eaa09d9b0f8383436d55bbc3aed45bde8fc4e\"" Jan 13 21:37:03.547949 containerd[1434]: time="2025-01-13T21:37:03.547563617Z" level=info msg="StartContainer for \"c845e37126a718be2b4198f7a34eaa09d9b0f8383436d55bbc3aed45bde8fc4e\"" Jan 13 21:37:03.553606 containerd[1434]: time="2025-01-13T21:37:03.553436795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4z986,Uid:7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1,Namespace:kube-system,Attempt:1,} returns sandbox id \"f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85\"" Jan 13 21:37:03.554669 kubelet[2550]: E0113 21:37:03.554646 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:37:03.563134 containerd[1434]: time="2025-01-13T21:37:03.562614725Z" level=info msg="CreateContainer within sandbox \"f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:37:03.563388 systemd[1]: Started cri-containerd-7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d.scope - libcontainer container 7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d. Jan 13 21:37:03.574671 containerd[1434]: time="2025-01-13T21:37:03.574570825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k28zl,Uid:c152f2aa-4163-46d5-8b4d-dd73349b1e5d,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c\"" Jan 13 21:37:03.580219 containerd[1434]: time="2025-01-13T21:37:03.580166050Z" level=info msg="CreateContainer within sandbox \"f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30d4ea0e32c9dc4bcf03ea001c4b57156b9e16a5b1631d2a23f3f383558f0538\"" Jan 13 21:37:03.581399 systemd[1]: Started cri-containerd-c845e37126a718be2b4198f7a34eaa09d9b0f8383436d55bbc3aed45bde8fc4e.scope - libcontainer container c845e37126a718be2b4198f7a34eaa09d9b0f8383436d55bbc3aed45bde8fc4e. 
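From here the kubelet drives the ordinary CRI sequence against containerd: RunPodSandbox returns the sandbox ID, CreateContainer builds the coredns container inside that sandbox, and StartContainer launches it. A read-only sketch that lists sandboxes over the same gRPC API, assuming containerd's default socket path on this host (the metadata fields match the &PodSandboxMetadata{...} values printed in the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        rt := runtime.NewRuntimeServiceClient(conn)
        resp, err := rt.ListPodSandbox(ctx, &runtime.ListPodSandboxRequest{})
        if err != nil {
            panic(err)
        }
        for _, sb := range resp.Items {
            m := sb.Metadata // Name/Uid/Namespace/Attempt, as in the log
            fmt.Printf("%s %s/%s attempt=%d\n", sb.Id[:12], m.Namespace, m.Name, m.Attempt)
        }
    }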
Jan 13 21:37:03.583161 containerd[1434]: time="2025-01-13T21:37:03.582166847Z" level=info msg="StartContainer for \"30d4ea0e32c9dc4bcf03ea001c4b57156b9e16a5b1631d2a23f3f383558f0538\"" Jan 13 21:37:03.590736 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:37:03.620609 containerd[1434]: time="2025-01-13T21:37:03.620558848Z" level=info msg="StartContainer for \"c845e37126a718be2b4198f7a34eaa09d9b0f8383436d55bbc3aed45bde8fc4e\" returns successfully" Jan 13 21:37:03.622376 systemd[1]: Started cri-containerd-30d4ea0e32c9dc4bcf03ea001c4b57156b9e16a5b1631d2a23f3f383558f0538.scope - libcontainer container 30d4ea0e32c9dc4bcf03ea001c4b57156b9e16a5b1631d2a23f3f383558f0538. Jan 13 21:37:03.627836 containerd[1434]: time="2025-01-13T21:37:03.626882839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9789b5cdc-2mfqn,Uid:df727e07-6bc8-419a-bee3-8ba5a16f82f7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d\"" Jan 13 21:37:03.667037 containerd[1434]: time="2025-01-13T21:37:03.666957560Z" level=info msg="StartContainer for \"30d4ea0e32c9dc4bcf03ea001c4b57156b9e16a5b1631d2a23f3f383558f0538\" returns successfully" Jan 13 21:37:04.021760 containerd[1434]: time="2025-01-13T21:37:04.021708835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:04.022306 containerd[1434]: time="2025-01-13T21:37:04.022241298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 13 21:37:04.023036 containerd[1434]: time="2025-01-13T21:37:04.023011989Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:04.025278 containerd[1434]: time="2025-01-13T21:37:04.025164762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:04.025915 containerd[1434]: time="2025-01-13T21:37:04.025882126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.278576845s" Jan 13 21:37:04.025961 containerd[1434]: time="2025-01-13T21:37:04.025917130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 21:37:04.027538 containerd[1434]: time="2025-01-13T21:37:04.027427628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:37:04.028083 containerd[1434]: time="2025-01-13T21:37:04.028038940Z" level=info msg="CreateContainer within sandbox \"cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:37:04.037661 containerd[1434]: time="2025-01-13T21:37:04.037612826Z" level=info msg="CreateContainer within sandbox 
\"cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"09dfe31ceb734d026b46f26cdb01e2729d32deca9125afea8114e3c8253f87d6\"" Jan 13 21:37:04.038124 containerd[1434]: time="2025-01-13T21:37:04.038067119Z" level=info msg="StartContainer for \"09dfe31ceb734d026b46f26cdb01e2729d32deca9125afea8114e3c8253f87d6\"" Jan 13 21:37:04.063418 systemd[1]: Started cri-containerd-09dfe31ceb734d026b46f26cdb01e2729d32deca9125afea8114e3c8253f87d6.scope - libcontainer container 09dfe31ceb734d026b46f26cdb01e2729d32deca9125afea8114e3c8253f87d6. Jan 13 21:37:04.092944 containerd[1434]: time="2025-01-13T21:37:04.092875765Z" level=info msg="StartContainer for \"09dfe31ceb734d026b46f26cdb01e2729d32deca9125afea8114e3c8253f87d6\" returns successfully" Jan 13 21:37:04.097071 kubelet[2550]: E0113 21:37:04.097044 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:37:04.111809 kubelet[2550]: E0113 21:37:04.111511 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:37:04.138336 kubelet[2550]: I0113 21:37:04.137775 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zvxj4" podStartSLOduration=38.137757204 podStartE2EDuration="38.137757204s" podCreationTimestamp="2025-01-13 21:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:37:04.137538818 +0000 UTC m=+53.282474542" watchObservedRunningTime="2025-01-13 21:37:04.137757204 +0000 UTC m=+53.282692728" Jan 13 21:37:04.139228 kubelet[2550]: I0113 21:37:04.138937 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4z986" podStartSLOduration=38.138925701 podStartE2EDuration="38.138925701s" podCreationTimestamp="2025-01-13 21:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:37:04.125658941 +0000 UTC m=+53.270594465" watchObservedRunningTime="2025-01-13 21:37:04.138925701 +0000 UTC m=+53.283861185" Jan 13 21:37:04.513393 systemd-networkd[1368]: cali7443d196830: Gained IPv6LL Jan 13 21:37:05.026461 systemd-networkd[1368]: calic86e44ddb41: Gained IPv6LL Jan 13 21:37:05.117569 kubelet[2550]: E0113 21:37:05.117170 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:37:05.117569 kubelet[2550]: E0113 21:37:05.117395 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:37:05.145014 kubelet[2550]: I0113 21:37:05.144523 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9789b5cdc-2zzjl" podStartSLOduration=25.321494152 podStartE2EDuration="28.144504526s" podCreationTimestamp="2025-01-13 21:36:37 +0000 UTC" firstStartedPulling="2025-01-13 21:37:01.203699049 +0000 UTC m=+50.348634573" lastFinishedPulling="2025-01-13 21:37:04.026709423 +0000 UTC m=+53.171644947" observedRunningTime="2025-01-13 
21:37:05.133363428 +0000 UTC m=+54.278298912" watchObservedRunningTime="2025-01-13 21:37:05.144504526 +0000 UTC m=+54.289440050" Jan 13 21:37:05.281388 systemd-networkd[1368]: calid4b1cc66193: Gained IPv6LL Jan 13 21:37:05.314293 containerd[1434]: time="2025-01-13T21:37:05.314096644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:05.315291 containerd[1434]: time="2025-01-13T21:37:05.315258099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 21:37:05.317522 containerd[1434]: time="2025-01-13T21:37:05.316218851Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:05.318675 containerd[1434]: time="2025-01-13T21:37:05.318622691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:05.319405 containerd[1434]: time="2025-01-13T21:37:05.319353976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.291890864s" Jan 13 21:37:05.319503 containerd[1434]: time="2025-01-13T21:37:05.319486072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 21:37:05.321023 containerd[1434]: time="2025-01-13T21:37:05.320995127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:37:05.323370 containerd[1434]: time="2025-01-13T21:37:05.323314078Z" level=info msg="CreateContainer within sandbox \"7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:37:05.344666 containerd[1434]: time="2025-01-13T21:37:05.344618680Z" level=info msg="CreateContainer within sandbox \"7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e1e98dcf7cf713ec9ddfa81bd9e4be864801fd6c8b65eb00f9e662e75ee2162d\"" Jan 13 21:37:05.345229 containerd[1434]: time="2025-01-13T21:37:05.345206628Z" level=info msg="StartContainer for \"e1e98dcf7cf713ec9ddfa81bd9e4be864801fd6c8b65eb00f9e662e75ee2162d\"" Jan 13 21:37:05.346317 systemd-networkd[1368]: calid99404d51e3: Gained IPv6LL Jan 13 21:37:05.380417 systemd[1]: Started cri-containerd-e1e98dcf7cf713ec9ddfa81bd9e4be864801fd6c8b65eb00f9e662e75ee2162d.scope - libcontainer container e1e98dcf7cf713ec9ddfa81bd9e4be864801fd6c8b65eb00f9e662e75ee2162d. 
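The image events above also double as a rough bandwidth measurement: the csi image pull read 7,464,730 bytes over the wire in 1.291890864s. Reproducing the arithmetic from the logged figures:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        d, _ := time.ParseDuration("1.291890864s") // from the "Pulled image ...csi:v3.29.1" line
        const bytesRead = 7464730.0                // from "active requests=0, bytes read=7464730"
        fmt.Printf("%.2f MiB/s\n", bytesRead/d.Seconds()/(1<<20)) // ≈ 5.51 MiB/s
    }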
Jan 13 21:37:05.432881 containerd[1434]: time="2025-01-13T21:37:05.432765509Z" level=info msg="StartContainer for \"e1e98dcf7cf713ec9ddfa81bd9e4be864801fd6c8b65eb00f9e662e75ee2162d\" returns successfully" Jan 13 21:37:05.678293 containerd[1434]: time="2025-01-13T21:37:05.678249547Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:37:05.678881 containerd[1434]: time="2025-01-13T21:37:05.678725563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 21:37:05.680766 containerd[1434]: time="2025-01-13T21:37:05.680727276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 359.699785ms" Jan 13 21:37:05.680815 containerd[1434]: time="2025-01-13T21:37:05.680766681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 21:37:05.681601 containerd[1434]: time="2025-01-13T21:37:05.681580015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:37:05.683271 containerd[1434]: time="2025-01-13T21:37:05.682936894Z" level=info msg="CreateContainer within sandbox \"7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:37:05.693842 containerd[1434]: time="2025-01-13T21:37:05.693797079Z" level=info msg="CreateContainer within sandbox \"7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"43bb459bc36f19f1999400c53504fe2e5c8262296840cbed35dd62ba82c6490e\"" Jan 13 21:37:05.694243 containerd[1434]: time="2025-01-13T21:37:05.694217368Z" level=info msg="StartContainer for \"43bb459bc36f19f1999400c53504fe2e5c8262296840cbed35dd62ba82c6490e\"" Jan 13 21:37:05.717414 systemd[1]: Started cri-containerd-43bb459bc36f19f1999400c53504fe2e5c8262296840cbed35dd62ba82c6490e.scope - libcontainer container 43bb459bc36f19f1999400c53504fe2e5c8262296840cbed35dd62ba82c6490e. 
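The kubelet dns.go errors repeated throughout this window are benign but worth decoding: the node's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet drops the extras and applies only 1.1.1.1 1.0.0.1 8.8.8.8. A quick stdlib check for the same condition on a node:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > 3 {
            fmt.Printf("%d nameservers; only the first 3 are applied: %v\n",
                len(servers), servers[:3])
        } else {
            fmt.Println("within the 3-nameserver limit:", servers)
        }
    }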
Jan 13 21:37:05.745049 containerd[1434]: time="2025-01-13T21:37:05.744920715Z" level=info msg="StartContainer for \"43bb459bc36f19f1999400c53504fe2e5c8262296840cbed35dd62ba82c6490e\" returns successfully"
Jan 13 21:37:06.120860 kubelet[2550]: I0113 21:37:06.120745 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:37:06.121933 kubelet[2550]: E0113 21:37:06.121226 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:37:06.121933 kubelet[2550]: E0113 21:37:06.121631 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:37:06.133916 kubelet[2550]: I0113 21:37:06.133665 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9789b5cdc-2mfqn" podStartSLOduration=27.082150187 podStartE2EDuration="29.133648064s" podCreationTimestamp="2025-01-13 21:36:37 +0000 UTC" firstStartedPulling="2025-01-13 21:37:03.629966325 +0000 UTC m=+52.774901809" lastFinishedPulling="2025-01-13 21:37:05.681464082 +0000 UTC m=+54.826399686" observedRunningTime="2025-01-13 21:37:06.13249261 +0000 UTC m=+55.277428094" watchObservedRunningTime="2025-01-13 21:37:06.133648064 +0000 UTC m=+55.278583588"
Jan 13 21:37:07.100989 containerd[1434]: time="2025-01-13T21:37:07.100939091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:37:07.101895 containerd[1434]: time="2025-01-13T21:37:07.101692817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Jan 13 21:37:07.103373 containerd[1434]: time="2025-01-13T21:37:07.102546115Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:37:07.107260 containerd[1434]: time="2025-01-13T21:37:07.106133565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:37:07.107260 containerd[1434]: time="2025-01-13T21:37:07.107078793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.425467894s"
Jan 13 21:37:07.107260 containerd[1434]: time="2025-01-13T21:37:07.107110117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Jan 13 21:37:07.110050 containerd[1434]: time="2025-01-13T21:37:07.109991647Z" level=info msg="CreateContainer within sandbox \"7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 13 21:37:07.121609 containerd[1434]: time="2025-01-13T21:37:07.121489643Z" level=info msg="CreateContainer within sandbox \"7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a1e80d968cadadf4493fc8d78e90e784049eb2d181a1ba4885ef9686b9bf685d\""
Jan 13 21:37:07.122440 containerd[1434]: time="2025-01-13T21:37:07.122413509Z" level=info msg="StartContainer for \"a1e80d968cadadf4493fc8d78e90e784049eb2d181a1ba4885ef9686b9bf685d\""
Jan 13 21:37:07.125789 kubelet[2550]: I0113 21:37:07.125736 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:37:07.126311 kubelet[2550]: E0113 21:37:07.126290 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:37:07.126922 kubelet[2550]: E0113 21:37:07.126878 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:37:07.171436 systemd[1]: Started cri-containerd-a1e80d968cadadf4493fc8d78e90e784049eb2d181a1ba4885ef9686b9bf685d.scope - libcontainer container a1e80d968cadadf4493fc8d78e90e784049eb2d181a1ba4885ef9686b9bf685d.
Jan 13 21:37:07.198502 containerd[1434]: time="2025-01-13T21:37:07.198461256Z" level=info msg="StartContainer for \"a1e80d968cadadf4493fc8d78e90e784049eb2d181a1ba4885ef9686b9bf685d\" returns successfully"
Jan 13 21:37:07.877356 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:40198.service - OpenSSH per-connection server daemon (10.0.0.1:40198).
Jan 13 21:37:07.926325 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 40198 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:37:07.928124 sshd[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:37:07.934469 systemd-logind[1415]: New session 15 of user core.
Jan 13 21:37:07.940452 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:37:08.039313 kubelet[2550]: I0113 21:37:08.039271 2550 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 13 21:37:08.040883 kubelet[2550]: I0113 21:37:08.040842 2550 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 13 21:37:08.144332 kubelet[2550]: I0113 21:37:08.144036 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k28zl" podStartSLOduration=26.617140086 podStartE2EDuration="30.144018819s" podCreationTimestamp="2025-01-13 21:36:38 +0000 UTC" firstStartedPulling="2025-01-13 21:37:03.581709593 +0000 UTC m=+52.726645117" lastFinishedPulling="2025-01-13 21:37:07.108588326 +0000 UTC m=+56.253523850" observedRunningTime="2025-01-13 21:37:08.143648937 +0000 UTC m=+57.288584461" watchObservedRunningTime="2025-01-13 21:37:08.144018819 +0000 UTC m=+57.288954303"
Jan 13 21:37:08.191199 sshd[5178]: pam_unix(sshd:session): session closed for user core
Jan 13 21:37:08.196502 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:40198.service: Deactivated successfully.
Jan 13 21:37:08.199437 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:37:08.200036 systemd-logind[1415]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:37:08.201107 systemd-logind[1415]: Removed session 15.
Jan 13 21:37:08.692422 kubelet[2550]: E0113 21:37:08.692269 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:37:10.931428 containerd[1434]: time="2025-01-13T21:37:10.931384976Z" level=info msg="StopPodSandbox for \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\""
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.967 [WARNING][5230] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7", Pod:"coredns-7db6d8ff4d-zvxj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86e44ddb41", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.968 [INFO][5230] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.968 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" iface="eth0" netns=""
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.968 [INFO][5230] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.968 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.989 [INFO][5238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" HandleID="k8s-pod-network.6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0"
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.989 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.989 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.999 [WARNING][5238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" HandleID="k8s-pod-network.6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0"
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:10.999 [INFO][5238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" HandleID="k8s-pod-network.6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0"
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:11.000 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:11.004308 containerd[1434]: 2025-01-13 21:37:11.002 [INFO][5230] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:37:11.004768 containerd[1434]: time="2025-01-13T21:37:11.004350313Z" level=info msg="TearDown network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\" successfully"
Jan 13 21:37:11.004768 containerd[1434]: time="2025-01-13T21:37:11.004375271Z" level=info msg="StopPodSandbox for \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\" returns successfully"
Jan 13 21:37:11.004909 containerd[1434]: time="2025-01-13T21:37:11.004882704Z" level=info msg="RemovePodSandbox for \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\""
Jan 13 21:37:11.008906 containerd[1434]: time="2025-01-13T21:37:11.008860380Z" level=info msg="Forcibly stopping sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\""
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.041 [WARNING][5262] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"56c848b9-0e6b-4ed8-a9ca-fc40c4a3cd84", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48529d5af16841c073fea6714a6f0df236ba3e1206f970657c6d52cc529629e7", Pod:"coredns-7db6d8ff4d-zvxj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86e44ddb41", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.041 [INFO][5262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.041 [INFO][5262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" iface="eth0" netns=""
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.041 [INFO][5262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.041 [INFO][5262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.060 [INFO][5269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" HandleID="k8s-pod-network.6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0"
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.060 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.061 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.070 [WARNING][5269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" HandleID="k8s-pod-network.6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0"
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.070 [INFO][5269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" HandleID="k8s-pod-network.6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783" Workload="localhost-k8s-coredns--7db6d8ff4d--zvxj4-eth0"
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.072 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:11.076793 containerd[1434]: 2025-01-13 21:37:11.075 [INFO][5262] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783"
Jan 13 21:37:11.076793 containerd[1434]: time="2025-01-13T21:37:11.076622816Z" level=info msg="TearDown network for sandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\" successfully"
Jan 13 21:37:11.140534 containerd[1434]: time="2025-01-13T21:37:11.140467091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:37:11.140674 containerd[1434]: time="2025-01-13T21:37:11.140576081Z" level=info msg="RemovePodSandbox \"6e48158b40134072b238f0bf3f1e1a3906f5a11e86772f07c00e88e56bebe783\" returns successfully"
Jan 13 21:37:11.141183 containerd[1434]: time="2025-01-13T21:37:11.141146709Z" level=info msg="StopPodSandbox for \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\""
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.174 [WARNING][5298] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0", GenerateName:"calico-apiserver-9789b5cdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"df727e07-6bc8-419a-bee3-8ba5a16f82f7", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9789b5cdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d", Pod:"calico-apiserver-9789b5cdc-2mfqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7443d196830", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.174 [INFO][5298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.174 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" iface="eth0" netns=""
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.174 [INFO][5298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.174 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.194 [INFO][5305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" HandleID="k8s-pod-network.891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0"
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.194 [INFO][5305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.194 [INFO][5305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.202 [WARNING][5305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" HandleID="k8s-pod-network.891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0"
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.202 [INFO][5305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" HandleID="k8s-pod-network.891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0"
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.203 [INFO][5305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:11.207069 containerd[1434]: 2025-01-13 21:37:11.205 [INFO][5298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:37:11.207933 containerd[1434]: time="2025-01-13T21:37:11.207803806Z" level=info msg="TearDown network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\" successfully"
Jan 13 21:37:11.207933 containerd[1434]: time="2025-01-13T21:37:11.207834243Z" level=info msg="StopPodSandbox for \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\" returns successfully"
Jan 13 21:37:11.208626 containerd[1434]: time="2025-01-13T21:37:11.208487104Z" level=info msg="RemovePodSandbox for \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\""
Jan 13 21:37:11.208944 containerd[1434]: time="2025-01-13T21:37:11.208775277Z" level=info msg="Forcibly stopping sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\""
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.243 [WARNING][5327] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0", GenerateName:"calico-apiserver-9789b5cdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"df727e07-6bc8-419a-bee3-8ba5a16f82f7", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9789b5cdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e0079eaae4ea5ccca2cbf793e0f8d75bfe6ba6f93bfd6298805da0508ad274d", Pod:"calico-apiserver-9789b5cdc-2mfqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7443d196830", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.243 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.243 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" iface="eth0" netns=""
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.243 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.243 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.264 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" HandleID="k8s-pod-network.891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0"
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.264 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.264 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.272 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" HandleID="k8s-pod-network.891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0"
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.272 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" HandleID="k8s-pod-network.891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2mfqn-eth0"
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.274 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:11.278001 containerd[1434]: 2025-01-13 21:37:11.275 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9"
Jan 13 21:37:11.278449 containerd[1434]: time="2025-01-13T21:37:11.278052775Z" level=info msg="TearDown network for sandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\" successfully"
Jan 13 21:37:11.297426 containerd[1434]: time="2025-01-13T21:37:11.297372366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:37:11.297547 containerd[1434]: time="2025-01-13T21:37:11.297453958Z" level=info msg="RemovePodSandbox \"891c15be35625d37d29b41e22e121b44190a2a5e74c5c2fb08bbc46c1a3be0a9\" returns successfully"
Jan 13 21:37:11.298227 containerd[1434]: time="2025-01-13T21:37:11.297937074Z" level=info msg="StopPodSandbox for \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\""
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.330 [WARNING][5356] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k28zl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c152f2aa-4163-46d5-8b4d-dd73349b1e5d", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c", Pod:"csi-node-driver-k28zl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid99404d51e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.330 [INFO][5356] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.330 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" iface="eth0" netns=""
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.330 [INFO][5356] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.330 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.349 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" HandleID="k8s-pod-network.2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Workload="localhost-k8s-csi--node--driver--k28zl-eth0"
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.349 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.349 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.357 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" HandleID="k8s-pod-network.2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Workload="localhost-k8s-csi--node--driver--k28zl-eth0"
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.357 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" HandleID="k8s-pod-network.2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Workload="localhost-k8s-csi--node--driver--k28zl-eth0"
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.359 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:11.362834 containerd[1434]: 2025-01-13 21:37:11.361 [INFO][5356] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:37:11.363351 containerd[1434]: time="2025-01-13T21:37:11.363319888Z" level=info msg="TearDown network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\" successfully"
Jan 13 21:37:11.363414 containerd[1434]: time="2025-01-13T21:37:11.363400201Z" level=info msg="StopPodSandbox for \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\" returns successfully"
Jan 13 21:37:11.363915 containerd[1434]: time="2025-01-13T21:37:11.363884436Z" level=info msg="RemovePodSandbox for \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\""
Jan 13 21:37:11.363985 containerd[1434]: time="2025-01-13T21:37:11.363919713Z" level=info msg="Forcibly stopping sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\""
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.400 [WARNING][5386] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k28zl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c152f2aa-4163-46d5-8b4d-dd73349b1e5d", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e63ed154a38a73ba4142a9a53b9a4e6839aaebb3a62b3ad80b184137318762c", Pod:"csi-node-driver-k28zl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid99404d51e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.400 [INFO][5386] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.400 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" iface="eth0" netns=""
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.400 [INFO][5386] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.400 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.421 [INFO][5393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" HandleID="k8s-pod-network.2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Workload="localhost-k8s-csi--node--driver--k28zl-eth0"
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.421 [INFO][5393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.421 [INFO][5393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.429 [WARNING][5393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" HandleID="k8s-pod-network.2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Workload="localhost-k8s-csi--node--driver--k28zl-eth0"
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.429 [INFO][5393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" HandleID="k8s-pod-network.2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b" Workload="localhost-k8s-csi--node--driver--k28zl-eth0"
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.431 [INFO][5393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:11.435280 containerd[1434]: 2025-01-13 21:37:11.432 [INFO][5386] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b"
Jan 13 21:37:11.435280 containerd[1434]: time="2025-01-13T21:37:11.434545287Z" level=info msg="TearDown network for sandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\" successfully"
Jan 13 21:37:11.437379 containerd[1434]: time="2025-01-13T21:37:11.437327712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:37:11.437450 containerd[1434]: time="2025-01-13T21:37:11.437413784Z" level=info msg="RemovePodSandbox \"2b7b396043dd8ca5c9f95994e8fc2e8cbf830bbad8339a736de031c67bc9173b\" returns successfully"
Jan 13 21:37:11.437864 containerd[1434]: time="2025-01-13T21:37:11.437823227Z" level=info msg="StopPodSandbox for \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\""
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.473 [WARNING][5415] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0", GenerateName:"calico-apiserver-9789b5cdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"52f5badb-75da-429f-ac03-b7fa7b564ae8", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9789b5cdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a", Pod:"calico-apiserver-9789b5cdc-2zzjl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb0e78c8c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.474 [INFO][5415] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.474 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" iface="eth0" netns=""
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.474 [INFO][5415] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.474 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.493 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" HandleID="k8s-pod-network.5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0"
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.494 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.494 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.502 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" HandleID="k8s-pod-network.5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0"
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.502 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" HandleID="k8s-pod-network.5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0"
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.503 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:11.507560 containerd[1434]: 2025-01-13 21:37:11.505 [INFO][5415] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:37:11.507560 containerd[1434]: time="2025-01-13T21:37:11.507537404Z" level=info msg="TearDown network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\" successfully"
Jan 13 21:37:11.507560 containerd[1434]: time="2025-01-13T21:37:11.507562802Z" level=info msg="StopPodSandbox for \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\" returns successfully"
Jan 13 21:37:11.508047 containerd[1434]: time="2025-01-13T21:37:11.508000882Z" level=info msg="RemovePodSandbox for \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\""
Jan 13 21:37:11.508047 containerd[1434]: time="2025-01-13T21:37:11.508036119Z" level=info msg="Forcibly stopping sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\""
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.552 [WARNING][5445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0", GenerateName:"calico-apiserver-9789b5cdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"52f5badb-75da-429f-ac03-b7fa7b564ae8", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9789b5cdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc9a12a1c8485ea35d89069da7792a92a0cd88e481ba45c92ee85e2587c9713a", Pod:"calico-apiserver-9789b5cdc-2zzjl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb0e78c8c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.552 [INFO][5445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.552 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" iface="eth0" netns=""
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.552 [INFO][5445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.552 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.577 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" HandleID="k8s-pod-network.5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0"
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.578 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.578 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.587 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" HandleID="k8s-pod-network.5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0"
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.587 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" HandleID="k8s-pod-network.5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21" Workload="localhost-k8s-calico--apiserver--9789b5cdc--2zzjl-eth0"
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.589 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:11.594991 containerd[1434]: 2025-01-13 21:37:11.591 [INFO][5445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21"
Jan 13 21:37:11.595643 containerd[1434]: time="2025-01-13T21:37:11.595034914Z" level=info msg="TearDown network for sandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\" successfully"
Jan 13 21:37:11.598592 containerd[1434]: time="2025-01-13T21:37:11.598545952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:37:11.598690 containerd[1434]: time="2025-01-13T21:37:11.598619625Z" level=info msg="RemovePodSandbox \"5672a5752862319a4945e161482a36ec7826448549e07c8b1d54aa2c208d7c21\" returns successfully"
Jan 13 21:37:11.599275 containerd[1434]: time="2025-01-13T21:37:11.599195573Z" level=info msg="StopPodSandbox for \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\""
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.640 [WARNING][5476] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4z986-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85", Pod:"coredns-7db6d8ff4d-4z986", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4b1cc66193", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.640 [INFO][5476] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.640 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" iface="eth0" netns=""
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.641 [INFO][5476] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.641 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.671 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" HandleID="k8s-pod-network.dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0"
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.671 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.671 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.679 [WARNING][5484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" HandleID="k8s-pod-network.dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0"
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.679 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" HandleID="k8s-pod-network.dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0"
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.682 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:37:11.687388 containerd[1434]: 2025-01-13 21:37:11.685 [INFO][5476] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"
Jan 13 21:37:11.687388 containerd[1434]: time="2025-01-13T21:37:11.687222394Z" level=info msg="TearDown network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\" successfully"
Jan 13 21:37:11.687388 containerd[1434]: time="2025-01-13T21:37:11.687267309Z" level=info msg="StopPodSandbox for \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\" returns successfully"
Jan 13 21:37:11.687961 containerd[1434]: time="2025-01-13T21:37:11.687899572Z" level=info msg="RemovePodSandbox for \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\""
Jan 13 21:37:11.687999 containerd[1434]: time="2025-01-13T21:37:11.687959526Z" level=info msg="Forcibly stopping sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\""
Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.736 [WARNING][5506] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4z986-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7b2fa236-b9af-4d0f-a29e-6bc43e2ce6d1", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f12c25c3c7a814fe770792d35e248bbf17513fadaa0605167a7b66e5cc507f85", Pod:"coredns-7db6d8ff4d-4z986", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4b1cc66193", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.736 [INFO][5506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"
Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.736 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" iface="eth0" netns=""
Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.736 [INFO][5506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"
Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.736 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb"
Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.759 [INFO][5514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" HandleID="k8s-pod-network.dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0"
Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.759 [INFO][5514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.759 [INFO][5514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.767 [WARNING][5514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" HandleID="k8s-pod-network.dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.767 [INFO][5514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" HandleID="k8s-pod-network.dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Workload="localhost-k8s-coredns--7db6d8ff4d--4z986-eth0" Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.768 [INFO][5514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:37:11.772343 containerd[1434]: 2025-01-13 21:37:11.770 [INFO][5506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb" Jan 13 21:37:11.772343 containerd[1434]: time="2025-01-13T21:37:11.771962595Z" level=info msg="TearDown network for sandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\" successfully" Jan 13 21:37:11.775705 containerd[1434]: time="2025-01-13T21:37:11.775655177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:37:11.775797 containerd[1434]: time="2025-01-13T21:37:11.775721451Z" level=info msg="RemovePodSandbox \"dbbb67243390b83c4ddc531e1394d888156aaa19d8573f76452be004edf6ddfb\" returns successfully" Jan 13 21:37:11.776298 containerd[1434]: time="2025-01-13T21:37:11.776276040Z" level=info msg="StopPodSandbox for \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\"" Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.814 [WARNING][5537] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0", GenerateName:"calico-kube-controllers-c685fc75-", Namespace:"calico-system", SelfLink:"", UID:"6031707c-fbd2-45fb-819f-7634d8a3b502", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c685fc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec", Pod:"calico-kube-controllers-c685fc75-cgwpm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali034153ba86e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.814 [INFO][5537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.814 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" iface="eth0" netns="" Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.814 [INFO][5537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.814 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.833 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" HandleID="k8s-pod-network.42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.833 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.833 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.841 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" HandleID="k8s-pod-network.42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.841 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" HandleID="k8s-pod-network.42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.843 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:37:11.846103 containerd[1434]: 2025-01-13 21:37:11.844 [INFO][5537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Jan 13 21:37:11.847216 containerd[1434]: time="2025-01-13T21:37:11.846172161Z" level=info msg="TearDown network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\" successfully" Jan 13 21:37:11.847216 containerd[1434]: time="2025-01-13T21:37:11.846197879Z" level=info msg="StopPodSandbox for \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\" returns successfully" Jan 13 21:37:11.847216 containerd[1434]: time="2025-01-13T21:37:11.846664036Z" level=info msg="RemovePodSandbox for \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\"" Jan 13 21:37:11.847216 containerd[1434]: time="2025-01-13T21:37:11.846697073Z" level=info msg="Forcibly stopping sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\"" Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.879 [WARNING][5568] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0", GenerateName:"calico-kube-controllers-c685fc75-", Namespace:"calico-system", SelfLink:"", UID:"6031707c-fbd2-45fb-819f-7634d8a3b502", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c685fc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a6e460f453254c71f2bd26d410a361e1d701110ba5c9df283f91bfb8b4eacec", Pod:"calico-kube-controllers-c685fc75-cgwpm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali034153ba86e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.880 [INFO][5568] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.880 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" iface="eth0" netns="" Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.880 [INFO][5568] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.880 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.901 [INFO][5575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" HandleID="k8s-pod-network.42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.901 [INFO][5575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.901 [INFO][5575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.909 [WARNING][5575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" HandleID="k8s-pod-network.42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.909 [INFO][5575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" HandleID="k8s-pod-network.42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Workload="localhost-k8s-calico--kube--controllers--c685fc75--cgwpm-eth0" Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.911 [INFO][5575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:37:11.914255 containerd[1434]: 2025-01-13 21:37:11.912 [INFO][5568] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7" Jan 13 21:37:11.914788 containerd[1434]: time="2025-01-13T21:37:11.914289685Z" level=info msg="TearDown network for sandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\" successfully" Jan 13 21:37:11.943300 containerd[1434]: time="2025-01-13T21:37:11.943247074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:37:11.943671 containerd[1434]: time="2025-01-13T21:37:11.943330386Z" level=info msg="RemovePodSandbox \"42a98326c1159d63275d7b3b2fdd3c704bd6cf052f1aec3ee9e0f7ab0591c8c7\" returns successfully" Jan 13 21:37:13.206048 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:47880.service - OpenSSH per-connection server daemon (10.0.0.1:47880). Jan 13 21:37:13.257334 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 47880 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:37:13.258884 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:37:13.263030 systemd-logind[1415]: New session 16 of user core. Jan 13 21:37:13.278201 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:37:13.450554 sshd[5583]: pam_unix(sshd:session): session closed for user core Jan 13 21:37:13.461809 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:47880.service: Deactivated successfully. Jan 13 21:37:13.463844 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:37:13.465124 systemd-logind[1415]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:37:13.469750 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:47892.service - OpenSSH per-connection server daemon (10.0.0.1:47892). Jan 13 21:37:13.471062 systemd-logind[1415]: Removed session 16. Jan 13 21:37:13.503704 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 47892 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:37:13.504883 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:37:13.508322 systemd-logind[1415]: New session 17 of user core. Jan 13 21:37:13.518467 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:37:13.722076 sshd[5597]: pam_unix(sshd:session): session closed for user core Jan 13 21:37:13.728959 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:47892.service: Deactivated successfully. 
Jan 13 21:37:13.731873 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:37:13.733169 systemd-logind[1415]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:37:13.737515 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:47902.service - OpenSSH per-connection server daemon (10.0.0.1:47902).
Jan 13 21:37:13.738509 systemd-logind[1415]: Removed session 17.
Jan 13 21:37:13.780916 sshd[5612]: Accepted publickey for core from 10.0.0.1 port 47902 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:37:13.782691 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:37:13.787294 systemd-logind[1415]: New session 18 of user core.
Jan 13 21:37:13.791418 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:37:15.269957 sshd[5612]: pam_unix(sshd:session): session closed for user core
Jan 13 21:37:15.277654 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:47902.service: Deactivated successfully.
Jan 13 21:37:15.280427 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:37:15.283862 systemd-logind[1415]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:37:15.293556 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:47914.service - OpenSSH per-connection server daemon (10.0.0.1:47914).
Jan 13 21:37:15.295750 systemd-logind[1415]: Removed session 18.
Jan 13 21:37:15.330941 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 47914 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:37:15.332331 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:37:15.336126 systemd-logind[1415]: New session 19 of user core.
Jan 13 21:37:15.351446 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:37:15.615378 sshd[5639]: pam_unix(sshd:session): session closed for user core
Jan 13 21:37:15.623928 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:47914.service: Deactivated successfully.
Jan 13 21:37:15.626866 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:37:15.629202 systemd-logind[1415]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:37:15.636509 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:47926.service - OpenSSH per-connection server daemon (10.0.0.1:47926).
Jan 13 21:37:15.638542 systemd-logind[1415]: Removed session 19.
Jan 13 21:37:15.670361 sshd[5653]: Accepted publickey for core from 10.0.0.1 port 47926 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:37:15.672853 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:37:15.677518 systemd-logind[1415]: New session 20 of user core.
Jan 13 21:37:15.689441 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:37:15.836179 sshd[5653]: pam_unix(sshd:session): session closed for user core
Jan 13 21:37:15.839870 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:47926.service: Deactivated successfully.
Jan 13 21:37:15.841518 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:37:15.842096 systemd-logind[1415]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:37:15.842878 systemd-logind[1415]: Removed session 20.
Jan 13 21:37:20.848115 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:47934.service - OpenSSH per-connection server daemon (10.0.0.1:47934).
Jan 13 21:37:20.890297 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 47934 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:37:20.891897 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:37:20.898624 systemd-logind[1415]: New session 21 of user core.
Jan 13 21:37:20.906434 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:37:21.051318 sshd[5689]: pam_unix(sshd:session): session closed for user core
Jan 13 21:37:21.055137 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:47934.service: Deactivated successfully.
Jan 13 21:37:21.056824 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:37:21.057622 systemd-logind[1415]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:37:21.058327 systemd-logind[1415]: Removed session 21.
Jan 13 21:37:26.061951 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:56718.service - OpenSSH per-connection server daemon (10.0.0.1:56718).
Jan 13 21:37:26.101140 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 56718 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:37:26.102347 sshd[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:37:26.105783 systemd-logind[1415]: New session 22 of user core.
Jan 13 21:37:26.116376 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:37:26.251480 sshd[5706]: pam_unix(sshd:session): session closed for user core
Jan 13 21:37:26.254896 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:56718.service: Deactivated successfully.
Jan 13 21:37:26.256708 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:37:26.258617 systemd-logind[1415]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:37:26.259400 systemd-logind[1415]: Removed session 22.
Jan 13 21:37:29.928515 kubelet[2550]: E0113 21:37:29.928474 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:37:31.261867 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:56722.service - OpenSSH per-connection server daemon (10.0.0.1:56722).
Jan 13 21:37:31.300592 sshd[5724]: Accepted publickey for core from 10.0.0.1 port 56722 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:37:31.301863 sshd[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:37:31.305386 systemd-logind[1415]: New session 23 of user core.
Jan 13 21:37:31.314393 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:37:31.434935 sshd[5724]: pam_unix(sshd:session): session closed for user core
Jan 13 21:37:31.437472 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:56722.service: Deactivated successfully.
Jan 13 21:37:31.439471 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:37:31.440723 systemd-logind[1415]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:37:31.441457 systemd-logind[1415]: Removed session 23.
Jan 13 21:37:31.929373 kubelet[2550]: E0113 21:37:31.929332 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"