Jan 30 13:00:05.973708 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:00:05.973730 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 13:00:05.973772 kernel: KASLR enabled
Jan 30 13:00:05.973777 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:00:05.973783 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 30 13:00:05.973789 kernel: random: crng init done
Jan 30 13:00:05.973797 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:00:05.973803 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 30 13:00:05.973814 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:00:05.973822 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:00:05.973829 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:00:05.973835 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:00:05.973841 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:00:05.973847 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:00:05.973854 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:00:05.973873 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:00:05.973880 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:00:05.973887 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:00:05.973893 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 30 13:00:05.973899 kernel: NUMA: Failed to initialise from firmware
Jan 30 13:00:05.973906 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:00:05.973912 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 30 13:00:05.973947 kernel: Zone ranges:
Jan 30 13:00:05.973953 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:00:05.973959 kernel: DMA32 empty
Jan 30 13:00:05.973967 kernel: Normal empty
Jan 30 13:00:05.973973 kernel: Movable zone start for each node
Jan 30 13:00:05.973980 kernel: Early memory node ranges
Jan 30 13:00:05.973986 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 30 13:00:05.973993 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 30 13:00:05.973999 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 30 13:00:05.974005 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 30 13:00:05.974012 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 30 13:00:05.974018 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 30 13:00:05.974025 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 30 13:00:05.974031 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:00:05.974038 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 30 13:00:05.974046 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:00:05.974053 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:00:05.974060 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:00:05.974105 kernel: psci: Trusted OS migration not required
Jan 30 13:00:05.974114 kernel: psci: SMC Calling Convention v1.1
Jan 30 13:00:05.974121 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 13:00:05.974130 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:00:05.974144 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:00:05.974152 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 30 13:00:05.974165 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:00:05.974174 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:00:05.974181 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:00:05.974188 kernel: CPU features: detected: Spectre-v4
Jan 30 13:00:05.974195 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:00:05.974202 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:00:05.974209 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:00:05.974221 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:00:05.974246 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:00:05.974253 kernel: alternatives: applying boot alternatives
Jan 30 13:00:05.974261 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 13:00:05.974268 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:00:05.974280 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:00:05.974287 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:00:05.974295 kernel: Fallback order for Node 0: 0
Jan 30 13:00:05.974302 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 30 13:00:05.974308 kernel: Policy zone: DMA
Jan 30 13:00:05.974315 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:00:05.974323 kernel: software IO TLB: area num 4.
Jan 30 13:00:05.974330 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 30 13:00:05.974351 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 30 13:00:05.974358 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:00:05.974387 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:00:05.974396 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:00:05.974409 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:00:05.974416 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:00:05.974423 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:00:05.974430 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:00:05.974437 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:00:05.974443 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:00:05.974453 kernel: GICv3: 256 SPIs implemented
Jan 30 13:00:05.974459 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:00:05.974466 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:00:05.974473 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:00:05.974479 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 13:00:05.974486 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 13:00:05.974493 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 13:00:05.974500 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 13:00:05.974507 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 30 13:00:05.974513 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 30 13:00:05.974520 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:00:05.974528 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:00:05.974535 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:00:05.974542 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:00:05.974549 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:00:05.974556 kernel: arm-pv: using stolen time PV
Jan 30 13:00:05.974563 kernel: Console: colour dummy device 80x25
Jan 30 13:00:05.974570 kernel: ACPI: Core revision 20230628
Jan 30 13:00:05.974577 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:00:05.974584 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:00:05.974591 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:00:05.974600 kernel: landlock: Up and running.
Jan 30 13:00:05.974607 kernel: SELinux: Initializing.
Jan 30 13:00:05.974614 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:00:05.974621 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:00:05.974628 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:00:05.974635 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:00:05.974642 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:00:05.974649 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:00:05.974656 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 13:00:05.974665 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 13:00:05.974672 kernel: Remapping and enabling EFI services.
Jan 30 13:00:05.974679 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:00:05.974685 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:00:05.974692 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 13:00:05.974700 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 30 13:00:05.974726 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:00:05.974749 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:00:05.974756 kernel: Detected PIPT I-cache on CPU2
Jan 30 13:00:05.974763 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 30 13:00:05.974772 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 30 13:00:05.974784 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:00:05.974796 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 30 13:00:05.974805 kernel: Detected PIPT I-cache on CPU3
Jan 30 13:00:05.974812 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 30 13:00:05.974820 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 30 13:00:05.974827 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:00:05.974834 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 30 13:00:05.974842 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:00:05.974851 kernel: SMP: Total of 4 processors activated.
Jan 30 13:00:05.974858 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:00:05.974866 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:00:05.974873 kernel: CPU features: detected: Common not Private translations
Jan 30 13:00:05.974889 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:00:05.974897 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 13:00:05.974904 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:00:05.974911 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:00:05.974920 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:00:05.974928 kernel: CPU features: detected: RAS Extension Support
Jan 30 13:00:05.974935 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 13:00:05.974943 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:00:05.974950 kernel: alternatives: applying system-wide alternatives
Jan 30 13:00:05.974957 kernel: devtmpfs: initialized
Jan 30 13:00:05.974964 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:00:05.974972 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:00:05.974979 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:00:05.974988 kernel: SMBIOS 3.0.0 present.
Jan 30 13:00:05.974996 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 30 13:00:05.975003 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:00:05.975010 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:00:05.975017 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:00:05.975025 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:00:05.975032 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:00:05.975039 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jan 30 13:00:05.975046 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:00:05.975055 kernel: cpuidle: using governor menu
Jan 30 13:00:05.975062 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:00:05.975070 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:00:05.975077 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:00:05.975084 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:00:05.975091 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:00:05.975098 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:00:05.975106 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 13:00:05.975113 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:00:05.975125 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:00:05.975132 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:00:05.975140 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:00:05.975148 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:00:05.975155 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:00:05.975167 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:00:05.975174 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:00:05.975181 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:00:05.975189 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:00:05.975198 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:00:05.975219 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:00:05.975226 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:00:05.975233 kernel: ACPI: Interpreter enabled
Jan 30 13:00:05.975242 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:00:05.975249 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:00:05.975256 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:00:05.975264 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:00:05.975271 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:00:05.975429 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:00:05.975510 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:00:05.975576 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:00:05.975641 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 13:00:05.975705 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 13:00:05.975715 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 13:00:05.975722 kernel: PCI host bridge to bus 0000:00
Jan 30 13:00:05.975808 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 13:00:05.975869 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:00:05.975929 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 13:00:05.975988 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:00:05.976067 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 13:00:05.976144 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:00:05.976228 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 13:00:05.976304 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 13:00:05.976393 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:00:05.976465 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:00:05.976534 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 13:00:05.976617 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 13:00:05.976681 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 13:00:05.976747 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:00:05.976809 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 13:00:05.976819 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:00:05.976827 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:00:05.976835 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:00:05.976842 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:00:05.976850 kernel: iommu: Default domain type: Translated
Jan 30 13:00:05.976858 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:00:05.976867 kernel: efivars: Registered efivars operations
Jan 30 13:00:05.976875 kernel: vgaarb: loaded
Jan 30 13:00:05.976882 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:00:05.976890 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:00:05.976898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:00:05.976905 kernel: pnp: PnP ACPI init
Jan 30 13:00:05.976991 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 13:00:05.977002 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:00:05.977009 kernel: NET: Registered PF_INET protocol family
Jan 30 13:00:05.977019 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:00:05.977027 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:00:05.977034 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:00:05.977042 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:00:05.977050 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:00:05.977057 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:00:05.977065 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:00:05.977072 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:00:05.977082 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:00:05.977089 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:00:05.977097 kernel: kvm [1]: HYP mode not available
Jan 30 13:00:05.977104 kernel: Initialise system trusted keyrings
Jan 30 13:00:05.977112 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:00:05.977119 kernel: Key type asymmetric registered
Jan 30 13:00:05.977126 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:00:05.977134 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:00:05.977141 kernel: io scheduler mq-deadline registered
Jan 30 13:00:05.977149 kernel: io scheduler kyber registered
Jan 30 13:00:05.977158 kernel: io scheduler bfq registered
Jan 30 13:00:05.977171 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 13:00:05.977179 kernel: ACPI: button: Power Button [PWRB]
Jan 30 13:00:05.977187 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 13:00:05.977262 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 13:00:05.977272 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:00:05.977279 kernel: thunder_xcv, ver 1.0
Jan 30 13:00:05.977287 kernel: thunder_bgx, ver 1.0
Jan 30 13:00:05.977294 kernel: nicpf, ver 1.0
Jan 30 13:00:05.977304 kernel: nicvf, ver 1.0
Jan 30 13:00:05.977536 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:00:05.977615 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:00:05 UTC (1738242005)
Jan 30 13:00:05.977626 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:00:05.977634 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 13:00:05.977641 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:00:05.977649 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:00:05.977657 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:00:05.977669 kernel: Segment Routing with IPv6
Jan 30 13:00:05.977679 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:00:05.977687 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:00:05.977694 kernel: Key type dns_resolver registered
Jan 30 13:00:05.977702 kernel: registered taskstats version 1
Jan 30 13:00:05.977709 kernel: Loading compiled-in X.509 certificates
Jan 30 13:00:05.977717 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 13:00:05.977725 kernel: Key type .fscrypt registered
Jan 30 13:00:05.977732 kernel: Key type fscrypt-provisioning registered
Jan 30 13:00:05.977742 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:00:05.977749 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:00:05.977757 kernel: ima: No architecture policies found
Jan 30 13:00:05.977765 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:00:05.977772 kernel: clk: Disabling unused clocks
Jan 30 13:00:05.977779 kernel: Freeing unused kernel memory: 39360K
Jan 30 13:00:05.977787 kernel: Run /init as init process
Jan 30 13:00:05.977794 kernel: with arguments:
Jan 30 13:00:05.977802 kernel: /init
Jan 30 13:00:05.977811 kernel: with environment:
Jan 30 13:00:05.977818 kernel: HOME=/
Jan 30 13:00:05.977825 kernel: TERM=linux
Jan 30 13:00:05.977832 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:00:05.977842 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:00:05.977852 systemd[1]: Detected virtualization kvm.
Jan 30 13:00:05.977860 systemd[1]: Detected architecture arm64.
Jan 30 13:00:05.977870 systemd[1]: Running in initrd.
Jan 30 13:00:05.977880 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:00:05.977888 systemd[1]: Hostname set to <localhost>.
Jan 30 13:00:05.977897 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:00:05.977905 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:00:05.977913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:00:05.977921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:00:05.977930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:00:05.977939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:00:05.977948 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:00:05.977956 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:00:05.977965 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:00:05.977974 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:00:05.977982 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:00:05.977990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:00:05.978000 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:00:05.978008 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:00:05.978017 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:00:05.978025 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:00:05.978033 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:00:05.978041 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:00:05.978049 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:00:05.978057 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:00:05.978065 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:00:05.978076 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:00:05.978084 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:00:05.978092 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:00:05.978100 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:00:05.978108 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:00:05.978116 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:00:05.978125 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:00:05.978133 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:00:05.978142 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:00:05.978150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:00:05.978159 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:00:05.978176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:00:05.978184 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:00:05.978193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:00:05.978204 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:00:05.978234 systemd-journald[238]: Collecting audit messages is disabled.
Jan 30 13:00:05.978254 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:00:05.978264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:00:05.978273 systemd-journald[238]: Journal started
Jan 30 13:00:05.978293 systemd-journald[238]: Runtime Journal (/run/log/journal/5b19f913b4704ad987a11d1c6eb054a7) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:00:05.969154 systemd-modules-load[239]: Inserted module 'overlay'
Jan 30 13:00:05.980221 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:00:05.985392 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:00:05.985436 kernel: Bridge firewalling registered
Jan 30 13:00:05.983066 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:00:05.985528 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:00:05.985861 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 30 13:00:05.987052 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:00:05.990539 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:00:05.995778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:00:06.002194 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:00:06.006511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:00:06.026575 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:00:06.027817 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:00:06.031963 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:00:06.036396 dracut-cmdline[274]: dracut-dracut-053
Jan 30 13:00:06.044743 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 13:00:06.062753 systemd-resolved[279]: Positive Trust Anchors:
Jan 30 13:00:06.062775 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:00:06.062808 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:00:06.067743 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 30 13:00:06.069904 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:00:06.070908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:00:06.116392 kernel: SCSI subsystem initialized
Jan 30 13:00:06.121383 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:00:06.128409 kernel: iscsi: registered transport (tcp)
Jan 30 13:00:06.141386 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:00:06.141435 kernel: QLogic iSCSI HBA Driver
Jan 30 13:00:06.184435 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:00:06.191572 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:00:06.210025 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:00:06.210107 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:00:06.211384 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:00:06.259397 kernel: raid6: neonx8 gen() 15786 MB/s
Jan 30 13:00:06.276385 kernel: raid6: neonx4 gen() 15644 MB/s
Jan 30 13:00:06.293384 kernel: raid6: neonx2 gen() 13236 MB/s
Jan 30 13:00:06.310381 kernel: raid6: neonx1 gen() 10485 MB/s
Jan 30 13:00:06.327381 kernel: raid6: int64x8 gen() 6966 MB/s
Jan 30 13:00:06.344380 kernel: raid6: int64x4 gen() 7344 MB/s
Jan 30 13:00:06.361392 kernel: raid6: int64x2 gen() 6125 MB/s
Jan 30 13:00:06.378386 kernel: raid6: int64x1 gen() 5053 MB/s
Jan 30 13:00:06.378405 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
Jan 30 13:00:06.395389 kernel: raid6: .... xor() 11908 MB/s, rmw enabled
Jan 30 13:00:06.395404 kernel: raid6: using neon recovery algorithm
Jan 30 13:00:06.402805 kernel: xor: measuring software checksum speed
Jan 30 13:00:06.402829 kernel: 8regs : 19797 MB/sec
Jan 30 13:00:06.403398 kernel: 32regs : 19641 MB/sec
Jan 30 13:00:06.404406 kernel: arm64_neon : 26998 MB/sec
Jan 30 13:00:06.404420 kernel: xor: using function: arm64_neon (26998 MB/sec)
Jan 30 13:00:06.461406 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:00:06.472903 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:00:06.488556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:00:06.501023 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 30 13:00:06.504375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:00:06.521615 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:00:06.535564 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jan 30 13:00:06.565201 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:00:06.577594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:00:06.621837 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:00:06.630814 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:00:06.645799 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:00:06.648409 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:00:06.649339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:00:06.652333 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:00:06.660538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:00:06.672037 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:00:06.680396 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 13:00:06.691770 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:00:06.691877 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:00:06.691889 kernel: GPT:9289727 != 19775487
Jan 30 13:00:06.691906 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:00:06.691917 kernel: GPT:9289727 != 19775487
Jan 30 13:00:06.691927 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:00:06.691950 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:00:06.688594 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:00:06.688747 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:00:06.692881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:00:06.698412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:00:06.698588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:00:06.700415 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:00:06.712712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:00:06.722458 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507)
Jan 30 13:00:06.722480 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (509)
Jan 30 13:00:06.728137 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:00:06.730246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:00:06.735110 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:00:06.745305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:00:06.749177 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:00:06.750195 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:00:06.759567 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:00:06.761214 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:00:06.767152 disk-uuid[552]: Primary Header is updated.
Jan 30 13:00:06.767152 disk-uuid[552]: Secondary Entries is updated.
Jan 30 13:00:06.767152 disk-uuid[552]: Secondary Header is updated.
Jan 30 13:00:06.770394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:00:06.785552 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:00:07.793402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:00:07.793803 disk-uuid[553]: The operation has completed successfully.
Jan 30 13:00:07.820579 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:00:07.820680 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:00:07.835594 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:00:07.838538 sh[577]: Success
Jan 30 13:00:07.848390 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:00:07.905032 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:00:07.906791 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:00:07.908432 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:00:07.924493 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 13:00:07.924552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:00:07.926679 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:00:07.926719 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:00:07.926731 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:00:07.932724 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:00:07.933955 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:00:07.939535 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:00:07.940968 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:00:07.952876 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 13:00:07.952934 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:00:07.952945 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:00:07.956497 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:00:07.964733 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:00:07.966442 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 13:00:07.976756 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:00:07.984589 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:00:08.051416 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:00:08.061585 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:00:08.099707 systemd-networkd[760]: lo: Link UP
Jan 30 13:00:08.099716 systemd-networkd[760]: lo: Gained carrier
Jan 30 13:00:08.100521 systemd-networkd[760]: Enumeration completed
Jan 30 13:00:08.100660 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:00:08.101163 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:00:08.101175 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:00:08.102162 systemd-networkd[760]: eth0: Link UP
Jan 30 13:00:08.102174 systemd-networkd[760]: eth0: Gained carrier
Jan 30 13:00:08.102183 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:00:08.102777 systemd[1]: Reached target network.target - Network.
Jan 30 13:00:08.125295 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:00:08.132984 ignition[678]: Ignition 2.19.0
Jan 30 13:00:08.132994 ignition[678]: Stage: fetch-offline
Jan 30 13:00:08.133039 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:00:08.133050 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:00:08.133209 ignition[678]: parsed url from cmdline: ""
Jan 30 13:00:08.133212 ignition[678]: no config URL provided
Jan 30 13:00:08.133217 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:00:08.133223 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:00:08.133251 ignition[678]: op(1): [started] loading QEMU firmware config module
Jan 30 13:00:08.133261 ignition[678]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:00:08.147279 ignition[678]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:00:08.147354 ignition[678]: QEMU firmware config was not found. Ignoring...
Jan 30 13:00:08.192969 ignition[678]: parsing config with SHA512: cad73d12bc9e93ed2816e2a368192230e99c523f94eb2f0db11489e679dc73ed0ddea3a837f8f71f00b5431487ffadb855c6e2e97d74a01edd9a2c635abc7158
Jan 30 13:00:08.197962 unknown[678]: fetched base config from "system"
Jan 30 13:00:08.197985 unknown[678]: fetched user config from "qemu"
Jan 30 13:00:08.198527 ignition[678]: fetch-offline: fetch-offline passed
Jan 30 13:00:08.200676 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:00:08.198614 ignition[678]: Ignition finished successfully
Jan 30 13:00:08.202092 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:00:08.215569 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:00:08.226057 ignition[773]: Ignition 2.19.0
Jan 30 13:00:08.226068 ignition[773]: Stage: kargs
Jan 30 13:00:08.226254 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:00:08.226263 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:00:08.227128 ignition[773]: kargs: kargs passed
Jan 30 13:00:08.227184 ignition[773]: Ignition finished successfully
Jan 30 13:00:08.229273 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:00:08.231400 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:00:08.245415 ignition[782]: Ignition 2.19.0
Jan 30 13:00:08.245427 ignition[782]: Stage: disks
Jan 30 13:00:08.245605 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:00:08.245615 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:00:08.246507 ignition[782]: disks: disks passed
Jan 30 13:00:08.248027 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:00:08.246557 ignition[782]: Ignition finished successfully
Jan 30 13:00:08.249463 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:00:08.250882 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:00:08.252345 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:00:08.253899 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:00:08.255484 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:00:08.270613 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:00:08.280500 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:00:08.284192 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:00:08.286529 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:00:08.333386 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 13:00:08.333513 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:00:08.334675 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:00:08.351490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:00:08.353214 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:00:08.354420 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:00:08.354472 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:00:08.354512 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:00:08.366608 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Jan 30 13:00:08.366631 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 13:00:08.366642 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:00:08.362707 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:00:08.369383 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:00:08.365627 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:00:08.372596 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:00:08.373465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:00:08.408465 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:00:08.412770 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:00:08.416377 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:00:08.420552 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:00:08.507593 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:00:08.528513 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:00:08.530015 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:00:08.535381 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 13:00:08.551015 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:00:08.554636 ignition[915]: INFO : Ignition 2.19.0
Jan 30 13:00:08.554636 ignition[915]: INFO : Stage: mount
Jan 30 13:00:08.556065 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:00:08.556065 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:00:08.556065 ignition[915]: INFO : mount: mount passed
Jan 30 13:00:08.556065 ignition[915]: INFO : Ignition finished successfully
Jan 30 13:00:08.557852 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:00:08.568535 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:00:08.923571 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:00:08.935586 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:00:08.943407 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Jan 30 13:00:08.945623 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 13:00:08.945658 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:00:08.945670 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:00:08.947395 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:00:08.948965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:00:08.972409 ignition[948]: INFO : Ignition 2.19.0
Jan 30 13:00:08.972409 ignition[948]: INFO : Stage: files
Jan 30 13:00:08.973796 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:00:08.973796 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:00:08.973796 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:00:08.979106 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:00:08.979106 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:00:08.979106 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:00:08.979106 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:00:08.983672 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:00:08.983672 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:00:08.983672 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 13:00:08.979258 unknown[948]: wrote ssh authorized keys file for user: core
Jan 30 13:00:09.030474 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:00:09.221994 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:00:09.221994 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:00:09.225229 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 13:00:09.417729 systemd-networkd[760]: eth0: Gained IPv6LL
Jan 30 13:00:09.474789 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 13:00:09.714818 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:00:09.714818 ignition[948]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 13:00:09.717639 ignition[948]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:00:09.717639 ignition[948]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:00:09.717639 ignition[948]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 13:00:09.717639 ignition[948]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 30 13:00:09.717639 ignition[948]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:00:09.717639 ignition[948]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:00:09.717639 ignition[948]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 30 13:00:09.717639 ignition[948]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:00:09.754259 ignition[948]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:00:09.758722 ignition[948]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:00:09.760593 ignition[948]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:00:09.760593 ignition[948]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:00:09.760593 ignition[948]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:00:09.760593 ignition[948]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:00:09.760593 ignition[948]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:00:09.760593 ignition[948]: INFO : files: files passed
Jan 30 13:00:09.760593 ignition[948]: INFO : Ignition finished successfully
Jan 30 13:00:09.762655 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:00:09.774599 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:00:09.776730 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:00:09.779354 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:00:09.779504 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:00:09.786519 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:00:09.790073 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:00:09.790073 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:00:09.792675 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:00:09.793399 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:00:09.795584 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:00:09.807610 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:00:09.832902 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:00:09.833049 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:00:09.834934 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:00:09.836212 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:00:09.837760 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:00:09.838748 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:00:09.860513 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:00:09.878598 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:00:09.887671 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:00:09.888766 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:00:09.890474 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:00:09.892047 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:00:09.892193 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:00:09.894271 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:00:09.895980 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:00:09.897393 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:00:09.898844 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:00:09.900415 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:00:09.901982 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:00:09.903444 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:00:09.905122 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:00:09.906799 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:00:09.908263 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:00:09.909655 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:00:09.909797 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:00:09.911818 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:00:09.913486 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:00:09.915234 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:00:09.915452 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:00:09.917112 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:00:09.917254 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:00:09.919723 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:00:09.919844 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:00:09.921472 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:00:09.922798 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:00:09.927459 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:00:09.929540 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:00:09.930396 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:00:09.931613 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:00:09.931709 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:00:09.932931 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:00:09.933009 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:00:09.934244 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:00:09.934358 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:00:09.935669 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:00:09.935767 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:00:09.946598 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:00:09.948031 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:00:09.948716 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:00:09.948841 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:00:09.950429 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:00:09.950526 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:00:09.955503 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:00:09.957401 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:00:09.959967 ignition[1003]: INFO : Ignition 2.19.0 Jan 30 13:00:09.959967 ignition[1003]: INFO : Stage: umount Jan 30 13:00:09.963541 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:00:09.963541 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:00:09.963541 ignition[1003]: INFO : umount: umount passed Jan 30 13:00:09.963541 ignition[1003]: INFO : Ignition finished successfully Jan 30 13:00:09.963047 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:00:09.963167 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:00:09.964412 systemd[1]: Stopped target network.target - Network. Jan 30 13:00:09.967234 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:00:09.967306 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 30 13:00:09.969108 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:00:09.969157 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:00:09.970389 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:00:09.970429 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:00:09.971987 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:00:09.972028 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:00:09.973658 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:00:09.975281 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:00:09.977605 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:00:09.985441 systemd-networkd[760]: eth0: DHCPv6 lease lost Jan 30 13:00:09.986616 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:00:09.986732 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:00:09.989389 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:00:09.989471 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:00:10.010512 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:00:10.011309 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:00:10.011388 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:00:10.013319 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:00:10.015025 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:00:10.015129 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:00:10.021842 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:00:10.021910 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:00:10.025905 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:00:10.025980 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:00:10.027691 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:00:10.027740 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:00:10.030442 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:00:10.030575 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:00:10.035610 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:00:10.035772 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:00:10.040606 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:00:10.040695 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:00:10.042904 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:00:10.042983 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:00:10.044449 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:00:10.044478 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:00:10.045470 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:00:10.045524 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 13:00:10.047686 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:00:10.047731 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:00:10.050166 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:00:10.050218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:00:10.052869 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:00:10.052912 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:00:10.070610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:00:10.071575 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:00:10.071644 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:00:10.073482 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:00:10.073523 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:00:10.075157 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:00:10.075209 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:00:10.077191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:00:10.077254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:00:10.080226 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:00:10.080333 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:00:10.082081 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:00:10.084506 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:00:10.100549 systemd[1]: Switching root. Jan 30 13:00:10.128599 systemd-journald[238]: Journal stopped Jan 30 13:00:10.946311 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 30 13:00:10.946393 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:00:10.946408 kernel: SELinux: policy capability open_perms=1 Jan 30 13:00:10.946419 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:00:10.946434 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:00:10.946445 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:00:10.946455 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:00:10.946469 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:00:10.946480 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:00:10.946490 kernel: audit: type=1403 audit(1738242010.267:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:00:10.946502 systemd[1]: Successfully loaded SELinux policy in 32.581ms. Jan 30 13:00:10.946524 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.038ms. Jan 30 13:00:10.946537 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:00:10.946549 systemd[1]: Detected virtualization kvm. Jan 30 13:00:10.946560 systemd[1]: Detected architecture arm64. 
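From "Switching root." onward the records come from the post-switch-root system manager writing to the restarted journal. Both halves of the boot share one boot ID, so the stream above can be pulled back later with standard journalctl filters:

    journalctl -b 0 -o short-precise   # current boot, microsecond timestamps as above
    journalctl -b 0 _PID=1             # only the systemd[1] state transitions
    journalctl -b 0 -t ignition        # only the ignition[...] records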
Jan 30 13:00:10.946571 systemd[1]: Detected first boot. Jan 30 13:00:10.946583 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:00:10.946594 zram_generator::config[1048]: No configuration found. Jan 30 13:00:10.946606 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:00:10.946619 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:00:10.946633 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:00:10.946643 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:00:10.946655 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:00:10.946666 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:00:10.946679 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:00:10.946690 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:00:10.946702 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:00:10.946713 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:00:10.946724 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:00:10.946735 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:00:10.946746 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:00:10.946758 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:00:10.946769 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:00:10.946782 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:00:10.946793 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:00:10.946807 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:00:10.946818 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 13:00:10.946829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:00:10.946840 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:00:10.946852 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:00:10.946863 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:00:10.946876 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:00:10.946888 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:00:10.946899 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:00:10.946910 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:00:10.946921 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:00:10.946932 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:00:10.946945 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:00:10.946956 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:00:10.946967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
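"Populated /etc with preset unit settings." is systemd applying preset files on first boot. The enable/disable operations Ignition logged earlier (op(f), op(11)) have exactly the shape of a preset file; Ignition conventionally writes one under /etc/systemd/system-preset/, though the filename below is an assumption:

    cat /etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service
    disable coreos-metadata.service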
Jan 30 13:00:10.946979 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:00:10.946990 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:00:10.947001 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:00:10.947012 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:00:10.947023 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:00:10.947033 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:00:10.947044 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:00:10.947054 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:00:10.947067 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:00:10.947079 systemd[1]: Reached target machines.target - Containers. Jan 30 13:00:10.947089 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:00:10.947100 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:00:10.947112 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:00:10.947123 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:00:10.947134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:00:10.947145 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:00:10.947156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:00:10.947168 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:00:10.947186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:00:10.947200 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:00:10.947211 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:00:10.947222 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:00:10.947233 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:00:10.947243 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:00:10.947254 kernel: loop: module loaded Jan 30 13:00:10.947266 kernel: fuse: init (API version 7.39) Jan 30 13:00:10.947276 kernel: ACPI: bus type drm_connector registered Jan 30 13:00:10.947287 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:00:10.947299 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:00:10.947310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:00:10.947321 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:00:10.947333 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:00:10.947343 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:00:10.947356 systemd[1]: Stopped verity-setup.service. Jan 30 13:00:10.947431 systemd-journald[1115]: Collecting audit messages is disabled. 
Jan 30 13:00:10.947460 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:00:10.947472 systemd-journald[1115]: Journal started Jan 30 13:00:10.947494 systemd-journald[1115]: Runtime Journal (/run/log/journal/5b19f913b4704ad987a11d1c6eb054a7) is 5.9M, max 47.3M, 41.4M free. Jan 30 13:00:10.714006 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:00:10.742131 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:00:10.742595 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:00:10.950754 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:00:10.951498 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:00:10.952706 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:00:10.953848 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:00:10.955082 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:00:10.956283 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:00:10.958241 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:00:10.961423 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:00:10.962875 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:00:10.963044 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:00:10.964651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:00:10.964831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:00:10.967876 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:00:10.968198 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:00:10.969808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:00:10.970090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:00:10.971953 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:00:10.972314 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:00:10.973858 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:00:10.974139 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:00:10.975644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:00:10.977221 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:00:10.979089 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:00:10.993974 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:00:11.002553 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:00:11.005032 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:00:11.006118 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:00:11.006173 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:00:11.008514 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:00:11.011148 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
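The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units finished above are instances of a single template unit that just runs modprobe on its instance name, which is why the kernel's "loop: module loaded" and "fuse: init (API version 7.39)" lines appear interleaved with them. The same effect by hand:

    systemctl cat modprobe@.service   # the template behind each instance
    modprobe fuse                     # equivalent to starting modprobe@fuse.service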
Jan 30 13:00:11.013775 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:00:11.014943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:00:11.017798 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:00:11.023640 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:00:11.027523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:00:11.029753 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:00:11.031027 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:00:11.034156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:00:11.035610 systemd-journald[1115]: Time spent on flushing to /var/log/journal/5b19f913b4704ad987a11d1c6eb054a7 is 19.961ms for 855 entries. Jan 30 13:00:11.035610 systemd-journald[1115]: System Journal (/var/log/journal/5b19f913b4704ad987a11d1c6eb054a7) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:00:11.077503 systemd-journald[1115]: Received client request to flush runtime journal. Jan 30 13:00:11.077598 kernel: loop0: detected capacity change from 0 to 194096 Jan 30 13:00:11.043054 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:00:11.048700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:00:11.054668 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:00:11.056132 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:00:11.057780 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:00:11.061313 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:00:11.063110 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:00:11.071259 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:00:11.085060 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:00:11.089771 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:00:11.091636 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:00:11.095505 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:00:11.093266 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:00:11.104684 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Jan 30 13:00:11.104706 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Jan 30 13:00:11.110794 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:00:11.117167 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:00:11.125129 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:00:11.157219 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
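The flush above moves the volatile journal from /run/log/journal to persistent /var/log/journal; the quoted caps (47.3M runtime, 195.6M system) are derived from filesystem size. They can be pinned explicitly with a drop-in; the values below are illustrative, not this machine's settings:

    mkdir -p /etc/systemd/journald.conf.d
    cat >/etc/systemd/journald.conf.d/size.conf <<'EOF'
    [Journal]
    SystemMaxUse=200M
    RuntimeMaxUse=48M
    EOF
    systemctl restart systemd-journald
    journalctl --disk-usage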
Jan 30 13:00:11.158039 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:00:11.163406 kernel: loop1: detected capacity change from 0 to 114328 Jan 30 13:00:11.177509 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:00:11.187905 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:00:11.196420 kernel: loop2: detected capacity change from 0 to 114432 Jan 30 13:00:11.204207 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jan 30 13:00:11.204229 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jan 30 13:00:11.210217 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:00:11.228403 kernel: loop3: detected capacity change from 0 to 194096 Jan 30 13:00:11.236397 kernel: loop4: detected capacity change from 0 to 114328 Jan 30 13:00:11.242450 kernel: loop5: detected capacity change from 0 to 114432 Jan 30 13:00:11.247868 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:00:11.248362 (sd-merge)[1188]: Merged extensions into '/usr'. Jan 30 13:00:11.252151 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:00:11.252170 systemd[1]: Reloading... Jan 30 13:00:11.319408 zram_generator::config[1213]: No configuration found. Jan 30 13:00:11.466634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:00:11.493439 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:00:11.504925 systemd[1]: Reloading finished in 252 ms. Jan 30 13:00:11.543914 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:00:11.545460 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:00:11.566657 systemd[1]: Starting ensure-sysext.service... Jan 30 13:00:11.568840 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:00:11.596096 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:00:11.596427 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:00:11.597130 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:00:11.597379 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 30 13:00:11.597435 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 30 13:00:11.600536 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:00:11.600548 systemd-tmpfiles[1249]: Skipping /boot Jan 30 13:00:11.601408 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:00:11.601423 systemd[1]: Reloading... Jan 30 13:00:11.608731 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:00:11.608747 systemd-tmpfiles[1249]: Skipping /boot Jan 30 13:00:11.669395 zram_generator::config[1279]: No configuration found. 
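The (sd-merge) records are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes images onto /usr, followed by the daemon reload so the merged unit files become visible. The merge can be inspected or redone on the running system:

    systemd-sysext status    # which extensions are merged, and into which hierarchy
    systemd-sysext refresh   # re-merge after adding or removing a .raw image
    ls /etc/extensions       # where Ignition linked the kubernetes image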
Jan 30 13:00:11.783113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:00:11.822564 systemd[1]: Reloading finished in 220 ms. Jan 30 13:00:11.844656 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:00:11.857356 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:00:11.866721 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:00:11.870173 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:00:11.873001 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:00:11.877912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:00:11.903693 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:00:11.910218 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:00:11.917034 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:00:11.919765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:00:11.934894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:00:11.942173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:00:11.945897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:00:11.947133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:00:11.947656 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Jan 30 13:00:11.951791 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:00:11.958689 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:00:11.960961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:00:11.961211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:00:11.963029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:00:11.963197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:00:11.964982 augenrules[1339]: No rules Jan 30 13:00:11.965193 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:00:11.965383 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:00:11.967241 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:00:11.971606 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:00:11.981317 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:00:11.992659 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:00:11.999561 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:00:12.007728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:00:12.010078 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
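systemd-udevd's "default interface naming scheme 'v255'" means predictable interface names follow the systemd 255 rules. The scheme can be pinned across future upgrades with the kernel argument net.naming-scheme=v255, and the candidate names for a device can be previewed with:

    udevadm test-builtin net_id /sys/class/net/eth0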
Jan 30 13:00:12.012717 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:00:12.015500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:00:12.016671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:00:12.016883 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:00:12.020174 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:00:12.021930 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:00:12.023446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:00:12.023643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:00:12.027956 systemd[1]: Finished ensure-sysext.service. Jan 30 13:00:12.046684 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:00:12.053274 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:00:12.056916 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:00:12.057094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:00:12.058611 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:00:12.058771 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:00:12.069655 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:00:12.073445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:00:12.073637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:00:12.075023 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:00:12.091571 systemd-resolved[1316]: Positive Trust Anchors: Jan 30 13:00:12.091597 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:00:12.091631 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:00:12.096073 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 13:00:12.100435 systemd-resolved[1316]: Defaulting to hostname 'linux'. Jan 30 13:00:12.107090 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:00:12.108294 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
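The positive trust anchor logged above is the standard DNSSEC root key (KSK-2017) plus the usual negative anchors for private and special-use zones, and "Defaulting to hostname 'linux'" reflects that no hostname was configured yet. Once the system is up, the resolver state is visible with:

    resolvectl status eth0           # per-link DNS servers and DNSSEC setting
    resolvectl query flatcar.org     # hypothetical lookup through the stub resolver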
Jan 30 13:00:12.135969 systemd-networkd[1376]: lo: Link UP Jan 30 13:00:12.135981 systemd-networkd[1376]: lo: Gained carrier Jan 30 13:00:12.138632 systemd-networkd[1376]: Enumeration completed Jan 30 13:00:12.138804 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:00:12.141045 systemd[1]: Reached target network.target - Network. Jan 30 13:00:12.141383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373) Jan 30 13:00:12.142509 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:00:12.142521 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:00:12.151160 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:00:12.151248 systemd-networkd[1376]: eth0: Link UP Jan 30 13:00:12.151252 systemd-networkd[1376]: eth0: Gained carrier Jan 30 13:00:12.151262 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:00:12.154598 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:00:12.170496 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:00:12.181805 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:00:12.183319 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:00:12.183590 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:00:12.184288 systemd-timesyncd[1381]: Initial clock synchronization to Thu 2025-01-30 13:00:12.319645 UTC. Jan 30 13:00:12.191339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:00:12.200741 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:00:12.207962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:00:12.216427 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:00:12.219977 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:00:12.221604 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:00:12.246929 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:00:12.269055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:00:12.277217 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:00:12.278662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:00:12.279607 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:00:12.280539 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:00:12.281541 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:00:12.282840 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
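eth0 matched /usr/lib/systemd/network/zz-default.network, Flatcar's lowest-priority catch-all unit, hence the repeated "potentially unpredictable interface name" warnings. The parts of such a unit relevant to the DHCPv4 lease above look roughly like this sketch (not the file's verbatim contents):

    # Sketch of the matching .network unit:
    #   [Match]
    #   Name=*
    #   [Network]
    #   DHCP=yes
    networkctl status eth0   # shows the 10.0.0.79/16 address and 10.0.0.1 gateway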
Jan 30 13:00:12.283829 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:00:12.284801 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:00:12.285708 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:00:12.285751 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:00:12.286495 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:00:12.288211 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:00:12.290893 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:00:12.307664 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:00:12.310076 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:00:12.311612 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:00:12.312654 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:00:12.313384 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:00:12.314110 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:00:12.314146 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:00:12.315364 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:00:12.317542 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:00:12.319735 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:00:12.321691 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:00:12.324646 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:00:12.325770 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:00:12.329704 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:00:12.333651 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:00:12.336683 jq[1414]: false Jan 30 13:00:12.338744 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:00:12.351662 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:00:12.356230 extend-filesystems[1415]: Found loop3 Jan 30 13:00:12.356230 extend-filesystems[1415]: Found loop4 Jan 30 13:00:12.356230 extend-filesystems[1415]: Found loop5 Jan 30 13:00:12.356230 extend-filesystems[1415]: Found vda Jan 30 13:00:12.356230 extend-filesystems[1415]: Found vda1 Jan 30 13:00:12.356230 extend-filesystems[1415]: Found vda2 Jan 30 13:00:12.356230 extend-filesystems[1415]: Found vda3 Jan 30 13:00:12.356230 extend-filesystems[1415]: Found usr Jan 30 13:00:12.364342 extend-filesystems[1415]: Found vda4 Jan 30 13:00:12.364342 extend-filesystems[1415]: Found vda6 Jan 30 13:00:12.364342 extend-filesystems[1415]: Found vda7 Jan 30 13:00:12.364342 extend-filesystems[1415]: Found vda9 Jan 30 13:00:12.364342 extend-filesystems[1415]: Checking size of /dev/vda9 Jan 30 13:00:12.358716 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 30 13:00:12.366747 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:00:12.367927 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:00:12.369014 dbus-daemon[1413]: [system] SELinux support is enabled Jan 30 13:00:12.377667 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:00:12.380545 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:00:12.382286 extend-filesystems[1415]: Resized partition /dev/vda9 Jan 30 13:00:12.385132 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:00:12.388204 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:00:12.393756 extend-filesystems[1436]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:00:12.392901 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:00:12.393145 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:00:12.393553 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:00:12.393906 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:00:12.402198 jq[1435]: true Jan 30 13:00:12.402498 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:00:12.399460 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:00:12.399700 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:00:12.425206 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:00:12.425253 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:00:12.429664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:00:12.429695 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:00:12.430349 jq[1440]: true Jan 30 13:00:12.430602 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1367) Jan 30 13:00:12.434942 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:00:12.437319 systemd-logind[1427]: New seat seat0. Jan 30 13:00:12.438325 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:00:12.440299 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:00:12.448285 tar[1439]: linux-arm64/helm Jan 30 13:00:12.458161 update_engine[1432]: I20250130 13:00:12.457777 1432 main.cc:92] Flatcar Update Engine starting Jan 30 13:00:12.464244 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:00:12.464457 update_engine[1432]: I20250130 13:00:12.464356 1432 update_check_scheduler.cc:74] Next update check in 2m25s Jan 30 13:00:12.464475 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:00:12.476733 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
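extend-filesystems grows the root ext4 on /dev/vda9 online from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB) to fill the enlarged partition. The manual equivalent is a single call, valid while the filesystem is mounted:

    resize2fs /dev/vda9   # online grow; cf. "resized filesystem to 1864699" below
    df -h /               # confirm the new size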
Jan 30 13:00:12.521029 extend-filesystems[1436]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:00:12.521029 extend-filesystems[1436]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:00:12.521029 extend-filesystems[1436]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:00:12.530260 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:00:12.533459 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Jan 30 13:00:12.540464 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:00:12.577104 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:00:12.580362 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:00:12.582362 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:00:12.587692 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:00:12.811386 containerd[1449]: time="2025-01-30T13:00:12.809657480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:00:12.845990 containerd[1449]: time="2025-01-30T13:00:12.845812920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:00:12.847794 containerd[1449]: time="2025-01-30T13:00:12.847728960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:00:12.847794 containerd[1449]: time="2025-01-30T13:00:12.847788240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:00:12.847940 containerd[1449]: time="2025-01-30T13:00:12.847808120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:00:12.848069 containerd[1449]: time="2025-01-30T13:00:12.848043360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:00:12.848123 containerd[1449]: time="2025-01-30T13:00:12.848071200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:00:12.848261 containerd[1449]: time="2025-01-30T13:00:12.848139080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:00:12.848261 containerd[1449]: time="2025-01-30T13:00:12.848156360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:00:12.848453 containerd[1449]: time="2025-01-30T13:00:12.848424160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:00:12.848490 containerd[1449]: time="2025-01-30T13:00:12.848452600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:00:12.848490 containerd[1449]: time="2025-01-30T13:00:12.848471960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:00:12.848490 containerd[1449]: time="2025-01-30T13:00:12.848482400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:00:12.848596 containerd[1449]: time="2025-01-30T13:00:12.848578640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:00:12.848857 containerd[1449]: time="2025-01-30T13:00:12.848831880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:00:12.849001 containerd[1449]: time="2025-01-30T13:00:12.848978440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:00:12.849028 containerd[1449]: time="2025-01-30T13:00:12.849000320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:00:12.849336 containerd[1449]: time="2025-01-30T13:00:12.849089800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:00:12.849336 containerd[1449]: time="2025-01-30T13:00:12.849138440Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:00:12.852677 tar[1439]: linux-arm64/LICENSE Jan 30 13:00:12.852677 tar[1439]: linux-arm64/README.md Jan 30 13:00:12.858189 containerd[1449]: time="2025-01-30T13:00:12.858132960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:00:12.858331 containerd[1449]: time="2025-01-30T13:00:12.858234880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:00:12.858331 containerd[1449]: time="2025-01-30T13:00:12.858254040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:00:12.858331 containerd[1449]: time="2025-01-30T13:00:12.858325560Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:00:12.858498 containerd[1449]: time="2025-01-30T13:00:12.858352800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:00:12.858602 containerd[1449]: time="2025-01-30T13:00:12.858579240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:00:12.859052 containerd[1449]: time="2025-01-30T13:00:12.859029800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:00:12.860931 containerd[1449]: time="2025-01-30T13:00:12.860868960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:00:12.860931 containerd[1449]: time="2025-01-30T13:00:12.860935680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 30 13:00:12.861073 containerd[1449]: time="2025-01-30T13:00:12.860952440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:00:12.861073 containerd[1449]: time="2025-01-30T13:00:12.860969400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:00:12.861073 containerd[1449]: time="2025-01-30T13:00:12.860985720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:00:12.861073 containerd[1449]: time="2025-01-30T13:00:12.861000040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:00:12.861073 containerd[1449]: time="2025-01-30T13:00:12.861018680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:00:12.861073 containerd[1449]: time="2025-01-30T13:00:12.861036240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:00:12.861073 containerd[1449]: time="2025-01-30T13:00:12.861049600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:00:12.861073 containerd[1449]: time="2025-01-30T13:00:12.861063200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:00:12.861073 containerd[1449]: time="2025-01-30T13:00:12.861077400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861109200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861127840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861141720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861156480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861169720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861199720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861213240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861226960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861240680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861258160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 30 13:00:12.861267 containerd[1449]: time="2025-01-30T13:00:12.861270480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861518 containerd[1449]: time="2025-01-30T13:00:12.861284600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861518 containerd[1449]: time="2025-01-30T13:00:12.861298440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861518 containerd[1449]: time="2025-01-30T13:00:12.861315360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:00:12.861518 containerd[1449]: time="2025-01-30T13:00:12.861343760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861518 containerd[1449]: time="2025-01-30T13:00:12.861357160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.861518 containerd[1449]: time="2025-01-30T13:00:12.861381720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:00:12.861626 containerd[1449]: time="2025-01-30T13:00:12.861615400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:00:12.862083 containerd[1449]: time="2025-01-30T13:00:12.861637240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:00:12.862083 containerd[1449]: time="2025-01-30T13:00:12.861868040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:00:12.862083 containerd[1449]: time="2025-01-30T13:00:12.861887720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:00:12.862083 containerd[1449]: time="2025-01-30T13:00:12.861898600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:00:12.862083 containerd[1449]: time="2025-01-30T13:00:12.861918840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:00:12.862083 containerd[1449]: time="2025-01-30T13:00:12.861930680Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:00:12.862083 containerd[1449]: time="2025-01-30T13:00:12.861943040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:00:12.862472 containerd[1449]: time="2025-01-30T13:00:12.862380400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:00:12.862472 containerd[1449]: time="2025-01-30T13:00:12.862455160Z" level=info msg="Connect containerd service" Jan 30 13:00:12.862626 containerd[1449]: time="2025-01-30T13:00:12.862495040Z" level=info msg="using legacy CRI server" Jan 30 13:00:12.862626 containerd[1449]: time="2025-01-30T13:00:12.862505480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:00:12.862626 containerd[1449]: time="2025-01-30T13:00:12.862603600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:00:12.863820 containerd[1449]: time="2025-01-30T13:00:12.863780760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:00:12.864062 
containerd[1449]: time="2025-01-30T13:00:12.864012680Z" level=info msg="Start subscribing containerd event" Jan 30 13:00:12.864122 containerd[1449]: time="2025-01-30T13:00:12.864094800Z" level=info msg="Start recovering state" Jan 30 13:00:12.864214 containerd[1449]: time="2025-01-30T13:00:12.864196600Z" level=info msg="Start event monitor" Jan 30 13:00:12.864243 containerd[1449]: time="2025-01-30T13:00:12.864217440Z" level=info msg="Start snapshots syncer" Jan 30 13:00:12.864243 containerd[1449]: time="2025-01-30T13:00:12.864228480Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:00:12.864243 containerd[1449]: time="2025-01-30T13:00:12.864237000Z" level=info msg="Start streaming server" Jan 30 13:00:12.865142 containerd[1449]: time="2025-01-30T13:00:12.865076400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:00:12.865289 containerd[1449]: time="2025-01-30T13:00:12.865198360Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:00:12.865289 containerd[1449]: time="2025-01-30T13:00:12.865261040Z" level=info msg="containerd successfully booted in 0.057082s" Jan 30 13:00:12.865682 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:00:12.869414 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:00:13.164821 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:00:13.187212 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:00:13.199711 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:00:13.206214 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:00:13.206455 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:00:13.209247 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:00:13.224498 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:00:13.227708 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:00:13.230237 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:00:13.231817 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:00:13.385898 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 30 13:00:13.388547 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:00:13.390628 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:00:13.402700 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:00:13.405741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:00:13.408398 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:00:13.427623 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:00:13.427865 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:00:13.429341 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:00:13.436125 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:00:13.958454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:00:13.959850 systemd[1]: Reached target multi-user.target - Multi-User System. 
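The "failed to load cni during init" error just above is expected at this point in boot: /etc/cni/net.d is empty until a network add-on installs a config, and the "cni network conf syncer" containerd starts right afterwards watches that directory and clears the error once a file appears. As a sketch only, assuming a hypothetical bridge network (the name, bridge device, and subnet below are illustrative, not taken from this host), a minimal conflist could be dropped in like this:

# Sketch: write a minimal CNI bridge config so the CRI plugin's
# conf syncer (watching /etc/cni/net.d, per the error above) can
# initialize pod networking. All values here are assumptions.
import json
import pathlib

conf = {
    "cniVersion": "0.4.0",
    "name": "example-net",               # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",        # illustrative pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conf, indent=2))
print(f"wrote {path}")

A .conflist with a plugins array is the multi-network-plugin format the CRI plugin reads, and with NetworkPluginMaxConfNum:1 (visible in the CRI config dump above) only the lexically first config file in the directory is used.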
Jan 30 13:00:13.965558 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:00:13.965703 systemd[1]: Startup finished in 608ms (kernel) + 4.527s (initrd) + 3.732s (userspace) = 8.868s. Jan 30 13:00:14.566054 kubelet[1526]: E0130 13:00:14.565941 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:00:14.568795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:00:14.568953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:00:18.267023 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:00:18.268771 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:60166.service - OpenSSH per-connection server daemon (10.0.0.1:60166). Jan 30 13:00:18.349916 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 60166 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:18.352404 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:18.364710 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:00:18.375734 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:00:18.377925 systemd-logind[1427]: New session 1 of user core. Jan 30 13:00:18.389458 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:00:18.401794 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:00:18.404820 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:00:18.503591 systemd[1544]: Queued start job for default target default.target. Jan 30 13:00:18.511540 systemd[1544]: Created slice app.slice - User Application Slice. Jan 30 13:00:18.511593 systemd[1544]: Reached target paths.target - Paths. Jan 30 13:00:18.511606 systemd[1544]: Reached target timers.target - Timers. Jan 30 13:00:18.513124 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:00:18.526819 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:00:18.526907 systemd[1544]: Reached target sockets.target - Sockets. Jan 30 13:00:18.526921 systemd[1544]: Reached target basic.target - Basic System. Jan 30 13:00:18.526967 systemd[1544]: Reached target default.target - Main User Target. Jan 30 13:00:18.526995 systemd[1544]: Startup finished in 115ms. Jan 30 13:00:18.527079 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:00:18.528843 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:00:18.629059 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:60178.service - OpenSSH per-connection server daemon (10.0.0.1:60178). Jan 30 13:00:18.663994 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 60178 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:18.666272 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:18.674498 systemd-logind[1427]: New session 2 of user core. Jan 30 13:00:18.685612 systemd[1]: Started session-2.scope - Session 2 of User core. 
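The kubelet exit above (status=1, missing /var/lib/kubelet/config.yaml) is the stock bootstrap failure: that file is normally written by kubeadm init or kubeadm join, so the unit fails and systemd keeps retrying until the node is provisioned. For illustration only, a minimal KubeletConfiguration that would satisfy the load might look like the sketch below; every value is an assumption, not something recovered from this machine:

# Sketch: materialize a bare-bones KubeletConfiguration at the path the
# unit is failing on. In a kubeadm flow this file is generated for you;
# the fields below are generic illustrative choices.
import pathlib

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # pairs with SystemdCgroup:true in the runc options above
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
authorization:
  mode: Webhook
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)

cgroupDriver: systemd is the counterpart of the SystemdCgroup:true runc option shown in containerd's CRI config dump earlier; the two sides must agree on the cgroup driver.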
Jan 30 13:00:18.742489 sshd[1555]: pam_unix(sshd:session): session closed for user core Jan 30 13:00:18.752574 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:60178.service: Deactivated successfully. Jan 30 13:00:18.757325 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:00:18.762653 systemd-logind[1427]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:00:18.772987 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:60192.service - OpenSSH per-connection server daemon (10.0.0.1:60192). Jan 30 13:00:18.774197 systemd-logind[1427]: Removed session 2. Jan 30 13:00:18.814859 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 60192 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:18.816501 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:18.821541 systemd-logind[1427]: New session 3 of user core. Jan 30 13:00:18.831641 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:00:18.884223 sshd[1562]: pam_unix(sshd:session): session closed for user core Jan 30 13:00:18.896927 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:60192.service: Deactivated successfully. Jan 30 13:00:18.901237 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:00:18.906277 systemd-logind[1427]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:00:18.926463 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:60208.service - OpenSSH per-connection server daemon (10.0.0.1:60208). Jan 30 13:00:18.927712 systemd-logind[1427]: Removed session 3. Jan 30 13:00:18.970228 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 60208 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:18.972388 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:18.977598 systemd-logind[1427]: New session 4 of user core. Jan 30 13:00:18.988649 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:00:19.047761 sshd[1569]: pam_unix(sshd:session): session closed for user core Jan 30 13:00:19.060367 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:60208.service: Deactivated successfully. Jan 30 13:00:19.063503 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:00:19.065339 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:00:19.072806 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:60222.service - OpenSSH per-connection server daemon (10.0.0.1:60222). Jan 30 13:00:19.073770 systemd-logind[1427]: Removed session 4. Jan 30 13:00:19.109618 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 60222 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:19.111283 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:19.116431 systemd-logind[1427]: New session 5 of user core. Jan 30 13:00:19.124602 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:00:19.208611 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:00:19.209445 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:00:19.223697 sudo[1579]: pam_unix(sudo:session): session closed for user root Jan 30 13:00:19.228298 sshd[1576]: pam_unix(sshd:session): session closed for user core Jan 30 13:00:19.239178 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:60222.service: Deactivated successfully. 
Jan 30 13:00:19.241471 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:00:19.242917 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:00:19.257814 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:60226.service - OpenSSH per-connection server daemon (10.0.0.1:60226). Jan 30 13:00:19.258763 systemd-logind[1427]: Removed session 5. Jan 30 13:00:19.292735 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 60226 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:19.294421 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:19.300034 systemd-logind[1427]: New session 6 of user core. Jan 30 13:00:19.311363 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:00:19.370813 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:00:19.371131 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:00:19.376124 sudo[1588]: pam_unix(sudo:session): session closed for user root Jan 30 13:00:19.387001 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:00:19.387323 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:00:19.412807 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:00:19.414928 auditctl[1591]: No rules Jan 30 13:00:19.416284 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:00:19.416713 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:00:19.420769 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:00:19.456279 augenrules[1609]: No rules Jan 30 13:00:19.458107 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:00:19.459564 sudo[1587]: pam_unix(sudo:session): session closed for user root Jan 30 13:00:19.462654 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 30 13:00:19.475195 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:60226.service: Deactivated successfully. Jan 30 13:00:19.478217 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:00:19.482941 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:00:19.493926 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:60234.service - OpenSSH per-connection server daemon (10.0.0.1:60234). Jan 30 13:00:19.495464 systemd-logind[1427]: Removed session 6. Jan 30 13:00:19.530305 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 60234 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:19.533396 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:19.539762 systemd-logind[1427]: New session 7 of user core. Jan 30 13:00:19.551651 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:00:19.613851 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:00:19.614151 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:00:20.227701 systemd[1]: Starting docker.service - Docker Application Container Engine... 
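The sudo/auditctl exchange above removes the two stock rule files and restarts audit-rules.service, after which augenrules correctly reports "No rules": it simply concatenates whatever *.rules fragments remain under /etc/audit/rules.d. A sketch of laying down a replacement rule set (the watched paths and key names are illustrative assumptions, not rules from this host):

# Sketch: drop an audit rule fragment where augenrules looks
# (/etc/audit/rules.d) and reload it the same way the session above did.
import pathlib
import subprocess

RULES = """\
-w /etc/passwd -p wa -k identity
-w /etc/ssh/sshd_config -p wa -k sshd-config
"""

pathlib.Path("/etc/audit/rules.d/10-example.rules").write_text(RULES)
# Mirrors the `systemctl restart audit-rules` issued via sudo above.
subprocess.run(["systemctl", "restart", "audit-rules"], check=True)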
Jan 30 13:00:20.227864 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:00:20.507995 dockerd[1638]: time="2025-01-30T13:00:20.507822079Z" level=info msg="Starting up" Jan 30 13:00:20.663154 dockerd[1638]: time="2025-01-30T13:00:20.663103114Z" level=info msg="Loading containers: start." Jan 30 13:00:20.778425 kernel: Initializing XFRM netlink socket Jan 30 13:00:20.854692 systemd-networkd[1376]: docker0: Link UP Jan 30 13:00:20.880187 dockerd[1638]: time="2025-01-30T13:00:20.880124769Z" level=info msg="Loading containers: done." Jan 30 13:00:20.903715 dockerd[1638]: time="2025-01-30T13:00:20.903644880Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:00:20.903879 dockerd[1638]: time="2025-01-30T13:00:20.903769711Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:00:20.903908 dockerd[1638]: time="2025-01-30T13:00:20.903891078Z" level=info msg="Daemon has completed initialization" Jan 30 13:00:20.953351 dockerd[1638]: time="2025-01-30T13:00:20.952830335Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:00:20.953747 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:00:21.831190 containerd[1449]: time="2025-01-30T13:00:21.831003290Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:00:22.610549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount279847230.mount: Deactivated successfully. 
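Once dockerd logs "API listen on /run/docker.sock", the engine's HTTP API is reachable over that Unix socket. A stdlib-only probe (a sketch; no Docker SDK is assumed) that asks the freshly started daemon for its version:

# Sketch: confirm the daemon answers on /run/docker.sock
# ("API listen on /run/docker.sock" above) using only the stdlib.
import socket

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/run/docker.sock")
sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")

chunks = []
while True:
    data = sock.recv(4096)
    if not data:          # HTTP/1.0: server closes when done
        break
    chunks.append(data)
sock.close()

# Status line, headers, then a JSON body; the body should echo the same
# engine version the daemon printed above (26.1.0).
print(b"".join(chunks).decode(errors="replace"))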
Jan 30 13:00:23.450503 containerd[1449]: time="2025-01-30T13:00:23.450353466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:23.451396 containerd[1449]: time="2025-01-30T13:00:23.451142711Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 30 13:00:23.452251 containerd[1449]: time="2025-01-30T13:00:23.452218195Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:23.455579 containerd[1449]: time="2025-01-30T13:00:23.455508974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:23.457242 containerd[1449]: time="2025-01-30T13:00:23.457195989Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.62614085s" Jan 30 13:00:23.457299 containerd[1449]: time="2025-01-30T13:00:23.457244767Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 13:00:23.478383 containerd[1449]: time="2025-01-30T13:00:23.478333521Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:00:24.757614 containerd[1449]: time="2025-01-30T13:00:24.757556230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:24.758423 containerd[1449]: time="2025-01-30T13:00:24.758394187Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 30 13:00:24.759145 containerd[1449]: time="2025-01-30T13:00:24.759117214Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:24.762473 containerd[1449]: time="2025-01-30T13:00:24.762436153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:24.763795 containerd[1449]: time="2025-01-30T13:00:24.763697967Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.285306875s" Jan 30 13:00:24.763795 containerd[1449]: time="2025-01-30T13:00:24.763742060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 13:00:24.784876 
containerd[1449]: time="2025-01-30T13:00:24.784806712Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:00:24.819215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:00:24.836618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:00:24.933692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:00:24.938739 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:00:24.997060 kubelet[1876]: E0130 13:00:24.997004 1876 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:00:25.000410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:00:25.000557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:00:25.811881 containerd[1449]: time="2025-01-30T13:00:25.811829834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:25.812705 containerd[1449]: time="2025-01-30T13:00:25.812674565Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 30 13:00:25.813796 containerd[1449]: time="2025-01-30T13:00:25.813738006Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:25.817196 containerd[1449]: time="2025-01-30T13:00:25.817143022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:25.819290 containerd[1449]: time="2025-01-30T13:00:25.818745348Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.033894789s" Jan 30 13:00:25.819290 containerd[1449]: time="2025-01-30T13:00:25.818785686Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 13:00:25.839230 containerd[1449]: time="2025-01-30T13:00:25.839178938Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:00:26.777589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867763802.mount: Deactivated successfully. 
Jan 30 13:00:27.205924 containerd[1449]: time="2025-01-30T13:00:27.205769046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:27.206525 containerd[1449]: time="2025-01-30T13:00:27.206489375Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 30 13:00:27.207519 containerd[1449]: time="2025-01-30T13:00:27.207479653Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:27.210353 containerd[1449]: time="2025-01-30T13:00:27.210278434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:27.211048 containerd[1449]: time="2025-01-30T13:00:27.210986452Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.371751535s" Jan 30 13:00:27.211048 containerd[1449]: time="2025-01-30T13:00:27.211033495Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 13:00:27.234098 containerd[1449]: time="2025-01-30T13:00:27.234057248Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:00:27.887625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545705267.mount: Deactivated successfully. 
Jan 30 13:00:28.698486 containerd[1449]: time="2025-01-30T13:00:28.698432544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:28.699401 containerd[1449]: time="2025-01-30T13:00:28.699043226Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 30 13:00:28.701384 containerd[1449]: time="2025-01-30T13:00:28.701305379Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:28.704628 containerd[1449]: time="2025-01-30T13:00:28.704569514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:28.706026 containerd[1449]: time="2025-01-30T13:00:28.705992701Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.471886369s" Jan 30 13:00:28.706071 containerd[1449]: time="2025-01-30T13:00:28.706032593Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 13:00:28.729383 containerd[1449]: time="2025-01-30T13:00:28.729320942Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:00:29.170126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555220317.mount: Deactivated successfully. 
Jan 30 13:00:29.180731 containerd[1449]: time="2025-01-30T13:00:29.180682210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:29.182809 containerd[1449]: time="2025-01-30T13:00:29.182757420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 30 13:00:29.183456 containerd[1449]: time="2025-01-30T13:00:29.183421595Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:29.185783 containerd[1449]: time="2025-01-30T13:00:29.185750756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:29.187465 containerd[1449]: time="2025-01-30T13:00:29.186970086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 457.607691ms" Jan 30 13:00:29.187465 containerd[1449]: time="2025-01-30T13:00:29.187009806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 13:00:29.207404 containerd[1449]: time="2025-01-30T13:00:29.207349198Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:00:29.746082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157980558.mount: Deactivated successfully. Jan 30 13:00:30.973216 containerd[1449]: time="2025-01-30T13:00:30.973167412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:30.974131 containerd[1449]: time="2025-01-30T13:00:30.973702192Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 30 13:00:30.975120 containerd[1449]: time="2025-01-30T13:00:30.975083101Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:30.979359 containerd[1449]: time="2025-01-30T13:00:30.979322758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:00:30.980270 containerd[1449]: time="2025-01-30T13:00:30.980207313Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.772678597s" Jan 30 13:00:30.980270 containerd[1449]: time="2025-01-30T13:00:30.980242816Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 13:00:35.250912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
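Each "Pulled image" line above carries the image size in bytes and the wall-clock pull time, so rough registry throughput can be read straight out of the journal. A small sketch (the regex is an assumption about the escaped-quote message shape seen in these entries):

# Sketch: extract image name, byte size, and duration from the
# containerd "Pulled image ..." lines above and report throughput.
import re
import sys

PULLED = re.compile(
    r'Pulled image \\"(?P<image>[^\\]+)\\".*?'
    r'size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

for line in sys.stdin:
    m = PULLED.search(line)
    if not m:
        continue
    seconds = float(m.group("dur")) / (1000.0 if m.group("unit") == "ms" else 1.0)
    mb = int(m.group("size")) / 1e6
    print(f'{m.group("image")}: {mb:.1f} MB in {seconds:.2f}s '
          f'({mb / seconds:.1f} MB/s)')

Fed something like `journalctl -u containerd --no-pager`, this would show, e.g., the etcd:3.5.12-0 pull above moving roughly 66 MB in about 1.8 s.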
Jan 30 13:00:35.261802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:00:35.377968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:00:35.380283 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:00:35.423038 kubelet[2097]: E0130 13:00:35.422978 2097 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:00:35.425678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:00:35.425813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:00:35.896687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:00:35.907654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:00:35.924985 systemd[1]: Reloading requested from client PID 2113 ('systemctl') (unit session-7.scope)... Jan 30 13:00:35.925002 systemd[1]: Reloading... Jan 30 13:00:36.022874 zram_generator::config[2152]: No configuration found. Jan 30 13:00:36.254782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:00:36.315020 systemd[1]: Reloading finished in 389 ms. Jan 30 13:00:36.371672 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:00:36.371785 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:00:36.372081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:00:36.381880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:00:36.479035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:00:36.482963 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:00:36.525925 kubelet[2197]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:00:36.525925 kubelet[2197]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:00:36.525925 kubelet[2197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
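The deprecation warnings just above have direct KubeletConfiguration equivalents: containerRuntimeEndpoint and volumePluginDir are first-class fields in kubelet.config.k8s.io/v1beta1 (--pod-infra-container-image has none; as the warning itself says, the image garbage collector takes the sandbox image from CRI). A sketch of the migration, where the endpoint matches containerd's socket from the CRI config dump earlier and the plugin directory matches the Flexvolume path the kubelet recreates below; treat both values as assumptions:

# Sketch: config-file equivalents of the two deprecated flags warned
# about above. Assumes /var/lib/kubelet/config.yaml already exists
# (e.g. from the kubeadm-style bootstrap illustrated earlier).
SNIPPET = """\
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""

with open("/var/lib/kubelet/config.yaml", "a") as f:
    f.write(SNIPPET)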
Jan 30 13:00:36.526923 kubelet[2197]: I0130 13:00:36.526871 2197 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:00:37.232385 kubelet[2197]: I0130 13:00:37.232344 2197 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:00:37.232385 kubelet[2197]: I0130 13:00:37.232386 2197 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:00:37.232607 kubelet[2197]: I0130 13:00:37.232590 2197 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:00:37.259486 kubelet[2197]: E0130 13:00:37.259446 2197 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.79:6443: connect: connection refused Jan 30 13:00:37.259662 kubelet[2197]: I0130 13:00:37.259641 2197 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:00:37.272946 kubelet[2197]: I0130 13:00:37.272915 2197 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:00:37.274125 kubelet[2197]: I0130 13:00:37.274065 2197 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:00:37.274410 kubelet[2197]: I0130 13:00:37.274204 2197 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:00:37.274584 kubelet[2197]: I0130 13:00:37.274467 2197 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:00:37.274584 kubelet[2197]: I0130 13:00:37.274477 2197 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:00:37.274745 kubelet[2197]: I0130 13:00:37.274721 2197 state_mem.go:36] "Initialized new in-memory state store" Jan 30 
13:00:37.275581 kubelet[2197]: I0130 13:00:37.275562 2197 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:00:37.275617 kubelet[2197]: I0130 13:00:37.275588 2197 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:00:37.276728 kubelet[2197]: I0130 13:00:37.275844 2197 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:00:37.276728 kubelet[2197]: I0130 13:00:37.275984 2197 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:00:37.276728 kubelet[2197]: W0130 13:00:37.276525 2197 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 30 13:00:37.276728 kubelet[2197]: E0130 13:00:37.276575 2197 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 30 13:00:37.277031 kubelet[2197]: W0130 13:00:37.276964 2197 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 30 13:00:37.277031 kubelet[2197]: E0130 13:00:37.277015 2197 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 30 13:00:37.277677 kubelet[2197]: I0130 13:00:37.277393 2197 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:00:37.277765 kubelet[2197]: I0130 13:00:37.277744 2197 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:00:37.277868 kubelet[2197]: W0130 13:00:37.277848 2197 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:00:37.278953 kubelet[2197]: I0130 13:00:37.278750 2197 server.go:1264] "Started kubelet" Jan 30 13:00:37.279430 kubelet[2197]: I0130 13:00:37.279391 2197 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:00:37.281845 kubelet[2197]: I0130 13:00:37.280460 2197 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:00:37.281845 kubelet[2197]: I0130 13:00:37.281501 2197 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:00:37.281845 kubelet[2197]: I0130 13:00:37.281754 2197 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:00:37.281845 kubelet[2197]: E0130 13:00:37.281622 2197 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f79ec5da656ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:00:37.278725806 +0000 UTC m=+0.792619635,LastTimestamp:2025-01-30 13:00:37.278725806 +0000 UTC m=+0.792619635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:00:37.283091 kubelet[2197]: I0130 13:00:37.282439 2197 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:00:37.283091 kubelet[2197]: E0130 13:00:37.282649 2197 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:00:37.283091 kubelet[2197]: I0130 13:00:37.282703 2197 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:00:37.283091 kubelet[2197]: I0130 13:00:37.283009 2197 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:00:37.283091 kubelet[2197]: I0130 13:00:37.283060 2197 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:00:37.284039 kubelet[2197]: W0130 13:00:37.283995 2197 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 30 13:00:37.284039 kubelet[2197]: E0130 13:00:37.284043 2197 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 30 13:00:37.284148 kubelet[2197]: E0130 13:00:37.284091 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms" Jan 30 13:00:37.284923 kubelet[2197]: I0130 13:00:37.284903 2197 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:00:37.285093 kubelet[2197]: I0130 13:00:37.285071 2197 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:00:37.285751 kubelet[2197]: E0130 13:00:37.285733 2197 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:00:37.286282 kubelet[2197]: I0130 13:00:37.286264 2197 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:00:37.299275 kubelet[2197]: I0130 13:00:37.299024 2197 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:00:37.299275 kubelet[2197]: I0130 13:00:37.299042 2197 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:00:37.299275 kubelet[2197]: I0130 13:00:37.299060 2197 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:00:37.300593 kubelet[2197]: I0130 13:00:37.299738 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:00:37.300825 kubelet[2197]: I0130 13:00:37.300788 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:00:37.300971 kubelet[2197]: I0130 13:00:37.300952 2197 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:00:37.301003 kubelet[2197]: I0130 13:00:37.300979 2197 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:00:37.301040 kubelet[2197]: E0130 13:00:37.301017 2197 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:00:37.301489 kubelet[2197]: W0130 13:00:37.301432 2197 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 30 13:00:37.301489 kubelet[2197]: E0130 13:00:37.301467 2197 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 30 13:00:37.384314 kubelet[2197]: I0130 13:00:37.384277 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:00:37.384673 kubelet[2197]: E0130 13:00:37.384632 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 30 13:00:37.401829 kubelet[2197]: E0130 13:00:37.401791 2197 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:00:37.485605 kubelet[2197]: E0130 13:00:37.485459 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" Jan 30 13:00:37.501232 kubelet[2197]: I0130 13:00:37.501186 2197 policy_none.go:49] "None policy: Start" Jan 30 13:00:37.501888 kubelet[2197]: I0130 13:00:37.501830 2197 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:00:37.501888 kubelet[2197]: I0130 13:00:37.501856 2197 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:00:37.512966 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
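Every client-go failure above is the same condition: nothing is listening on 10.0.0.79:6443 yet, because the kube-apiserver static pod the kubelet is about to admit has not started. The reflectors and the lease controller simply retry (note the escalating "interval" values). A sketch of the equivalent wait, polling the same endpoint:

# Sketch: poll 10.0.0.79:6443 (the address every "connection refused"
# message above points at) until something accepts TCP connections,
# i.e. until the kube-apiserver static pod is actually listening.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 120.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)  # same shape as the reflectors' retry loop
    return False

if wait_for_port("10.0.0.79", 6443):
    print("apiserver port open")
else:
    print("timed out waiting for 10.0.0.79:6443")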
Jan 30 13:00:37.523286 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:00:37.525994 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:00:37.533243 kubelet[2197]: I0130 13:00:37.533171 2197 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:00:37.533588 kubelet[2197]: I0130 13:00:37.533424 2197 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:00:37.533588 kubelet[2197]: I0130 13:00:37.533542 2197 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:00:37.536047 kubelet[2197]: E0130 13:00:37.536008 2197 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:00:37.588736 kubelet[2197]: I0130 13:00:37.588702 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:00:37.589246 kubelet[2197]: E0130 13:00:37.589217 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 30 13:00:37.603144 kubelet[2197]: I0130 13:00:37.602868 2197 topology_manager.go:215] "Topology Admit Handler" podUID="521b1e6bb836edd8edd29c1906cb143e" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:00:37.604260 kubelet[2197]: I0130 13:00:37.604131 2197 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:00:37.605149 kubelet[2197]: I0130 13:00:37.605059 2197 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:00:37.611263 systemd[1]: Created slice kubepods-burstable-pod521b1e6bb836edd8edd29c1906cb143e.slice - libcontainer container kubepods-burstable-pod521b1e6bb836edd8edd29c1906cb143e.slice. Jan 30 13:00:37.623043 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 30 13:00:37.626752 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. 
Jan 30 13:00:37.688713 kubelet[2197]: I0130 13:00:37.688670 2197 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/521b1e6bb836edd8edd29c1906cb143e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"521b1e6bb836edd8edd29c1906cb143e\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:00:37.688713 kubelet[2197]: I0130 13:00:37.688714 2197 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/521b1e6bb836edd8edd29c1906cb143e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"521b1e6bb836edd8edd29c1906cb143e\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:00:37.688713 kubelet[2197]: I0130 13:00:37.688734 2197 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:37.688713 kubelet[2197]: I0130 13:00:37.688751 2197 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:37.688713 kubelet[2197]: I0130 13:00:37.688776 2197 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:37.689010 kubelet[2197]: I0130 13:00:37.688792 2197 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/521b1e6bb836edd8edd29c1906cb143e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"521b1e6bb836edd8edd29c1906cb143e\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:00:37.689010 kubelet[2197]: I0130 13:00:37.688807 2197 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:37.689010 kubelet[2197]: I0130 13:00:37.688823 2197 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:37.689010 kubelet[2197]: I0130 13:00:37.688838 2197 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 30 13:00:37.887118 kubelet[2197]: E0130 13:00:37.886340 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms"
Jan 30 13:00:37.921464 kubelet[2197]: E0130 13:00:37.921419 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:37.922163 containerd[1449]: time="2025-01-30T13:00:37.922099397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:521b1e6bb836edd8edd29c1906cb143e,Namespace:kube-system,Attempt:0,}"
Jan 30 13:00:37.924944 kubelet[2197]: E0130 13:00:37.924920 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:37.925460 containerd[1449]: time="2025-01-30T13:00:37.925296447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}"
Jan 30 13:00:37.929589 kubelet[2197]: E0130 13:00:37.929564 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:37.929911 containerd[1449]: time="2025-01-30T13:00:37.929884099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}"
Jan 30 13:00:37.990703 kubelet[2197]: I0130 13:00:37.990615 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 30 13:00:37.991571 kubelet[2197]: E0130 13:00:37.991502 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jan 30 13:00:38.342338 kubelet[2197]: W0130 13:00:38.342192 2197 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jan 30 13:00:38.342338 kubelet[2197]: E0130 13:00:38.342258 2197 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jan 30 13:00:38.467175 kubelet[2197]: W0130 13:00:38.467086 2197 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jan 30 13:00:38.467175 kubelet[2197]: E0130 13:00:38.467157 2197 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jan 30 13:00:38.519564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1279562996.mount: Deactivated successfully.
Jan 30 13:00:38.524488 containerd[1449]: time="2025-01-30T13:00:38.524394331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:00:38.526226 containerd[1449]: time="2025-01-30T13:00:38.526190258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:00:38.526991 containerd[1449]: time="2025-01-30T13:00:38.526937029Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:00:38.527843 containerd[1449]: time="2025-01-30T13:00:38.527803954Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:00:38.528757 containerd[1449]: time="2025-01-30T13:00:38.528719748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 30 13:00:38.529611 containerd[1449]: time="2025-01-30T13:00:38.529522714Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:00:38.529700 containerd[1449]: time="2025-01-30T13:00:38.529633301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:00:38.531809 containerd[1449]: time="2025-01-30T13:00:38.531725647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:00:38.536108 containerd[1449]: time="2025-01-30T13:00:38.536058348Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 610.691173ms"
Jan 30 13:00:38.537755 containerd[1449]: time="2025-01-30T13:00:38.537536603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 615.357471ms"
Jan 30 13:00:38.539388 containerd[1449]: time="2025-01-30T13:00:38.539228986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.286848ms"
Jan 30 13:00:38.653917 kubelet[2197]: W0130 13:00:38.653776 2197 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jan 30 13:00:38.654339 kubelet[2197]: E0130 13:00:38.654310 2197 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jan 30 13:00:38.687677 kubelet[2197]: E0130 13:00:38.687631 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s"
Jan 30 13:00:38.695466 containerd[1449]: time="2025-01-30T13:00:38.694956526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:00:38.695466 containerd[1449]: time="2025-01-30T13:00:38.695042699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:00:38.695466 containerd[1449]: time="2025-01-30T13:00:38.695390629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:00:38.695466 containerd[1449]: time="2025-01-30T13:00:38.695436337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:00:38.695712 containerd[1449]: time="2025-01-30T13:00:38.695063711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:38.695712 containerd[1449]: time="2025-01-30T13:00:38.695225889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:38.696033 containerd[1449]: time="2025-01-30T13:00:38.695452587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:38.696923 containerd[1449]: time="2025-01-30T13:00:38.696805685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:38.699648 containerd[1449]: time="2025-01-30T13:00:38.697067684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:00:38.699648 containerd[1449]: time="2025-01-30T13:00:38.698454043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:00:38.699648 containerd[1449]: time="2025-01-30T13:00:38.698465569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:38.699648 containerd[1449]: time="2025-01-30T13:00:38.698543056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:38.700502 kubelet[2197]: W0130 13:00:38.700474 2197 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jan 30 13:00:38.700627 kubelet[2197]: E0130 13:00:38.700513 2197 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jan 30 13:00:38.717589 systemd[1]: Started cri-containerd-ea7faf6b9359ba70ba637c14c8a4b136ebbf41180d814779de645268bf863b03.scope - libcontainer container ea7faf6b9359ba70ba637c14c8a4b136ebbf41180d814779de645268bf863b03.
Jan 30 13:00:38.721547 systemd[1]: Started cri-containerd-6c1a27153c2e3f193e7b0b977482d1f40fdd4e69cbeb35cecbd2a7ca7137ee42.scope - libcontainer container 6c1a27153c2e3f193e7b0b977482d1f40fdd4e69cbeb35cecbd2a7ca7137ee42.
Jan 30 13:00:38.723415 systemd[1]: Started cri-containerd-97e8fb04434755d7e23b30b41c1fc25f24654206877b6d0281df7aba5b56745d.scope - libcontainer container 97e8fb04434755d7e23b30b41c1fc25f24654206877b6d0281df7aba5b56745d.
Jan 30 13:00:38.756361 containerd[1449]: time="2025-01-30T13:00:38.756308206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea7faf6b9359ba70ba637c14c8a4b136ebbf41180d814779de645268bf863b03\""
Jan 30 13:00:38.757601 kubelet[2197]: E0130 13:00:38.757577 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:38.762201 containerd[1449]: time="2025-01-30T13:00:38.761610534Z" level=info msg="CreateContainer within sandbox \"ea7faf6b9359ba70ba637c14c8a4b136ebbf41180d814779de645268bf863b03\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 13:00:38.762201 containerd[1449]: time="2025-01-30T13:00:38.762175516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c1a27153c2e3f193e7b0b977482d1f40fdd4e69cbeb35cecbd2a7ca7137ee42\""
Jan 30 13:00:38.762999 kubelet[2197]: E0130 13:00:38.762979 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:38.767046 containerd[1449]: time="2025-01-30T13:00:38.766595350Z" level=info msg="CreateContainer within sandbox \"6c1a27153c2e3f193e7b0b977482d1f40fdd4e69cbeb35cecbd2a7ca7137ee42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 13:00:38.767493 containerd[1449]: time="2025-01-30T13:00:38.767364295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:521b1e6bb836edd8edd29c1906cb143e,Namespace:kube-system,Attempt:0,} returns sandbox id \"97e8fb04434755d7e23b30b41c1fc25f24654206877b6d0281df7aba5b56745d\""
Jan 30 13:00:38.769449 kubelet[2197]: E0130 13:00:38.769309 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:38.771826 containerd[1449]: time="2025-01-30T13:00:38.771756153Z" level=info msg="CreateContainer within sandbox \"97e8fb04434755d7e23b30b41c1fc25f24654206877b6d0281df7aba5b56745d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 13:00:38.782548 containerd[1449]: time="2025-01-30T13:00:38.782504656Z" level=info msg="CreateContainer within sandbox \"ea7faf6b9359ba70ba637c14c8a4b136ebbf41180d814779de645268bf863b03\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c531fbf0d8a417d9d999d1ca2d6b251e6d8319a128f92656600a9ffc9174382f\""
Jan 30 13:00:38.783747 containerd[1449]: time="2025-01-30T13:00:38.783421050Z" level=info msg="StartContainer for \"c531fbf0d8a417d9d999d1ca2d6b251e6d8319a128f92656600a9ffc9174382f\""
Jan 30 13:00:38.788632 containerd[1449]: time="2025-01-30T13:00:38.788489477Z" level=info msg="CreateContainer within sandbox \"6c1a27153c2e3f193e7b0b977482d1f40fdd4e69cbeb35cecbd2a7ca7137ee42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a032d00a68e7f64746eb5785478ad6b8891616600a18b1488450917615db5c2d\""
Jan 30 13:00:38.789091 containerd[1449]: time="2025-01-30T13:00:38.789048255Z" level=info msg="StartContainer for \"a032d00a68e7f64746eb5785478ad6b8891616600a18b1488450917615db5c2d\""
Jan 30 13:00:38.791441 containerd[1449]: time="2025-01-30T13:00:38.790398152Z" level=info msg="CreateContainer within sandbox \"97e8fb04434755d7e23b30b41c1fc25f24654206877b6d0281df7aba5b56745d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a5e18aa533b2eb749bd8c12e58a5f0cc5de0483f571a4d0384b1461473d4a6cd\""
Jan 30 13:00:38.791441 containerd[1449]: time="2025-01-30T13:00:38.790786867Z" level=info msg="StartContainer for \"a5e18aa533b2eb749bd8c12e58a5f0cc5de0483f571a4d0384b1461473d4a6cd\""
Jan 30 13:00:38.794744 kubelet[2197]: I0130 13:00:38.794615 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 30 13:00:38.795048 kubelet[2197]: E0130 13:00:38.794991 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jan 30 13:00:38.811595 systemd[1]: Started cri-containerd-c531fbf0d8a417d9d999d1ca2d6b251e6d8319a128f92656600a9ffc9174382f.scope - libcontainer container c531fbf0d8a417d9d999d1ca2d6b251e6d8319a128f92656600a9ffc9174382f.
Jan 30 13:00:38.815056 systemd[1]: Started cri-containerd-a032d00a68e7f64746eb5785478ad6b8891616600a18b1488450917615db5c2d.scope - libcontainer container a032d00a68e7f64746eb5785478ad6b8891616600a18b1488450917615db5c2d.
Jan 30 13:00:38.818495 systemd[1]: Started cri-containerd-a5e18aa533b2eb749bd8c12e58a5f0cc5de0483f571a4d0384b1461473d4a6cd.scope - libcontainer container a5e18aa533b2eb749bd8c12e58a5f0cc5de0483f571a4d0384b1461473d4a6cd.
Jan 30 13:00:38.868210 containerd[1449]: time="2025-01-30T13:00:38.868138187Z" level=info msg="StartContainer for \"c531fbf0d8a417d9d999d1ca2d6b251e6d8319a128f92656600a9ffc9174382f\" returns successfully"
Jan 30 13:00:38.868338 containerd[1449]: time="2025-01-30T13:00:38.868281033Z" level=info msg="StartContainer for \"a032d00a68e7f64746eb5785478ad6b8891616600a18b1488450917615db5c2d\" returns successfully"
Jan 30 13:00:38.897801 containerd[1449]: time="2025-01-30T13:00:38.897698231Z" level=info msg="StartContainer for \"a5e18aa533b2eb749bd8c12e58a5f0cc5de0483f571a4d0384b1461473d4a6cd\" returns successfully"
Jan 30 13:00:39.308520 kubelet[2197]: E0130 13:00:39.308291 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:39.310112 kubelet[2197]: E0130 13:00:39.310084 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:39.312393 kubelet[2197]: E0130 13:00:39.312323 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:40.313708 kubelet[2197]: E0130 13:00:40.313674 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:40.397046 kubelet[2197]: I0130 13:00:40.396492 2197 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 30 13:00:40.748328 kubelet[2197]: E0130 13:00:40.748202 2197 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 30 13:00:40.847840 kubelet[2197]: I0130 13:00:40.847627 2197 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 30 13:00:40.855588 kubelet[2197]: E0130 13:00:40.855549 2197 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:00:40.956477 kubelet[2197]: E0130 13:00:40.956401 2197 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:00:41.057004 kubelet[2197]: E0130 13:00:41.056890 2197 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:00:41.157600 kubelet[2197]: E0130 13:00:41.157550 2197 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:00:41.277956 kubelet[2197]: I0130 13:00:41.277919 2197 apiserver.go:52] "Watching apiserver"
Jan 30 13:00:41.283668 kubelet[2197]: I0130 13:00:41.283626 2197 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:00:42.934354 kubelet[2197]: E0130 13:00:42.934060 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:43.319195 kubelet[2197]: E0130 13:00:43.317768 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:43.451004 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-7.scope)...
Jan 30 13:00:43.451020 systemd[1]: Reloading...
Jan 30 13:00:43.517453 zram_generator::config[2513]: No configuration found.
Jan 30 13:00:43.687244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:00:43.763030 systemd[1]: Reloading finished in 311 ms.
Jan 30 13:00:43.801182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:00:43.812608 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 13:00:43.812852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:00:43.812918 systemd[1]: kubelet.service: Consumed 1.165s CPU time, 118.3M memory peak, 0B memory swap peak.
Jan 30 13:00:43.825953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:00:43.943241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:00:43.948683 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:00:43.991550 kubelet[2555]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:00:43.991550 kubelet[2555]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:00:43.991550 kubelet[2555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:00:43.991951 kubelet[2555]: I0130 13:00:43.991580 2555 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:00:44.057684 kubelet[2555]: I0130 13:00:44.057631 2555 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 13:00:44.057684 kubelet[2555]: I0130 13:00:44.057668 2555 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:00:44.057902 kubelet[2555]: I0130 13:00:44.057874 2555 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 13:00:44.059965 kubelet[2555]: I0130 13:00:44.059312 2555 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 13:00:44.060581 kubelet[2555]: I0130 13:00:44.060547 2555 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:00:44.071672 kubelet[2555]: I0130 13:00:44.071636 2555 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:00:44.072277 kubelet[2555]: I0130 13:00:44.071839 2555 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:00:44.072277 kubelet[2555]: I0130 13:00:44.071868 2555 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 13:00:44.072277 kubelet[2555]: I0130 13:00:44.072049 2555 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:00:44.072277 kubelet[2555]: I0130 13:00:44.072059 2555 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 13:00:44.072277 kubelet[2555]: I0130 13:00:44.072091 2555 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:00:44.072503 kubelet[2555]: I0130 13:00:44.072193 2555 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 13:00:44.072503 kubelet[2555]: I0130 13:00:44.072207 2555 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:00:44.072503 kubelet[2555]: I0130 13:00:44.072234 2555 kubelet.go:312] "Adding apiserver pod source"
Jan 30 13:00:44.072503 kubelet[2555]: I0130 13:00:44.072250 2555 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:00:44.073483 kubelet[2555]: I0130 13:00:44.073456 2555 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 13:00:44.073892 kubelet[2555]: I0130 13:00:44.073639 2555 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:00:44.074059 kubelet[2555]: I0130 13:00:44.074029 2555 server.go:1264] "Started kubelet"
Jan 30 13:00:44.075902 kubelet[2555]: I0130 13:00:44.075878 2555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:00:44.078643 kubelet[2555]: I0130 13:00:44.076923 2555 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:00:44.078643 kubelet[2555]: I0130 13:00:44.077486 2555 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:00:44.078643 kubelet[2555]: I0130 13:00:44.077749 2555 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:00:44.078643 kubelet[2555]: I0130 13:00:44.077844 2555 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:00:44.078643 kubelet[2555]: I0130 13:00:44.077962 2555 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:00:44.079148 kubelet[2555]: I0130 13:00:44.079115 2555 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:00:44.079281 kubelet[2555]: I0130 13:00:44.079259 2555 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:00:44.080033 kubelet[2555]: I0130 13:00:44.079998 2555 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:00:44.080134 kubelet[2555]: I0130 13:00:44.080110 2555 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:00:44.080432 kubelet[2555]: E0130 13:00:44.080402 2555 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:00:44.085311 kubelet[2555]: I0130 13:00:44.085256 2555 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:00:44.099262 kubelet[2555]: I0130 13:00:44.097156 2555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:00:44.099262 kubelet[2555]: I0130 13:00:44.098311 2555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:00:44.099262 kubelet[2555]: I0130 13:00:44.098364 2555 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:00:44.099262 kubelet[2555]: I0130 13:00:44.098410 2555 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 13:00:44.099262 kubelet[2555]: E0130 13:00:44.098460 2555 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:00:44.154279 kubelet[2555]: I0130 13:00:44.154023 2555 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:00:44.154279 kubelet[2555]: I0130 13:00:44.154043 2555 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:00:44.154279 kubelet[2555]: I0130 13:00:44.154062 2555 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:00:44.154279 kubelet[2555]: I0130 13:00:44.154221 2555 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 13:00:44.154279 kubelet[2555]: I0130 13:00:44.154231 2555 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 13:00:44.154279 kubelet[2555]: I0130 13:00:44.154251 2555 policy_none.go:49] "None policy: Start"
Jan 30 13:00:44.156037 kubelet[2555]: I0130 13:00:44.156013 2555 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:00:44.156037 kubelet[2555]: I0130 13:00:44.156039 2555 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:00:44.156436 kubelet[2555]: I0130 13:00:44.156208 2555 state_mem.go:75] "Updated machine memory state"
Jan 30 13:00:44.163361 kubelet[2555]: I0130 13:00:44.163260 2555 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:00:44.163801 kubelet[2555]: I0130 13:00:44.163551 2555 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:00:44.164069 kubelet[2555]: I0130 13:00:44.164044 2555 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:00:44.184109 kubelet[2555]: I0130 13:00:44.184074 2555 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 30 13:00:44.192150 kubelet[2555]: I0130 13:00:44.192104 2555 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 30 13:00:44.192287 kubelet[2555]: I0130 13:00:44.192199 2555 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 30 13:00:44.199076 kubelet[2555]: I0130 13:00:44.198929 2555 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 30 13:00:44.199076 kubelet[2555]: I0130 13:00:44.199066 2555 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 30 13:00:44.199211 kubelet[2555]: I0130 13:00:44.199112 2555 topology_manager.go:215] "Topology Admit Handler" podUID="521b1e6bb836edd8edd29c1906cb143e" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 30 13:00:44.214169 kubelet[2555]: E0130 13:00:44.214113 2555 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:44.280531 kubelet[2555]: I0130 13:00:44.280476 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:44.280531 kubelet[2555]: I0130 13:00:44.280515 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:44.280531 kubelet[2555]: I0130 13:00:44.280540 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:44.280768 kubelet[2555]: I0130 13:00:44.280556 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 30 13:00:44.280768 kubelet[2555]: I0130 13:00:44.280573 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:44.280768 kubelet[2555]: I0130 13:00:44.280600 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:44.280768 kubelet[2555]: I0130 13:00:44.280619 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/521b1e6bb836edd8edd29c1906cb143e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"521b1e6bb836edd8edd29c1906cb143e\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:00:44.280768 kubelet[2555]: I0130 13:00:44.280638 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/521b1e6bb836edd8edd29c1906cb143e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"521b1e6bb836edd8edd29c1906cb143e\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:00:44.280884 kubelet[2555]: I0130 13:00:44.280654 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/521b1e6bb836edd8edd29c1906cb143e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"521b1e6bb836edd8edd29c1906cb143e\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:00:44.515975 kubelet[2555]: E0130 13:00:44.515416 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:44.515975 kubelet[2555]: E0130 13:00:44.515838 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:44.520241 kubelet[2555]: E0130 13:00:44.520191 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:45.073382 kubelet[2555]: I0130 13:00:45.073233 2555 apiserver.go:52] "Watching apiserver"
Jan 30 13:00:45.079491 kubelet[2555]: I0130 13:00:45.079445 2555 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:00:45.130879 kubelet[2555]: E0130 13:00:45.130753 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:45.143310 kubelet[2555]: E0130 13:00:45.143217 2555 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 30 13:00:45.144428 kubelet[2555]: E0130 13:00:45.144383 2555 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:00:45.144780 kubelet[2555]: E0130 13:00:45.144751 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:45.145328 kubelet[2555]: E0130 13:00:45.145255 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:45.182474 kubelet[2555]: I0130 13:00:45.182249 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.182228544 podStartE2EDuration="1.182228544s" podCreationTimestamp="2025-01-30 13:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:00:45.168466842 +0000 UTC m=+1.216507108" watchObservedRunningTime="2025-01-30 13:00:45.182228544 +0000 UTC m=+1.230268810"
Jan 30 13:00:45.195514 kubelet[2555]: I0130 13:00:45.195322 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.1953026270000002 podStartE2EDuration="3.195302627s" podCreationTimestamp="2025-01-30 13:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:00:45.182381495 +0000 UTC m=+1.230421761" watchObservedRunningTime="2025-01-30 13:00:45.195302627 +0000 UTC m=+1.243342893"
Jan 30 13:00:45.216796 kubelet[2555]: I0130 13:00:45.213155 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.213131911 podStartE2EDuration="1.213131911s" podCreationTimestamp="2025-01-30 13:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:00:45.195490145 +0000 UTC m=+1.243530411" watchObservedRunningTime="2025-01-30 13:00:45.213131911 +0000 UTC m=+1.261172137"
Jan 30 13:00:46.132248 kubelet[2555]: E0130 13:00:46.132134 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:46.132248 kubelet[2555]: E0130 13:00:46.132185 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:47.133446 kubelet[2555]: E0130 13:00:47.133410 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:48.059855 kubelet[2555]: E0130 13:00:48.059666 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:48.135905 kubelet[2555]: E0130 13:00:48.135852 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:49.077756 sudo[1620]: pam_unix(sudo:session): session closed for user root
Jan 30 13:00:49.087662 sshd[1617]: pam_unix(sshd:session): session closed for user core
Jan 30 13:00:49.093188 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:60234.service: Deactivated successfully.
Jan 30 13:00:49.096889 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:00:49.097132 systemd[1]: session-7.scope: Consumed 7.571s CPU time, 188.0M memory peak, 0B memory swap peak.
Jan 30 13:00:49.098962 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:00:49.100490 systemd-logind[1427]: Removed session 7.
Jan 30 13:00:51.123432 kubelet[2555]: E0130 13:00:51.123038 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:51.140758 kubelet[2555]: E0130 13:00:51.140714 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:57.064115 kubelet[2555]: I0130 13:00:57.064061 2555 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 13:00:57.073399 kubelet[2555]: E0130 13:00:57.072762 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:57.084911 containerd[1449]: time="2025-01-30T13:00:57.084624079Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:00:57.085351 kubelet[2555]: I0130 13:00:57.085074 2555 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 13:00:57.819999 kubelet[2555]: I0130 13:00:57.819870 2555 topology_manager.go:215] "Topology Admit Handler" podUID="2c7d6ef6-e8ba-40b7-86a0-7b261b957993" podNamespace="kube-system" podName="kube-proxy-rkmnh"
Jan 30 13:00:57.840804 systemd[1]: Created slice kubepods-besteffort-pod2c7d6ef6_e8ba_40b7_86a0_7b261b957993.slice - libcontainer container kubepods-besteffort-pod2c7d6ef6_e8ba_40b7_86a0_7b261b957993.slice.
Jan 30 13:00:57.870112 kubelet[2555]: I0130 13:00:57.870039 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c7d6ef6-e8ba-40b7-86a0-7b261b957993-kube-proxy\") pod \"kube-proxy-rkmnh\" (UID: \"2c7d6ef6-e8ba-40b7-86a0-7b261b957993\") " pod="kube-system/kube-proxy-rkmnh"
Jan 30 13:00:57.870112 kubelet[2555]: I0130 13:00:57.870098 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c7d6ef6-e8ba-40b7-86a0-7b261b957993-lib-modules\") pod \"kube-proxy-rkmnh\" (UID: \"2c7d6ef6-e8ba-40b7-86a0-7b261b957993\") " pod="kube-system/kube-proxy-rkmnh"
Jan 30 13:00:57.870112 kubelet[2555]: I0130 13:00:57.870119 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c7d6ef6-e8ba-40b7-86a0-7b261b957993-xtables-lock\") pod \"kube-proxy-rkmnh\" (UID: \"2c7d6ef6-e8ba-40b7-86a0-7b261b957993\") " pod="kube-system/kube-proxy-rkmnh"
Jan 30 13:00:57.870360 kubelet[2555]: I0130 13:00:57.870138 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5kbg\" (UniqueName: \"kubernetes.io/projected/2c7d6ef6-e8ba-40b7-86a0-7b261b957993-kube-api-access-l5kbg\") pod \"kube-proxy-rkmnh\" (UID: \"2c7d6ef6-e8ba-40b7-86a0-7b261b957993\") " pod="kube-system/kube-proxy-rkmnh"
Jan 30 13:00:58.027008 kubelet[2555]: I0130 13:00:58.026943 2555 topology_manager.go:215] "Topology Admit Handler" podUID="3c3868df-affa-4fc5-962a-96fd4d086376" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-wdpbx"
Jan 30 13:00:58.040506 systemd[1]: Created slice kubepods-besteffort-pod3c3868df_affa_4fc5_962a_96fd4d086376.slice - libcontainer container kubepods-besteffort-pod3c3868df_affa_4fc5_962a_96fd4d086376.slice.
Jan 30 13:00:58.071521 kubelet[2555]: I0130 13:00:58.071357 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c3868df-affa-4fc5-962a-96fd4d086376-var-lib-calico\") pod \"tigera-operator-7bc55997bb-wdpbx\" (UID: \"3c3868df-affa-4fc5-962a-96fd4d086376\") " pod="tigera-operator/tigera-operator-7bc55997bb-wdpbx"
Jan 30 13:00:58.071521 kubelet[2555]: I0130 13:00:58.071423 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwj2p\" (UniqueName: \"kubernetes.io/projected/3c3868df-affa-4fc5-962a-96fd4d086376-kube-api-access-pwj2p\") pod \"tigera-operator-7bc55997bb-wdpbx\" (UID: \"3c3868df-affa-4fc5-962a-96fd4d086376\") " pod="tigera-operator/tigera-operator-7bc55997bb-wdpbx"
Jan 30 13:00:58.102118 update_engine[1432]: I20250130 13:00:58.101794 1432 update_attempter.cc:509] Updating boot flags...
Jan 30 13:00:58.128416 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2654)
Jan 30 13:00:58.159977 kubelet[2555]: E0130 13:00:58.159931 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:58.166924 containerd[1449]: time="2025-01-30T13:00:58.166789333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkmnh,Uid:2c7d6ef6-e8ba-40b7-86a0-7b261b957993,Namespace:kube-system,Attempt:0,}"
Jan 30 13:00:58.171398 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2654)
Jan 30 13:00:58.207141 containerd[1449]: time="2025-01-30T13:00:58.207000759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:00:58.207141 containerd[1449]: time="2025-01-30T13:00:58.207094888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:00:58.207141 containerd[1449]: time="2025-01-30T13:00:58.207112890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:58.207743 containerd[1449]: time="2025-01-30T13:00:58.207538733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:58.234669 systemd[1]: Started cri-containerd-8a85d1c0d09847b899c2a5859f1ecc198fc589c06758302ec5415eb0f37a7efd.scope - libcontainer container 8a85d1c0d09847b899c2a5859f1ecc198fc589c06758302ec5415eb0f37a7efd.
Jan 30 13:00:58.255668 containerd[1449]: time="2025-01-30T13:00:58.255494494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkmnh,Uid:2c7d6ef6-e8ba-40b7-86a0-7b261b957993,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a85d1c0d09847b899c2a5859f1ecc198fc589c06758302ec5415eb0f37a7efd\""
Jan 30 13:00:58.258245 kubelet[2555]: E0130 13:00:58.258217 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:58.261020 containerd[1449]: time="2025-01-30T13:00:58.260840349Z" level=info msg="CreateContainer within sandbox \"8a85d1c0d09847b899c2a5859f1ecc198fc589c06758302ec5415eb0f37a7efd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:00:58.274561 containerd[1449]: time="2025-01-30T13:00:58.274499557Z" level=info msg="CreateContainer within sandbox \"8a85d1c0d09847b899c2a5859f1ecc198fc589c06758302ec5415eb0f37a7efd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abb1728bca8fe59db123551e4ff65897637684c769dca2e79b6dc92a096458ce\""
Jan 30 13:00:58.275356 containerd[1449]: time="2025-01-30T13:00:58.275148262Z" level=info msg="StartContainer for \"abb1728bca8fe59db123551e4ff65897637684c769dca2e79b6dc92a096458ce\""
Jan 30 13:00:58.309614 systemd[1]: Started cri-containerd-abb1728bca8fe59db123551e4ff65897637684c769dca2e79b6dc92a096458ce.scope - libcontainer container abb1728bca8fe59db123551e4ff65897637684c769dca2e79b6dc92a096458ce.
Jan 30 13:00:58.337569 containerd[1449]: time="2025-01-30T13:00:58.337014016Z" level=info msg="StartContainer for \"abb1728bca8fe59db123551e4ff65897637684c769dca2e79b6dc92a096458ce\" returns successfully"
Jan 30 13:00:58.345518 containerd[1449]: time="2025-01-30T13:00:58.345100585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-wdpbx,Uid:3c3868df-affa-4fc5-962a-96fd4d086376,Namespace:tigera-operator,Attempt:0,}"
Jan 30 13:00:58.366952 containerd[1449]: time="2025-01-30T13:00:58.366853483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:00:58.366952 containerd[1449]: time="2025-01-30T13:00:58.366915410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:00:58.367151 containerd[1449]: time="2025-01-30T13:00:58.366927611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:58.367151 containerd[1449]: time="2025-01-30T13:00:58.367020660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:00:58.388616 systemd[1]: Started cri-containerd-7bf248eca1dd09893f629b39836b1cd80ecd1909260fd4899dc40d7781231789.scope - libcontainer container 7bf248eca1dd09893f629b39836b1cd80ecd1909260fd4899dc40d7781231789.
Jan 30 13:00:58.422638 containerd[1449]: time="2025-01-30T13:00:58.422450530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-wdpbx,Uid:3c3868df-affa-4fc5-962a-96fd4d086376,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7bf248eca1dd09893f629b39836b1cd80ecd1909260fd4899dc40d7781231789\""
Jan 30 13:00:58.425524 containerd[1449]: time="2025-01-30T13:00:58.424397725Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 30 13:00:59.171577 kubelet[2555]: E0130 13:00:59.171523 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:00:59.184872 kubelet[2555]: I0130 13:00:59.184578 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rkmnh" podStartSLOduration=2.184553984 podStartE2EDuration="2.184553984s" podCreationTimestamp="2025-01-30 13:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:00:59.184353484 +0000 UTC m=+15.232393750" watchObservedRunningTime="2025-01-30 13:00:59.184553984 +0000 UTC m=+15.232594250"
Jan 30 13:01:03.295897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444831987.mount: Deactivated successfully.
Jan 30 13:01:03.740602 containerd[1449]: time="2025-01-30T13:01:03.739660771Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:01:03.741926 containerd[1449]: time="2025-01-30T13:01:03.741881466Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Jan 30 13:01:03.743987 containerd[1449]: time="2025-01-30T13:01:03.743937307Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:01:03.746532 containerd[1449]: time="2025-01-30T13:01:03.746484468Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:01:03.747665 containerd[1449]: time="2025-01-30T13:01:03.747620637Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 5.323182829s"
Jan 30 13:01:03.747790 containerd[1449]: time="2025-01-30T13:01:03.747664401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 30 13:01:03.752588 containerd[1449]: time="2025-01-30T13:01:03.752539945Z" level=info msg="CreateContainer within sandbox \"7bf248eca1dd09893f629b39836b1cd80ecd1909260fd4899dc40d7781231789\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 30 13:01:03.775934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3411961524.mount: Deactivated successfully.
Jan 30 13:01:03.776650 containerd[1449]: time="2025-01-30T13:01:03.776478949Z" level=info msg="CreateContainer within sandbox \"7bf248eca1dd09893f629b39836b1cd80ecd1909260fd4899dc40d7781231789\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"43cbee92be771a40e17a0d8ae47fe706b7481b0f4175b46710c35e40e7ec45f9\""
Jan 30 13:01:03.777647 containerd[1449]: time="2025-01-30T13:01:03.777352457Z" level=info msg="StartContainer for \"43cbee92be771a40e17a0d8ae47fe706b7481b0f4175b46710c35e40e7ec45f9\""
Jan 30 13:01:03.820601 systemd[1]: Started cri-containerd-43cbee92be771a40e17a0d8ae47fe706b7481b0f4175b46710c35e40e7ec45f9.scope - libcontainer container 43cbee92be771a40e17a0d8ae47fe706b7481b0f4175b46710c35e40e7ec45f9.
Jan 30 13:01:03.863967 containerd[1449]: time="2025-01-30T13:01:03.862238018Z" level=info msg="StartContainer for \"43cbee92be771a40e17a0d8ae47fe706b7481b0f4175b46710c35e40e7ec45f9\" returns successfully"
Jan 30 13:01:04.210316 kubelet[2555]: I0130 13:01:04.210232 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-wdpbx" podStartSLOduration=1.883761896 podStartE2EDuration="7.210210631s" podCreationTimestamp="2025-01-30 13:00:57 +0000 UTC" firstStartedPulling="2025-01-30 13:00:58.424011126 +0000 UTC m=+14.472051392" lastFinishedPulling="2025-01-30 13:01:03.750459901 +0000 UTC m=+19.798500127" observedRunningTime="2025-01-30 13:01:04.208491782 +0000 UTC m=+20.256532048" watchObservedRunningTime="2025-01-30 13:01:04.210210631 +0000 UTC m=+20.258250897"
Jan 30 13:01:08.419942 kubelet[2555]: I0130 13:01:08.419858 2555 topology_manager.go:215] "Topology Admit Handler" podUID="8a0f2d46-3257-4c18-8880-9376f0272bdc" podNamespace="calico-system" podName="calico-typha-85df76d97d-ttqnm"
Jan 30 13:01:08.434265 systemd[1]: Created slice kubepods-besteffort-pod8a0f2d46_3257_4c18_8880_9376f0272bdc.slice - libcontainer container kubepods-besteffort-pod8a0f2d46_3257_4c18_8880_9376f0272bdc.slice.
Jan 30 13:01:08.543936 kubelet[2555]: I0130 13:01:08.543887 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a0f2d46-3257-4c18-8880-9376f0272bdc-tigera-ca-bundle\") pod \"calico-typha-85df76d97d-ttqnm\" (UID: \"8a0f2d46-3257-4c18-8880-9376f0272bdc\") " pod="calico-system/calico-typha-85df76d97d-ttqnm"
Jan 30 13:01:08.543936 kubelet[2555]: I0130 13:01:08.543943 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8a0f2d46-3257-4c18-8880-9376f0272bdc-typha-certs\") pod \"calico-typha-85df76d97d-ttqnm\" (UID: \"8a0f2d46-3257-4c18-8880-9376f0272bdc\") " pod="calico-system/calico-typha-85df76d97d-ttqnm"
Jan 30 13:01:08.543936 kubelet[2555]: I0130 13:01:08.543981 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krz5\" (UniqueName: \"kubernetes.io/projected/8a0f2d46-3257-4c18-8880-9376f0272bdc-kube-api-access-9krz5\") pod \"calico-typha-85df76d97d-ttqnm\" (UID: \"8a0f2d46-3257-4c18-8880-9376f0272bdc\") " pod="calico-system/calico-typha-85df76d97d-ttqnm"
Jan 30 13:01:08.607243 kubelet[2555]: I0130 13:01:08.607176 2555 topology_manager.go:215] "Topology Admit Handler" podUID="1dc2932b-a11e-42d6-9846-7525f335763c" podNamespace="calico-system" podName="calico-node-fcd9f"
Jan 30 13:01:08.617003 systemd[1]: Created slice kubepods-besteffort-pod1dc2932b_a11e_42d6_9846_7525f335763c.slice - libcontainer container kubepods-besteffort-pod1dc2932b_a11e_42d6_9846_7525f335763c.slice.
Jan 30 13:01:08.737793 kubelet[2555]: E0130 13:01:08.737674 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:01:08.738332 containerd[1449]: time="2025-01-30T13:01:08.738278166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85df76d97d-ttqnm,Uid:8a0f2d46-3257-4c18-8880-9376f0272bdc,Namespace:calico-system,Attempt:0,}"
Jan 30 13:01:08.745151 kubelet[2555]: I0130 13:01:08.744850 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1dc2932b-a11e-42d6-9846-7525f335763c-cni-bin-dir\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745151 kubelet[2555]: I0130 13:01:08.744908 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dc2932b-a11e-42d6-9846-7525f335763c-xtables-lock\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745151 kubelet[2555]: I0130 13:01:08.744929 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1dc2932b-a11e-42d6-9846-7525f335763c-policysync\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745151 kubelet[2555]: I0130 13:01:08.744947 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1dc2932b-a11e-42d6-9846-7525f335763c-cni-log-dir\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745151 kubelet[2555]: I0130 13:01:08.744970 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1dc2932b-a11e-42d6-9846-7525f335763c-node-certs\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745392 kubelet[2555]: I0130 13:01:08.744987 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1dc2932b-a11e-42d6-9846-7525f335763c-flexvol-driver-host\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745392 kubelet[2555]: I0130 13:01:08.745005 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1dc2932b-a11e-42d6-9846-7525f335763c-cni-net-dir\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745392 kubelet[2555]: I0130 13:01:08.745022 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1dc2932b-a11e-42d6-9846-7525f335763c-var-run-calico\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745392 kubelet[2555]: I0130 13:01:08.745038 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1dc2932b-a11e-42d6-9846-7525f335763c-var-lib-calico\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745392 kubelet[2555]: I0130 13:01:08.745056 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dc2932b-a11e-42d6-9846-7525f335763c-lib-modules\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745514 kubelet[2555]: I0130 13:01:08.745074 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dc2932b-a11e-42d6-9846-7525f335763c-tigera-ca-bundle\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.745514 kubelet[2555]: I0130 13:01:08.745100 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvxj\" (UniqueName: \"kubernetes.io/projected/1dc2932b-a11e-42d6-9846-7525f335763c-kube-api-access-6pvxj\") pod \"calico-node-fcd9f\" (UID: \"1dc2932b-a11e-42d6-9846-7525f335763c\") " pod="calico-system/calico-node-fcd9f"
Jan 30 13:01:08.797131 kubelet[2555]: I0130 13:01:08.797059 2555 topology_manager.go:215] "Topology Admit Handler" podUID="aa3d79e1-a896-409f-b82d-b2c0db403513" podNamespace="calico-system" podName="csi-node-driver-95cpz"
Jan 30 13:01:08.797420 kubelet[2555]: E0130 13:01:08.797391 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-95cpz" podUID="aa3d79e1-a896-409f-b82d-b2c0db403513"
Jan 30 13:01:08.818177 containerd[1449]: time="2025-01-30T13:01:08.817958361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:01:08.818177 containerd[1449]: time="2025-01-30T13:01:08.818060248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:01:08.818177 containerd[1449]: time="2025-01-30T13:01:08.818073448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:01:08.818936 containerd[1449]: time="2025-01-30T13:01:08.818620923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:01:08.839264 systemd[1]: run-containerd-runc-k8s.io-73d5d152f8da82fd4c1866b4020869819cb53e03082ab5b004ba728c68cafd11-runc.KS4OjD.mount: Deactivated successfully.
Jan 30 13:01:08.849627 systemd[1]: Started cri-containerd-73d5d152f8da82fd4c1866b4020869819cb53e03082ab5b004ba728c68cafd11.scope - libcontainer container 73d5d152f8da82fd4c1866b4020869819cb53e03082ab5b004ba728c68cafd11.
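[annotation] The dns.go:153 message above fires because kubelet caps a pod's resolv.conf at three nameservers (the classic glibc resolver limit); the host here apparently lists more than the three that survive (1.1.1.1, 1.0.0.1, 8.8.8.8), so the extras are dropped. A minimal sketch of that check, stated as an assumption about the logic rather than kubelet's exact code; the constant name maxNameservers is illustrative:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // maxNameservers mirrors the limit kubelet enforces on pod DNS config;
    // 3 matches the glibc MAXNS resolver constant.
    const maxNameservers = 3

    func main() {
    	f, err := os.Open("/etc/resolv.conf") // host resolver config, as kubelet reads it
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		// The situation the log reports: extras are omitted and only
    		// the first three are applied to the pod.
    		fmt.Printf("Nameserver limits exceeded, applied: %s\n",
    			strings.Join(servers[:maxNameservers], " "))
    	}
    }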
Jan 30 13:01:08.850357 kubelet[2555]: E0130 13:01:08.850287 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.850357 kubelet[2555]: W0130 13:01:08.850319 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.850357 kubelet[2555]: E0130 13:01:08.850344 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same E/W/E FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:730) repeats 23 more times, 13:01:08.850605 through 13:01:08.857915 ...]
Jan 30 13:01:08.875306 kubelet[2555]: E0130 13:01:08.875273 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.875440 kubelet[2555]: W0130 13:01:08.875294 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.875440 kubelet[2555]: E0130 13:01:08.875394 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
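[annotation] The E/W/E triple above is kubelet's FlexVolume prober: on each probe it execs every driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec with the argument init and decodes the stdout as JSON. Here the nodeagent~uds binary does not exist yet (the pod2daemon-flexvol init container that installs it into the flexvol-driver-host host path is only pulled later in this log), so the exec fails, stdout is empty, and decoding "" yields "unexpected end of JSON input". A rough sketch of that call path, assuming a cut-down DriverStatus shape (the real response object carries more fields):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // driverStatus is a simplified FlexVolume response object.
    type driverStatus struct {
    	Status       string `json:"status"`
    	Capabilities *struct {
    		Attach bool `json:"attach"`
    	} `json:"capabilities,omitempty"`
    }

    // probeInit mimics what kubelet's driver-call.go does for "init":
    // run the driver, then unmarshal whatever it printed.
    func probeInit(driver string) (*driverStatus, error) {
    	out, execErr := exec.Command(driver, "init").CombinedOutput()
    	var st driverStatus
    	if err := json.Unmarshal(out, &st); err != nil {
    		// With a missing executable, out is empty, so this is exactly
    		// the "unexpected end of JSON input" seen in the log.
    		return nil, fmt.Errorf("failed to unmarshal output %q: %w (exec error: %v)", out, err, execErr)
    	}
    	return &st, nil
    }

    func main() {
    	_, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
    	fmt.Println(err)
    }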
Jan 30 13:01:08.900069 containerd[1449]: time="2025-01-30T13:01:08.899800573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85df76d97d-ttqnm,Uid:8a0f2d46-3257-4c18-8880-9376f0272bdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"73d5d152f8da82fd4c1866b4020869819cb53e03082ab5b004ba728c68cafd11\""
Jan 30 13:01:08.902332 kubelet[2555]: E0130 13:01:08.902298 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:01:08.905398 containerd[1449]: time="2025-01-30T13:01:08.904669881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 13:01:08.919706 kubelet[2555]: E0130 13:01:08.919674 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:01:08.920501 containerd[1449]: time="2025-01-30T13:01:08.920462199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fcd9f,Uid:1dc2932b-a11e-42d6-9846-7525f335763c,Namespace:calico-system,Attempt:0,}"
Jan 30 13:01:08.946943 kubelet[2555]: E0130 13:01:08.946748 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.946943 kubelet[2555]: W0130 13:01:08.946775 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.946943 kubelet[2555]: E0130 13:01:08.946796 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:01:08.946943 kubelet[2555]: I0130 13:01:08.946828 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aa3d79e1-a896-409f-b82d-b2c0db403513-varrun\") pod \"csi-node-driver-95cpz\" (UID: \"aa3d79e1-a896-409f-b82d-b2c0db403513\") " pod="calico-system/csi-node-driver-95cpz"
Jan 30 13:01:08.948111 kubelet[2555]: E0130 13:01:08.947908 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.948111 kubelet[2555]: W0130 13:01:08.947933 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.948111 kubelet[2555]: E0130 13:01:08.947959 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:01:08.948111 kubelet[2555]: I0130 13:01:08.947985 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aa3d79e1-a896-409f-b82d-b2c0db403513-registration-dir\") pod \"csi-node-driver-95cpz\" (UID: \"aa3d79e1-a896-409f-b82d-b2c0db403513\") " pod="calico-system/csi-node-driver-95cpz"
Jan 30 13:01:08.948306 containerd[1449]: time="2025-01-30T13:01:08.948112106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:01:08.948306 containerd[1449]: time="2025-01-30T13:01:08.948287357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:01:08.948306 containerd[1449]: time="2025-01-30T13:01:08.948300718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:01:08.948567 kubelet[2555]: E0130 13:01:08.948546 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.948771 containerd[1449]: time="2025-01-30T13:01:08.948518532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:01:08.948923 kubelet[2555]: W0130 13:01:08.948899 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.949406 kubelet[2555]: E0130 13:01:08.949029 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:01:08.949406 kubelet[2555]: I0130 13:01:08.949078 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aa3d79e1-a896-409f-b82d-b2c0db403513-socket-dir\") pod \"csi-node-driver-95cpz\" (UID: \"aa3d79e1-a896-409f-b82d-b2c0db403513\") " pod="calico-system/csi-node-driver-95cpz"
Jan 30 13:01:08.949604 kubelet[2555]: E0130 13:01:08.949586 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.949675 kubelet[2555]: W0130 13:01:08.949661 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.949775 kubelet[2555]: E0130 13:01:08.949751 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:01:08.950030 kubelet[2555]: E0130 13:01:08.950013 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.950103 kubelet[2555]: W0130 13:01:08.950090 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.950193 kubelet[2555]: E0130 13:01:08.950167 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
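[annotation] The RunPodSandbox requests and "returns sandbox id" replies in this stretch are kubelet driving containerd over the CRI gRPC API. A minimal standalone client making the same call, assuming containerd's default socket path and the v1 CRI API (a sketch of the protocol, not how kubelet itself is wired):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	// Default containerd CRI endpoint; adjust for other runtimes.
    	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)

    	// Mirrors the PodSandboxMetadata printed in the log for calico-typha.
    	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "calico-typha-85df76d97d-ttqnm",
    				Uid:       "8a0f2d46-3257-4c18-8880-9376f0272bdc",
    				Namespace: "calico-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. the 73d5d152... id above
    }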
Jan 30 13:01:08.950562 kubelet[2555]: E0130 13:01:08.950447 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.950562 kubelet[2555]: W0130 13:01:08.950462 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.950562 kubelet[2555]: E0130 13:01:08.950489 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:01:08.950562 kubelet[2555]: I0130 13:01:08.950533 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa3d79e1-a896-409f-b82d-b2c0db403513-kubelet-dir\") pod \"csi-node-driver-95cpz\" (UID: \"aa3d79e1-a896-409f-b82d-b2c0db403513\") " pod="calico-system/csi-node-driver-95cpz"
Jan 30 13:01:08.950816 kubelet[2555]: E0130 13:01:08.950801 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.950880 kubelet[2555]: W0130 13:01:08.950868 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.950968 kubelet[2555]: E0130 13:01:08.950944 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same E/W/E FlexVolume probe failure repeats 3 more times, 13:01:08.951194 through 13:01:08.952063 ...]
Jan 30 13:01:08.952158 kubelet[2555]: I0130 13:01:08.952144 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj89n\" (UniqueName: \"kubernetes.io/projected/aa3d79e1-a896-409f-b82d-b2c0db403513-kube-api-access-hj89n\") pod \"csi-node-driver-95cpz\" (UID: \"aa3d79e1-a896-409f-b82d-b2c0db403513\") " pod="calico-system/csi-node-driver-95cpz"
Jan 30 13:01:08.952462 kubelet[2555]: E0130 13:01:08.952406 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:08.952462 kubelet[2555]: W0130 13:01:08.952430 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:08.952462 kubelet[2555]: E0130 13:01:08.952446 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same E/W/E FlexVolume probe failure repeats 2 more times, 13:01:08.952684 through 13:01:08.952988 ...]
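[annotation] The VerifyControllerAttachedVolume lines for csi-node-driver-95cpz name five volumes: varrun, registration-dir, socket-dir, kubelet-dir (all hostPath) and a projected kube-api-access token. The hostPath mounts are what a CSI node driver needs for kubelet plugin registration. Roughly, in client-go terms; note the log only records the volume names, so the host paths below are typical Calico CSI values stated as assumptions:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // hostPathVolume builds one of the hostPath volumes named in the
    // reconciler_common.go lines above.
    func hostPathVolume(name, path string, t corev1.HostPathType) corev1.Volume {
    	return corev1.Volume{
    		Name: name,
    		VolumeSource: corev1.VolumeSource{
    			HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &t},
    		},
    	}
    }

    func main() {
    	// Paths are illustrative defaults for a Calico CSI node driver,
    	// not values read from this log.
    	volumes := []corev1.Volume{
    		hostPathVolume("varrun", "/var/run", corev1.HostPathDirectory),
    		hostPathVolume("kubelet-dir", "/var/lib/kubelet", corev1.HostPathDirectory),
    		hostPathVolume("registration-dir", "/var/lib/kubelet/plugins_registry", corev1.HostPathDirectoryOrCreate),
    		hostPathVolume("socket-dir", "/var/lib/kubelet/plugins/csi.tigera.io", corev1.HostPathDirectoryOrCreate),
    	}
    	for _, v := range volumes {
    		fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
    	}
    }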
[... the same E/W/E FlexVolume probe failure repeats 2 more times, 13:01:08.953279 through 13:01:08.953586 ...]
Jan 30 13:01:08.971731 systemd[1]: Started cri-containerd-17dc56c2d765aadd268cf62f2d34713041d8bd05ee5a46c0b8863c7d3882ded1.scope - libcontainer container 17dc56c2d765aadd268cf62f2d34713041d8bd05ee5a46c0b8863c7d3882ded1.
Jan 30 13:01:08.997101 containerd[1449]: time="2025-01-30T13:01:08.996986395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fcd9f,Uid:1dc2932b-a11e-42d6-9846-7525f335763c,Namespace:calico-system,Attempt:0,} returns sandbox id \"17dc56c2d765aadd268cf62f2d34713041d8bd05ee5a46c0b8863c7d3882ded1\""
Jan 30 13:01:08.998879 kubelet[2555]: E0130 13:01:08.998838 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:01:09.054048 kubelet[2555]: E0130 13:01:09.054004 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:09.054048 kubelet[2555]: W0130 13:01:09.054031 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:09.054048 kubelet[2555]: E0130 13:01:09.054051 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same E/W/E FlexVolume probe failure repeats 24 more times, 13:01:09.054234 through 13:01:09.059116 ...]
Jan 30 13:01:09.070477 kubelet[2555]: E0130 13:01:09.070445 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:09.070477 kubelet[2555]: W0130 13:01:09.070471 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:09.070477 kubelet[2555]: E0130 13:01:09.070491 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:01:10.082270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222548460.mount: Deactivated successfully.
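[annotation] The probe storm stops once a real driver binary exists at the probed path. What the init call must return is a small JSON status document; a stand-in driver that would satisfy the probe is sketched below. The exact capability set a real driver advertises varies; attach:false is what node-local, exec-style drivers such as Calico's uds typically report, stated here as an assumption:

    // A stand-in FlexVolume driver: compile and install as
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
    // and the init probe above would receive valid JSON instead of empty output.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		resp := map[string]interface{}{
    			"status": "Success",
    			"capabilities": map[string]bool{
    				"attach": false, // node-local driver, no controller attach step
    			},
    		}
    		out, _ := json.Marshal(resp)
    		fmt.Println(string(out))
    		return
    	}
    	// Any verb this stub does not implement.
    	fmt.Println(`{"status":"Not supported"}`)
    	os.Exit(1)
    }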
Jan 30 13:01:10.099243 kubelet[2555]: E0130 13:01:10.098917 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-95cpz" podUID="aa3d79e1-a896-409f-b82d-b2c0db403513"
Jan 30 13:01:10.596673 containerd[1449]: time="2025-01-30T13:01:10.596275696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:01:10.597189 containerd[1449]: time="2025-01-30T13:01:10.597155108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 30 13:01:10.599071 containerd[1449]: time="2025-01-30T13:01:10.597636576Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:01:10.600218 containerd[1449]: time="2025-01-30T13:01:10.600188044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:01:10.601110 containerd[1449]: time="2025-01-30T13:01:10.601067096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.696354852s"
Jan 30 13:01:10.601110 containerd[1449]: time="2025-01-30T13:01:10.601105498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 30 13:01:10.602599 containerd[1449]: time="2025-01-30T13:01:10.602560023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 13:01:10.615538 containerd[1449]: time="2025-01-30T13:01:10.615482376Z" level=info msg="CreateContainer within sandbox \"73d5d152f8da82fd4c1866b4020869819cb53e03082ab5b004ba728c68cafd11\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 13:01:10.625565 containerd[1449]: time="2025-01-30T13:01:10.625513400Z" level=info msg="CreateContainer within sandbox \"73d5d152f8da82fd4c1866b4020869819cb53e03082ab5b004ba728c68cafd11\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7f0b8be5978e5b3422b21fd5c8e6f4477f52cf0dad0dba2eac6661a8705b82ba\""
Jan 30 13:01:10.626048 containerd[1449]: time="2025-01-30T13:01:10.626021750Z" level=info msg="StartContainer for \"7f0b8be5978e5b3422b21fd5c8e6f4477f52cf0dad0dba2eac6661a8705b82ba\""
Jan 30 13:01:10.657558 systemd[1]: Started cri-containerd-7f0b8be5978e5b3422b21fd5c8e6f4477f52cf0dad0dba2eac6661a8705b82ba.scope - libcontainer container 7f0b8be5978e5b3422b21fd5c8e6f4477f52cf0dad0dba2eac6661a8705b82ba.
Jan 30 13:01:10.751104 containerd[1449]: time="2025-01-30T13:01:10.750481881Z" level=info msg="StartContainer for \"7f0b8be5978e5b3422b21fd5c8e6f4477f52cf0dad0dba2eac6661a8705b82ba\" returns successfully"
Jan 30 13:01:11.203750 kubelet[2555]: E0130 13:01:11.203717 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:01:11.216533 kubelet[2555]: I0130 13:01:11.216435 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85df76d97d-ttqnm" podStartSLOduration=1.518155178 podStartE2EDuration="3.216414706s" podCreationTimestamp="2025-01-30 13:01:08 +0000 UTC" firstStartedPulling="2025-01-30 13:01:08.90386975 +0000 UTC m=+24.951910016" lastFinishedPulling="2025-01-30 13:01:10.602129238 +0000 UTC m=+26.650169544" observedRunningTime="2025-01-30 13:01:11.213967649 +0000 UTC m=+27.262007915" watchObservedRunningTime="2025-01-30 13:01:11.216414706 +0000 UTC m=+27.264454972"
Jan 30 13:01:11.261783 kubelet[2555]: E0130 13:01:11.261725 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:01:11.262063 kubelet[2555]: W0130 13:01:11.261751 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:01:11.262063 kubelet[2555]: E0130 13:01:11.261882 2555 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
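[annotation] The pod_startup_latency_tracker entry above contains a simple arithmetic relationship worth making explicit: the SLO duration appears to be the end-to-end duration minus time spent pulling images. Using the entry's own monotonic (m=+...) values:

    pull time            = lastFinishedPulling - firstStartedPulling
                         = (m=+26.650169544) - (m=+24.951910016) = 1.698259528 s
    podStartE2EDuration  = 3.216414706 s  (creation 13:01:08 -> watchObservedRunningTime 13:01:11.216414706)
    podStartSLOduration  = podStartE2EDuration - pull time
                         = 3.216414706 - 1.698259528 = 1.518155178 s

This matches podStartSLOduration=1.518155178 exactly, and the pull window also agrees with containerd's reported "in 1.696354852s" plus the surrounding request overhead.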
Error: unexpected end of JSON input"
[…the three-entry FlexVolume probe failure above (driver-call.go:262, driver-call.go:149, plugins.go:730) repeats verbatim with advancing timestamps through 13:01:11.274810; the duplicated entries are omitted here…]
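The failure chain in those repeated entries is mechanical: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers, execs the nodeagent~uds "uds" binary with "init", the binary is not installed yet (the flexvol-driver init container that provides it only starts at 13:01:12 below), the call therefore returns empty output, and unmarshalling "" as JSON yields exactly "unexpected end of JSON input". A minimal Go sketch reproducing both logged errors; the driverStatus struct is an illustrative stand-in, not the kubelet's actual type, and the sketch assumes no "uds" binary is on $PATH:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an illustrative stand-in for the JSON a FlexVolume
// driver is expected to print on "init"; not the kubelet's real type.
type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	// Probe a driver binary that is not installed (assumes no "uds" on $PATH).
	out, err := exec.Command("uds", "init").Output()
	fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	// -> exec: "uds": executable file not found in $PATH, output: ""

	// Unmarshalling the empty output reproduces the driver-call.go:262 error.
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("failed to unmarshal output:", err)
		// -> unexpected end of JSON input
	}
}
```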
Jan 30 13:01:12.041447 containerd[1449]: time="2025-01-30T13:01:12.041400122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:12.044088 containerd[1449]: time="2025-01-30T13:01:12.044050945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 30 13:01:12.045168 containerd[1449]: time="2025-01-30T13:01:12.045120682Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:12.049602 containerd[1449]: time="2025-01-30T13:01:12.049550321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:12.051248 containerd[1449]: time="2025-01-30T13:01:12.051192530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.448592025s" Jan 30 13:01:12.051291 containerd[1449]: time="2025-01-30T13:01:12.051246893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 13:01:12.058932 containerd[1449]: time="2025-01-30T13:01:12.058877704Z" level=info msg="CreateContainer within sandbox \"17dc56c2d765aadd268cf62f2d34713041d8bd05ee5a46c0b8863c7d3882ded1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:01:12.078301 containerd[1449]: time="2025-01-30T13:01:12.078237188Z" level=info msg="CreateContainer within sandbox \"17dc56c2d765aadd268cf62f2d34713041d8bd05ee5a46c0b8863c7d3882ded1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db08bdd88556359785bf22363abdf8dd04e3ee6028d571236e3d2256ebb54e8a\"" Jan 30 13:01:12.079288 containerd[1449]: time="2025-01-30T13:01:12.078962627Z" level=info msg="StartContainer for \"db08bdd88556359785bf22363abdf8dd04e3ee6028d571236e3d2256ebb54e8a\"" Jan 30 13:01:12.101499 kubelet[2555]: E0130 13:01:12.100765 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-95cpz" podUID="aa3d79e1-a896-409f-b82d-b2c0db403513" Jan 30 13:01:12.116633 systemd[1]: Started cri-containerd-db08bdd88556359785bf22363abdf8dd04e3ee6028d571236e3d2256ebb54e8a.scope - libcontainer container db08bdd88556359785bf22363abdf8dd04e3ee6028d571236e3d2256ebb54e8a.
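As a cross-check on these timings: the PullImage request for pod2daemon-flexvol was logged at 13:01:10.602560023 (above) and the Pulled event at 13:01:12.051192530, and the gap between the two entries agrees with containerd's internally measured 1.448592025s to within about 40µs of log-emission overhead. A small Go sketch of that subtraction, using the two logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the two containerd entries for pod2daemon-flexvol:v3.29.1.
	requested, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:01:10.602560023Z") // PullImage logged
	pulled, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:01:12.051192530Z")   // Pulled logged
	fmt.Println(pulled.Sub(requested)) // 1.448632507s, vs. 1.448592025s measured inside containerd
}
```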
Jan 30 13:01:12.160185 containerd[1449]: time="2025-01-30T13:01:12.159787986Z" level=info msg="StartContainer for \"db08bdd88556359785bf22363abdf8dd04e3ee6028d571236e3d2256ebb54e8a\" returns successfully" Jan 30 13:01:12.207728 kubelet[2555]: E0130 13:01:12.207695 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:12.209477 kubelet[2555]: I0130 13:01:12.209423 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:01:12.211647 kubelet[2555]: E0130 13:01:12.211617 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:12.211859 systemd[1]: cri-containerd-db08bdd88556359785bf22363abdf8dd04e3ee6028d571236e3d2256ebb54e8a.scope: Deactivated successfully. Jan 30 13:01:12.250112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db08bdd88556359785bf22363abdf8dd04e3ee6028d571236e3d2256ebb54e8a-rootfs.mount: Deactivated successfully. Jan 30 13:01:12.274756 containerd[1449]: time="2025-01-30T13:01:12.270508437Z" level=info msg="shim disconnected" id=db08bdd88556359785bf22363abdf8dd04e3ee6028d571236e3d2256ebb54e8a namespace=k8s.io Jan 30 13:01:12.274984 containerd[1449]: time="2025-01-30T13:01:12.274758706Z" level=warning msg="cleaning up after shim disconnected" id=db08bdd88556359785bf22363abdf8dd04e3ee6028d571236e3d2256ebb54e8a namespace=k8s.io Jan 30 13:01:12.274984 containerd[1449]: time="2025-01-30T13:01:12.274981878Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:01:13.210584 kubelet[2555]: E0130 13:01:13.210545 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:13.213737 containerd[1449]: time="2025-01-30T13:01:13.213686681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:01:14.099181 kubelet[2555]: E0130 13:01:14.099128 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-95cpz" podUID="aa3d79e1-a896-409f-b82d-b2c0db403513" Jan 30 13:01:16.100935 kubelet[2555]: E0130 13:01:16.099695 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-95cpz" podUID="aa3d79e1-a896-409f-b82d-b2c0db403513" Jan 30 13:01:16.145530 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:41652.service - OpenSSH per-connection server daemon (10.0.0.1:41652). Jan 30 13:01:16.208450 sshd[3269]: Accepted publickey for core from 10.0.0.1 port 41652 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:16.209926 sshd[3269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:16.219472 systemd-logind[1427]: New session 8 of user core. Jan 30 13:01:16.227034 systemd[1]: Started session-8.scope - Session 8 of User core. 
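The recurring dns.go:153 error is the kubelet clamping the node's resolv.conf: the classic resolver limit (and the kubelet's validation) allows at most three nameservers, so with more configured only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) are applied and the rest are dropped with this warning. A minimal sketch of that clamp, assuming a hypothetical four-entry resolv.conf whose first three entries match the applied line in the log:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // the classic glibc resolver limit the kubelet warns about

func main() {
	// Hypothetical node resolv.conf with one nameserver too many; the first
	// three match the "applied nameserver line" in the log above.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
`
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		if f := strings.Fields(sc.Text()); len(f) == 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
		fmt.Println("Nameserver limits exceeded; applied nameserver line:", strings.Join(servers, " "))
	}
}
```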
Jan 30 13:01:16.390835 sshd[3269]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:16.396512 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:41652.service: Deactivated successfully. Jan 30 13:01:16.401231 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:01:16.402739 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:01:16.405175 systemd-logind[1427]: Removed session 8. Jan 30 13:01:16.925968 containerd[1449]: time="2025-01-30T13:01:16.925911437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:16.926497 containerd[1449]: time="2025-01-30T13:01:16.926399620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 13:01:16.927357 containerd[1449]: time="2025-01-30T13:01:16.927327064Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:16.929410 containerd[1449]: time="2025-01-30T13:01:16.929361519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:16.930400 containerd[1449]: time="2025-01-30T13:01:16.930350845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.716614881s" Jan 30 13:01:16.930460 containerd[1449]: time="2025-01-30T13:01:16.930406888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 13:01:16.936520 containerd[1449]: time="2025-01-30T13:01:16.936302963Z" level=info msg="CreateContainer within sandbox \"17dc56c2d765aadd268cf62f2d34713041d8bd05ee5a46c0b8863c7d3882ded1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:01:16.958571 containerd[1449]: time="2025-01-30T13:01:16.958509042Z" level=info msg="CreateContainer within sandbox \"17dc56c2d765aadd268cf62f2d34713041d8bd05ee5a46c0b8863c7d3882ded1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8762695c149f8d98d4ec2d94538c884da07079e0c644c400011fe90a8988a261\"" Jan 30 13:01:16.959077 containerd[1449]: time="2025-01-30T13:01:16.959050467Z" level=info msg="StartContainer for \"8762695c149f8d98d4ec2d94538c884da07079e0c644c400011fe90a8988a261\"" Jan 30 13:01:16.993589 systemd[1]: Started cri-containerd-8762695c149f8d98d4ec2d94538c884da07079e0c644c400011fe90a8988a261.scope - libcontainer container 8762695c149f8d98d4ec2d94538c884da07079e0c644c400011fe90a8988a261. 
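Looking back at the calico-typha startup-latency entry (13:01:11.216 above), the reported numbers are internally consistent: podStartSLOduration is the end-to-end duration minus the time spent pulling images, computed on the monotonic (m=+…) clock, as the logged values confirm. A sketch of the arithmetic:

```go
package main

import "fmt"

func main() {
	// Monotonic (m=+...) offsets and durations from the pod_startup_latency_tracker entry above.
	const (
		e2e                 = 3.216414706  // podStartE2EDuration, seconds
		firstStartedPulling = 24.951910016 // m=+ offset, seconds
		lastFinishedPulling = 26.650169544 // m=+ offset, seconds
	)
	pulling := lastFinishedPulling - firstStartedPulling
	fmt.Printf("SLO duration = %.9f - %.9f = %.9fs\n", e2e, pulling, e2e-pulling)
	// matches the logged podStartSLOduration=1.518155178s
}
```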
Jan 30 13:01:17.035482 containerd[1449]: time="2025-01-30T13:01:17.035327863Z" level=info msg="StartContainer for \"8762695c149f8d98d4ec2d94538c884da07079e0c644c400011fe90a8988a261\" returns successfully" Jan 30 13:01:17.223510 kubelet[2555]: E0130 13:01:17.223385 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:17.608145 systemd[1]: cri-containerd-8762695c149f8d98d4ec2d94538c884da07079e0c644c400011fe90a8988a261.scope: Deactivated successfully. Jan 30 13:01:17.632508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8762695c149f8d98d4ec2d94538c884da07079e0c644c400011fe90a8988a261-rootfs.mount: Deactivated successfully. Jan 30 13:01:17.654951 kubelet[2555]: I0130 13:01:17.654910 2555 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:01:17.700578 kubelet[2555]: I0130 13:01:17.697853 2555 topology_manager.go:215] "Topology Admit Handler" podUID="09ee933e-c443-4e73-95ea-87ea4c5a82d4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bs7p9" Jan 30 13:01:17.700578 kubelet[2555]: I0130 13:01:17.698584 2555 topology_manager.go:215] "Topology Admit Handler" podUID="7b4c1231-0ce0-43a4-b55a-08522cf916ab" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dj4pf" Jan 30 13:01:17.700578 kubelet[2555]: I0130 13:01:17.698713 2555 topology_manager.go:215] "Topology Admit Handler" podUID="130f9fb6-465d-46d9-b55d-70f0e7e76a1d" podNamespace="calico-system" podName="calico-kube-controllers-6cdf574769-hlwjk" Jan 30 13:01:17.700578 kubelet[2555]: I0130 13:01:17.698804 2555 topology_manager.go:215] "Topology Admit Handler" podUID="29437a55-4f91-4be7-b561-40aba478f597" podNamespace="calico-apiserver" podName="calico-apiserver-788bf5f94c-wlb8t" Jan 30 13:01:17.700578 kubelet[2555]: I0130 13:01:17.699143 2555 topology_manager.go:215] "Topology Admit Handler" podUID="9715b94b-ac1c-4ef7-884a-0cc0442ebce5" podNamespace="calico-apiserver" podName="calico-apiserver-788bf5f94c-fbrwl" Jan 30 13:01:17.702446 containerd[1449]: time="2025-01-30T13:01:17.701762660Z" level=info msg="shim disconnected" id=8762695c149f8d98d4ec2d94538c884da07079e0c644c400011fe90a8988a261 namespace=k8s.io Jan 30 13:01:17.702446 containerd[1449]: time="2025-01-30T13:01:17.701818742Z" level=warning msg="cleaning up after shim disconnected" id=8762695c149f8d98d4ec2d94538c884da07079e0c644c400011fe90a8988a261 namespace=k8s.io Jan 30 13:01:17.702446 containerd[1449]: time="2025-01-30T13:01:17.701828142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:01:17.719316 kubelet[2555]: I0130 13:01:17.717601 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894h5\" (UniqueName: \"kubernetes.io/projected/9715b94b-ac1c-4ef7-884a-0cc0442ebce5-kube-api-access-894h5\") pod \"calico-apiserver-788bf5f94c-fbrwl\" (UID: \"9715b94b-ac1c-4ef7-884a-0cc0442ebce5\") " pod="calico-apiserver/calico-apiserver-788bf5f94c-fbrwl" Jan 30 13:01:17.719316 kubelet[2555]: I0130 13:01:17.717646 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/29437a55-4f91-4be7-b561-40aba478f597-calico-apiserver-certs\") pod \"calico-apiserver-788bf5f94c-wlb8t\" (UID: \"29437a55-4f91-4be7-b561-40aba478f597\") " pod="calico-apiserver/calico-apiserver-788bf5f94c-wlb8t" Jan 30 13:01:17.719316 kubelet[2555]: 
I0130 13:01:17.717668 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09ee933e-c443-4e73-95ea-87ea4c5a82d4-config-volume\") pod \"coredns-7db6d8ff4d-bs7p9\" (UID: \"09ee933e-c443-4e73-95ea-87ea4c5a82d4\") " pod="kube-system/coredns-7db6d8ff4d-bs7p9" Jan 30 13:01:17.719316 kubelet[2555]: I0130 13:01:17.717686 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b4c1231-0ce0-43a4-b55a-08522cf916ab-config-volume\") pod \"coredns-7db6d8ff4d-dj4pf\" (UID: \"7b4c1231-0ce0-43a4-b55a-08522cf916ab\") " pod="kube-system/coredns-7db6d8ff4d-dj4pf" Jan 30 13:01:17.719316 kubelet[2555]: I0130 13:01:17.717704 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkph8\" (UniqueName: \"kubernetes.io/projected/09ee933e-c443-4e73-95ea-87ea4c5a82d4-kube-api-access-fkph8\") pod \"coredns-7db6d8ff4d-bs7p9\" (UID: \"09ee933e-c443-4e73-95ea-87ea4c5a82d4\") " pod="kube-system/coredns-7db6d8ff4d-bs7p9" Jan 30 13:01:17.718201 systemd[1]: Created slice kubepods-burstable-pod09ee933e_c443_4e73_95ea_87ea4c5a82d4.slice - libcontainer container kubepods-burstable-pod09ee933e_c443_4e73_95ea_87ea4c5a82d4.slice. Jan 30 13:01:17.719808 kubelet[2555]: I0130 13:01:17.717737 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphd5\" (UniqueName: \"kubernetes.io/projected/29437a55-4f91-4be7-b561-40aba478f597-kube-api-access-sphd5\") pod \"calico-apiserver-788bf5f94c-wlb8t\" (UID: \"29437a55-4f91-4be7-b561-40aba478f597\") " pod="calico-apiserver/calico-apiserver-788bf5f94c-wlb8t" Jan 30 13:01:17.719808 kubelet[2555]: I0130 13:01:17.717756 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sth8\" (UniqueName: \"kubernetes.io/projected/7b4c1231-0ce0-43a4-b55a-08522cf916ab-kube-api-access-7sth8\") pod \"coredns-7db6d8ff4d-dj4pf\" (UID: \"7b4c1231-0ce0-43a4-b55a-08522cf916ab\") " pod="kube-system/coredns-7db6d8ff4d-dj4pf" Jan 30 13:01:17.719808 kubelet[2555]: I0130 13:01:17.717776 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9715b94b-ac1c-4ef7-884a-0cc0442ebce5-calico-apiserver-certs\") pod \"calico-apiserver-788bf5f94c-fbrwl\" (UID: \"9715b94b-ac1c-4ef7-884a-0cc0442ebce5\") " pod="calico-apiserver/calico-apiserver-788bf5f94c-fbrwl" Jan 30 13:01:17.719808 kubelet[2555]: I0130 13:01:17.717792 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prfxd\" (UniqueName: \"kubernetes.io/projected/130f9fb6-465d-46d9-b55d-70f0e7e76a1d-kube-api-access-prfxd\") pod \"calico-kube-controllers-6cdf574769-hlwjk\" (UID: \"130f9fb6-465d-46d9-b55d-70f0e7e76a1d\") " pod="calico-system/calico-kube-controllers-6cdf574769-hlwjk" Jan 30 13:01:17.719808 kubelet[2555]: I0130 13:01:17.717813 2555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/130f9fb6-465d-46d9-b55d-70f0e7e76a1d-tigera-ca-bundle\") pod \"calico-kube-controllers-6cdf574769-hlwjk\" (UID: \"130f9fb6-465d-46d9-b55d-70f0e7e76a1d\") " pod="calico-system/calico-kube-controllers-6cdf574769-hlwjk" Jan 30 
13:01:17.737543 systemd[1]: Created slice kubepods-besteffort-pod29437a55_4f91_4be7_b561_40aba478f597.slice - libcontainer container kubepods-besteffort-pod29437a55_4f91_4be7_b561_40aba478f597.slice. Jan 30 13:01:17.743562 systemd[1]: Created slice kubepods-burstable-pod7b4c1231_0ce0_43a4_b55a_08522cf916ab.slice - libcontainer container kubepods-burstable-pod7b4c1231_0ce0_43a4_b55a_08522cf916ab.slice. Jan 30 13:01:17.752494 systemd[1]: Created slice kubepods-besteffort-pod130f9fb6_465d_46d9_b55d_70f0e7e76a1d.slice - libcontainer container kubepods-besteffort-pod130f9fb6_465d_46d9_b55d_70f0e7e76a1d.slice. Jan 30 13:01:17.759868 systemd[1]: Created slice kubepods-besteffort-pod9715b94b_ac1c_4ef7_884a_0cc0442ebce5.slice - libcontainer container kubepods-besteffort-pod9715b94b_ac1c_4ef7_884a_0cc0442ebce5.slice. Jan 30 13:01:18.027258 kubelet[2555]: E0130 13:01:18.026661 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:18.027630 containerd[1449]: time="2025-01-30T13:01:18.027564525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs7p9,Uid:09ee933e-c443-4e73-95ea-87ea4c5a82d4,Namespace:kube-system,Attempt:0,}" Jan 30 13:01:18.041158 containerd[1449]: time="2025-01-30T13:01:18.041094198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-788bf5f94c-wlb8t,Uid:29437a55-4f91-4be7-b561-40aba478f597,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:01:18.047500 kubelet[2555]: E0130 13:01:18.047468 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:18.048049 containerd[1449]: time="2025-01-30T13:01:18.047997221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dj4pf,Uid:7b4c1231-0ce0-43a4-b55a-08522cf916ab,Namespace:kube-system,Attempt:0,}" Jan 30 13:01:18.060275 containerd[1449]: time="2025-01-30T13:01:18.057050377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cdf574769-hlwjk,Uid:130f9fb6-465d-46d9-b55d-70f0e7e76a1d,Namespace:calico-system,Attempt:0,}" Jan 30 13:01:18.063550 containerd[1449]: time="2025-01-30T13:01:18.063500900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-788bf5f94c-fbrwl,Uid:9715b94b-ac1c-4ef7-884a-0cc0442ebce5,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:01:18.111458 systemd[1]: Created slice kubepods-besteffort-podaa3d79e1_a896_409f_b82d_b2c0db403513.slice - libcontainer container kubepods-besteffort-podaa3d79e1_a896_409f_b82d_b2c0db403513.slice. 
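The slice names systemd logs here follow directly from the pod's QoS class and UID: dashes in the UID are escaped to underscores and the unit is parented under the kubepods-<qos> slice. A sketch of that naming pattern, inferred from the "Created slice" entries above rather than from the kubelet's actual helper:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the "Created slice" entries:
// dashes in the pod UID become underscores (systemd unit-name escaping) and
// the unit sits under the QoS-class slice. Inferred from the log, not kubelet code.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "09ee933e-c443-4e73-95ea-87ea4c5a82d4"))
	fmt.Println(sliceName("besteffort", "aa3d79e1-a896-409f-b82d-b2c0db403513"))
	// kubepods-burstable-pod09ee933e_c443_4e73_95ea_87ea4c5a82d4.slice
	// kubepods-besteffort-podaa3d79e1_a896_409f_b82d_b2c0db403513.slice
}
```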
Jan 30 13:01:18.116078 containerd[1449]: time="2025-01-30T13:01:18.116020802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-95cpz,Uid:aa3d79e1-a896-409f-b82d-b2c0db403513,Namespace:calico-system,Attempt:0,}" Jan 30 13:01:18.244351 kubelet[2555]: E0130 13:01:18.240832 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:18.245248 containerd[1449]: time="2025-01-30T13:01:18.242501065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:01:18.576604 containerd[1449]: time="2025-01-30T13:01:18.576539625Z" level=error msg="Failed to destroy network for sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.580976 containerd[1449]: time="2025-01-30T13:01:18.580912816Z" level=error msg="encountered an error cleaning up failed sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.581121 containerd[1449]: time="2025-01-30T13:01:18.580999860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-95cpz,Uid:aa3d79e1-a896-409f-b82d-b2c0db403513,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.585711 containerd[1449]: time="2025-01-30T13:01:18.585655424Z" level=error msg="Failed to destroy network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.586441 containerd[1449]: time="2025-01-30T13:01:18.586262011Z" level=error msg="encountered an error cleaning up failed sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.586441 containerd[1449]: time="2025-01-30T13:01:18.586326214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-788bf5f94c-wlb8t,Uid:29437a55-4f91-4be7-b561-40aba478f597,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.589056 kubelet[2555]: E0130 13:01:18.588999 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.589332 kubelet[2555]: E0130 13:01:18.589075 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.589332 kubelet[2555]: E0130 13:01:18.589140 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-95cpz" Jan 30 13:01:18.589332 kubelet[2555]: E0130 13:01:18.589171 2555 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-95cpz" Jan 30 13:01:18.589332 kubelet[2555]: E0130 13:01:18.589096 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-788bf5f94c-wlb8t" Jan 30 13:01:18.589560 kubelet[2555]: E0130 13:01:18.589220 2555 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-788bf5f94c-wlb8t" Jan 30 13:01:18.589560 kubelet[2555]: E0130 13:01:18.589228 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-95cpz_calico-system(aa3d79e1-a896-409f-b82d-b2c0db403513)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-95cpz_calico-system(aa3d79e1-a896-409f-b82d-b2c0db403513)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-95cpz" podUID="aa3d79e1-a896-409f-b82d-b2c0db403513" Jan 30 13:01:18.589560 
kubelet[2555]: E0130 13:01:18.589260 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-788bf5f94c-wlb8t_calico-apiserver(29437a55-4f91-4be7-b561-40aba478f597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-788bf5f94c-wlb8t_calico-apiserver(29437a55-4f91-4be7-b561-40aba478f597)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-788bf5f94c-wlb8t" podUID="29437a55-4f91-4be7-b561-40aba478f597" Jan 30 13:01:18.589736 containerd[1449]: time="2025-01-30T13:01:18.589400668Z" level=error msg="Failed to destroy network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.590193 containerd[1449]: time="2025-01-30T13:01:18.590147701Z" level=error msg="encountered an error cleaning up failed sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.590239 containerd[1449]: time="2025-01-30T13:01:18.590220904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dj4pf,Uid:7b4c1231-0ce0-43a4-b55a-08522cf916ab,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.590509 kubelet[2555]: E0130 13:01:18.590466 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.590557 kubelet[2555]: E0130 13:01:18.590533 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dj4pf" Jan 30 13:01:18.590594 kubelet[2555]: E0130 13:01:18.590553 2555 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dj4pf" Jan 30 13:01:18.590617 kubelet[2555]: E0130 13:01:18.590592 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dj4pf_kube-system(7b4c1231-0ce0-43a4-b55a-08522cf916ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dj4pf_kube-system(7b4c1231-0ce0-43a4-b55a-08522cf916ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dj4pf" podUID="7b4c1231-0ce0-43a4-b55a-08522cf916ab" Jan 30 13:01:18.593622 containerd[1449]: time="2025-01-30T13:01:18.593580611Z" level=error msg="Failed to destroy network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.593950 containerd[1449]: time="2025-01-30T13:01:18.593920986Z" level=error msg="encountered an error cleaning up failed sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.594007 containerd[1449]: time="2025-01-30T13:01:18.593976509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-788bf5f94c-fbrwl,Uid:9715b94b-ac1c-4ef7-884a-0cc0442ebce5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.594523 containerd[1449]: time="2025-01-30T13:01:18.594351605Z" level=error msg="Failed to destroy network for sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.594882 containerd[1449]: time="2025-01-30T13:01:18.594849147Z" level=error msg="encountered an error cleaning up failed sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.595012 kubelet[2555]: E0130 13:01:18.594962 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
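Every one of these sandbox failures bottoms out in the same precondition: the Calico CNI plugin reads /var/lib/calico/nodename, a file the calico/node container writes once it is up, so until that container is running every sandbox add/delete fails with the stat error seen here. A minimal sketch of that gate, illustrative only and not Calico's actual implementation:

```go
package main

import (
	"fmt"
	"os"
)

// nodename models the gate behind the repeated sandbox errors: the file is
// created by the calico/node container, so CNI calls fail until it exists.
func nodename() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return string(b), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println(err) // on a host without calico/node running: "... no such file or directory: check that ..."
		return
	}
	fmt.Println("node name:", name)
}
```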
Jan 30 13:01:18.595124 kubelet[2555]: E0130 13:01:18.595030 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-788bf5f94c-fbrwl" Jan 30 13:01:18.595124 kubelet[2555]: E0130 13:01:18.595051 2555 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-788bf5f94c-fbrwl" Jan 30 13:01:18.595124 kubelet[2555]: E0130 13:01:18.595096 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-788bf5f94c-fbrwl_calico-apiserver(9715b94b-ac1c-4ef7-884a-0cc0442ebce5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-788bf5f94c-fbrwl_calico-apiserver(9715b94b-ac1c-4ef7-884a-0cc0442ebce5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-788bf5f94c-fbrwl" podUID="9715b94b-ac1c-4ef7-884a-0cc0442ebce5" Jan 30 13:01:18.595257 containerd[1449]: time="2025-01-30T13:01:18.594994713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs7p9,Uid:09ee933e-c443-4e73-95ea-87ea4c5a82d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.595466 kubelet[2555]: E0130 13:01:18.595443 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.595537 kubelet[2555]: E0130 13:01:18.595472 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs7p9" Jan 30 13:01:18.595537 kubelet[2555]: E0130 13:01:18.595487 2555 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs7p9" Jan 30 13:01:18.595537 kubelet[2555]: E0130 13:01:18.595515 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bs7p9_kube-system(09ee933e-c443-4e73-95ea-87ea4c5a82d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bs7p9_kube-system(09ee933e-c443-4e73-95ea-87ea4c5a82d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bs7p9" podUID="09ee933e-c443-4e73-95ea-87ea4c5a82d4" Jan 30 13:01:18.596463 containerd[1449]: time="2025-01-30T13:01:18.596414376Z" level=error msg="Failed to destroy network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.596831 containerd[1449]: time="2025-01-30T13:01:18.596799553Z" level=error msg="encountered an error cleaning up failed sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.596925 containerd[1449]: time="2025-01-30T13:01:18.596860915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cdf574769-hlwjk,Uid:130f9fb6-465d-46d9-b55d-70f0e7e76a1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.597082 kubelet[2555]: E0130 13:01:18.597047 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:18.597130 kubelet[2555]: E0130 13:01:18.597096 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cdf574769-hlwjk" Jan 30 13:01:18.597130 kubelet[2555]: E0130 13:01:18.597120 2555 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cdf574769-hlwjk" Jan 30 13:01:18.597199 kubelet[2555]: E0130 13:01:18.597148 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cdf574769-hlwjk_calico-system(130f9fb6-465d-46d9-b55d-70f0e7e76a1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cdf574769-hlwjk_calico-system(130f9fb6-465d-46d9-b55d-70f0e7e76a1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cdf574769-hlwjk" podUID="130f9fb6-465d-46d9-b55d-70f0e7e76a1d" Jan 30 13:01:18.956456 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8-shm.mount: Deactivated successfully. Jan 30 13:01:18.956558 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73-shm.mount: Deactivated successfully. Jan 30 13:01:19.246967 kubelet[2555]: I0130 13:01:19.246844 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:19.249574 containerd[1449]: time="2025-01-30T13:01:19.249520987Z" level=info msg="StopPodSandbox for \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\"" Jan 30 13:01:19.249892 containerd[1449]: time="2025-01-30T13:01:19.249759917Z" level=info msg="Ensure that sandbox 9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68 in task-service has been cleanup successfully" Jan 30 13:01:19.255543 kubelet[2555]: I0130 13:01:19.255494 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:19.256363 containerd[1449]: time="2025-01-30T13:01:19.256326436Z" level=info msg="StopPodSandbox for \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\"" Jan 30 13:01:19.256763 containerd[1449]: time="2025-01-30T13:01:19.256718653Z" level=info msg="Ensure that sandbox 21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73 in task-service has been cleanup successfully" Jan 30 13:01:19.258901 kubelet[2555]: I0130 13:01:19.258867 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:19.260169 containerd[1449]: time="2025-01-30T13:01:19.260123278Z" level=info msg="StopPodSandbox for \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\"" Jan 30 13:01:19.260737 containerd[1449]: time="2025-01-30T13:01:19.260517494Z" level=info msg="Ensure that sandbox 19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2 in task-service has been cleanup successfully" Jan 30 13:01:19.261237 
kubelet[2555]: I0130 13:01:19.261186 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:19.262111 containerd[1449]: time="2025-01-30T13:01:19.262074560Z" level=info msg="StopPodSandbox for \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\"" Jan 30 13:01:19.262285 containerd[1449]: time="2025-01-30T13:01:19.262262088Z" level=info msg="Ensure that sandbox 200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1 in task-service has been cleanup successfully" Jan 30 13:01:19.264653 kubelet[2555]: I0130 13:01:19.264616 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:19.266296 containerd[1449]: time="2025-01-30T13:01:19.265889963Z" level=info msg="StopPodSandbox for \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\"" Jan 30 13:01:19.266296 containerd[1449]: time="2025-01-30T13:01:19.266071290Z" level=info msg="Ensure that sandbox abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8 in task-service has been cleanup successfully" Jan 30 13:01:19.269191 kubelet[2555]: I0130 13:01:19.269151 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:19.278857 containerd[1449]: time="2025-01-30T13:01:19.278818672Z" level=info msg="StopPodSandbox for \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\"" Jan 30 13:01:19.279757 containerd[1449]: time="2025-01-30T13:01:19.279724190Z" level=info msg="Ensure that sandbox ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce in task-service has been cleanup successfully" Jan 30 13:01:19.311174 containerd[1449]: time="2025-01-30T13:01:19.311111204Z" level=error msg="StopPodSandbox for \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\" failed" error="failed to destroy network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:19.311564 kubelet[2555]: E0130 13:01:19.311515 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:19.311657 kubelet[2555]: E0130 13:01:19.311592 2555 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1"} Jan 30 13:01:19.311692 kubelet[2555]: E0130 13:01:19.311651 2555 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b4c1231-0ce0-43a4-b55a-08522cf916ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:01:19.311754 kubelet[2555]: E0130 13:01:19.311691 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b4c1231-0ce0-43a4-b55a-08522cf916ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dj4pf" podUID="7b4c1231-0ce0-43a4-b55a-08522cf916ab" Jan 30 13:01:19.318774 containerd[1449]: time="2025-01-30T13:01:19.318549560Z" level=error msg="StopPodSandbox for \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\" failed" error="failed to destroy network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:19.318936 kubelet[2555]: E0130 13:01:19.318803 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:19.318936 kubelet[2555]: E0130 13:01:19.318864 2555 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68"} Jan 30 13:01:19.318936 kubelet[2555]: E0130 13:01:19.318904 2555 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"130f9fb6-465d-46d9-b55d-70f0e7e76a1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:01:19.318936 kubelet[2555]: E0130 13:01:19.318928 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"130f9fb6-465d-46d9-b55d-70f0e7e76a1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cdf574769-hlwjk" podUID="130f9fb6-465d-46d9-b55d-70f0e7e76a1d" Jan 30 13:01:19.320941 containerd[1449]: time="2025-01-30T13:01:19.320121627Z" level=error msg="StopPodSandbox for \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\" failed" error="failed to destroy network for sandbox 
\"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:19.321455 kubelet[2555]: E0130 13:01:19.320487 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:19.321543 kubelet[2555]: E0130 13:01:19.321458 2555 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2"} Jan 30 13:01:19.321543 kubelet[2555]: E0130 13:01:19.321523 2555 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa3d79e1-a896-409f-b82d-b2c0db403513\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:01:19.321627 kubelet[2555]: E0130 13:01:19.321547 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa3d79e1-a896-409f-b82d-b2c0db403513\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-95cpz" podUID="aa3d79e1-a896-409f-b82d-b2c0db403513" Jan 30 13:01:19.327175 containerd[1449]: time="2025-01-30T13:01:19.327123484Z" level=error msg="StopPodSandbox for \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\" failed" error="failed to destroy network for sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:19.327698 kubelet[2555]: E0130 13:01:19.327646 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:19.327784 kubelet[2555]: E0130 13:01:19.327702 2555 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73"} Jan 30 13:01:19.327784 kubelet[2555]: E0130 13:01:19.327761 2555 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09ee933e-c443-4e73-95ea-87ea4c5a82d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:01:19.327872 kubelet[2555]: E0130 13:01:19.327789 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09ee933e-c443-4e73-95ea-87ea4c5a82d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bs7p9" podUID="09ee933e-c443-4e73-95ea-87ea4c5a82d4" Jan 30 13:01:19.332710 containerd[1449]: time="2025-01-30T13:01:19.332658440Z" level=error msg="StopPodSandbox for \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\" failed" error="failed to destroy network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:19.333159 kubelet[2555]: E0130 13:01:19.333114 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:19.333257 kubelet[2555]: E0130 13:01:19.333173 2555 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8"} Jan 30 13:01:19.333257 kubelet[2555]: E0130 13:01:19.333218 2555 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29437a55-4f91-4be7-b561-40aba478f597\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:01:19.333257 kubelet[2555]: E0130 13:01:19.333247 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29437a55-4f91-4be7-b561-40aba478f597\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-788bf5f94c-wlb8t" podUID="29437a55-4f91-4be7-b561-40aba478f597" Jan 30 13:01:19.342560 containerd[1449]: time="2025-01-30T13:01:19.342485857Z" level=error msg="StopPodSandbox for \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\" failed" error="failed to destroy network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:01:19.343093 kubelet[2555]: E0130 13:01:19.342757 2555 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:19.343093 kubelet[2555]: E0130 13:01:19.342803 2555 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce"} Jan 30 13:01:19.343093 kubelet[2555]: E0130 13:01:19.342849 2555 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9715b94b-ac1c-4ef7-884a-0cc0442ebce5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:01:19.343093 kubelet[2555]: E0130 13:01:19.342873 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9715b94b-ac1c-4ef7-884a-0cc0442ebce5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-788bf5f94c-fbrwl" podUID="9715b94b-ac1c-4ef7-884a-0cc0442ebce5" Jan 30 13:01:21.402499 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:41686.service - OpenSSH per-connection server daemon (10.0.0.1:41686). Jan 30 13:01:21.461476 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 41686 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:21.462911 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:21.470656 systemd-logind[1427]: New session 9 of user core. Jan 30 13:01:21.477586 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:01:21.630241 sshd[3723]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:21.635231 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:41686.service: Deactivated successfully. Jan 30 13:01:21.639219 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:01:21.642154 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit. 
Jan 30 13:01:21.643934 systemd-logind[1427]: Removed session 9. Jan 30 13:01:22.249200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317510216.mount: Deactivated successfully. Jan 30 13:01:22.474972 containerd[1449]: time="2025-01-30T13:01:22.474897731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 13:01:22.482250 containerd[1449]: time="2025-01-30T13:01:22.482191295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.239555185s" Jan 30 13:01:22.482250 containerd[1449]: time="2025-01-30T13:01:22.482245337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 13:01:22.487005 containerd[1449]: time="2025-01-30T13:01:22.486758913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:22.487683 containerd[1449]: time="2025-01-30T13:01:22.487630067Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:22.488319 containerd[1449]: time="2025-01-30T13:01:22.488276092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:22.495107 containerd[1449]: time="2025-01-30T13:01:22.495063837Z" level=info msg="CreateContainer within sandbox \"17dc56c2d765aadd268cf62f2d34713041d8bd05ee5a46c0b8863c7d3882ded1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:01:22.528428 containerd[1449]: time="2025-01-30T13:01:22.528286611Z" level=info msg="CreateContainer within sandbox \"17dc56c2d765aadd268cf62f2d34713041d8bd05ee5a46c0b8863c7d3882ded1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8677a198f60badb14e40ecd31bc33280585b0a7b08e2fc37342e5c346ea00bde\"" Jan 30 13:01:22.529337 containerd[1449]: time="2025-01-30T13:01:22.529290010Z" level=info msg="StartContainer for \"8677a198f60badb14e40ecd31bc33280585b0a7b08e2fc37342e5c346ea00bde\"" Jan 30 13:01:22.587616 systemd[1]: Started cri-containerd-8677a198f60badb14e40ecd31bc33280585b0a7b08e2fc37342e5c346ea00bde.scope - libcontainer container 8677a198f60badb14e40ecd31bc33280585b0a7b08e2fc37342e5c346ea00bde. Jan 30 13:01:22.636743 containerd[1449]: time="2025-01-30T13:01:22.636697636Z" level=info msg="StartContainer for \"8677a198f60badb14e40ecd31bc33280585b0a7b08e2fc37342e5c346ea00bde\" returns successfully" Jan 30 13:01:22.854987 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:01:22.855115 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 30 13:01:23.280238 kubelet[2555]: E0130 13:01:23.280200 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:23.308689 kubelet[2555]: I0130 13:01:23.308614 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fcd9f" podStartSLOduration=1.825436629 podStartE2EDuration="15.308593859s" podCreationTimestamp="2025-01-30 13:01:08 +0000 UTC" firstStartedPulling="2025-01-30 13:01:08.999890618 +0000 UTC m=+25.047930884" lastFinishedPulling="2025-01-30 13:01:22.483047848 +0000 UTC m=+38.531088114" observedRunningTime="2025-01-30 13:01:23.304838756 +0000 UTC m=+39.352879102" watchObservedRunningTime="2025-01-30 13:01:23.308593859 +0000 UTC m=+39.356634125" Jan 30 13:01:24.281573 kubelet[2555]: I0130 13:01:24.280619 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:01:24.281573 kubelet[2555]: E0130 13:01:24.281364 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:26.644677 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:40022.service - OpenSSH per-connection server daemon (10.0.0.1:40022). Jan 30 13:01:26.701551 sshd[3958]: Accepted publickey for core from 10.0.0.1 port 40022 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:26.704092 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:26.709147 systemd-logind[1427]: New session 10 of user core. Jan 30 13:01:26.715618 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:01:26.858083 sshd[3958]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:26.872882 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:40022.service: Deactivated successfully. Jan 30 13:01:26.876966 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:01:26.880557 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:01:26.891488 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:40024.service - OpenSSH per-connection server daemon (10.0.0.1:40024). Jan 30 13:01:26.893088 systemd-logind[1427]: Removed session 10. Jan 30 13:01:26.946032 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 40024 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:26.948402 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:26.957926 systemd-logind[1427]: New session 11 of user core. Jan 30 13:01:26.966232 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:01:27.139654 sshd[3977]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:27.156281 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:40024.service: Deactivated successfully. Jan 30 13:01:27.161894 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:01:27.167478 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:01:27.179173 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:40040.service - OpenSSH per-connection server daemon (10.0.0.1:40040). Jan 30 13:01:27.184321 systemd-logind[1427]: Removed session 11. 
Jan 30 13:01:27.227790 sshd[3989]: Accepted publickey for core from 10.0.0.1 port 40040 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:27.229612 sshd[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:27.236121 systemd-logind[1427]: New session 12 of user core. Jan 30 13:01:27.241630 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:01:27.382569 sshd[3989]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:27.387486 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:40040.service: Deactivated successfully. Jan 30 13:01:27.391331 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:01:27.392486 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:01:27.394031 systemd-logind[1427]: Removed session 12. Jan 30 13:01:30.100915 containerd[1449]: time="2025-01-30T13:01:30.100855522Z" level=info msg="StopPodSandbox for \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\"" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.207 [INFO][4098] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.208 [INFO][4098] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" iface="eth0" netns="/var/run/netns/cni-f178c427-0cb0-8801-4a81-355a8b51a4a1" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.208 [INFO][4098] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" iface="eth0" netns="/var/run/netns/cni-f178c427-0cb0-8801-4a81-355a8b51a4a1" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.209 [INFO][4098] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" iface="eth0" netns="/var/run/netns/cni-f178c427-0cb0-8801-4a81-355a8b51a4a1" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.209 [INFO][4098] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.209 [INFO][4098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.314 [INFO][4106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" HandleID="k8s-pod-network.abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.314 [INFO][4106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.314 [INFO][4106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.323 [WARNING][4106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" HandleID="k8s-pod-network.abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.324 [INFO][4106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" HandleID="k8s-pod-network.abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.325 [INFO][4106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:30.330212 containerd[1449]: 2025-01-30 13:01:30.327 [INFO][4098] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:30.331737 containerd[1449]: time="2025-01-30T13:01:30.330363058Z" level=info msg="TearDown network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\" successfully" Jan 30 13:01:30.331737 containerd[1449]: time="2025-01-30T13:01:30.330414540Z" level=info msg="StopPodSandbox for \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\" returns successfully" Jan 30 13:01:30.331737 containerd[1449]: time="2025-01-30T13:01:30.331098482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-788bf5f94c-wlb8t,Uid:29437a55-4f91-4be7-b561-40aba478f597,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:01:30.332692 systemd[1]: run-netns-cni\x2df178c427\x2d0cb0\x2d8801\x2d4a81\x2d355a8b51a4a1.mount: Deactivated successfully. Jan 30 13:01:30.476829 systemd-networkd[1376]: cali32d163507f5: Link UP Jan 30 13:01:30.477034 systemd-networkd[1376]: cali32d163507f5: Gained carrier Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.373 [INFO][4116] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.391 [INFO][4116] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0 calico-apiserver-788bf5f94c- calico-apiserver 29437a55-4f91-4be7-b561-40aba478f597 923 0 2025-01-30 13:01:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:788bf5f94c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-788bf5f94c-wlb8t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali32d163507f5 [] []}} ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-wlb8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.391 [INFO][4116] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-wlb8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.423 [INFO][4130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" HandleID="k8s-pod-network.6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.434 [INFO][4130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" HandleID="k8s-pod-network.6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000295540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-788bf5f94c-wlb8t", "timestamp":"2025-01-30 13:01:30.423838078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.434 [INFO][4130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.434 [INFO][4130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.434 [INFO][4130] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.436 [INFO][4130] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" host="localhost" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.443 [INFO][4130] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.447 [INFO][4130] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.449 [INFO][4130] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.451 [INFO][4130] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.451 [INFO][4130] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" host="localhost" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.452 [INFO][4130] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1 Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.456 [INFO][4130] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" host="localhost" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.463 [INFO][4130] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" host="localhost" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.463 [INFO][4130] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.129/26] handle="k8s-pod-network.6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" host="localhost" Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.463 [INFO][4130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:30.499213 containerd[1449]: 2025-01-30 13:01:30.463 [INFO][4130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" HandleID="k8s-pod-network.6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.499988 containerd[1449]: 2025-01-30 13:01:30.466 [INFO][4116] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-wlb8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0", GenerateName:"calico-apiserver-788bf5f94c-", Namespace:"calico-apiserver", SelfLink:"", UID:"29437a55-4f91-4be7-b561-40aba478f597", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"788bf5f94c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-788bf5f94c-wlb8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32d163507f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:30.499988 containerd[1449]: 2025-01-30 13:01:30.467 [INFO][4116] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-wlb8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.499988 containerd[1449]: 2025-01-30 13:01:30.467 [INFO][4116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32d163507f5 ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-wlb8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.499988 containerd[1449]: 2025-01-30 13:01:30.477 [INFO][4116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-wlb8t" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.499988 containerd[1449]: 2025-01-30 13:01:30.478 [INFO][4116] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-wlb8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0", GenerateName:"calico-apiserver-788bf5f94c-", Namespace:"calico-apiserver", SelfLink:"", UID:"29437a55-4f91-4be7-b561-40aba478f597", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"788bf5f94c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1", Pod:"calico-apiserver-788bf5f94c-wlb8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32d163507f5", MAC:"4e:e3:ab:77:93:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:30.499988 containerd[1449]: 2025-01-30 13:01:30.495 [INFO][4116] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-wlb8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:30.526008 containerd[1449]: time="2025-01-30T13:01:30.525825694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:01:30.526008 containerd[1449]: time="2025-01-30T13:01:30.525949578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:01:30.526008 containerd[1449]: time="2025-01-30T13:01:30.525967498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:30.526325 containerd[1449]: time="2025-01-30T13:01:30.526243307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:30.562285 systemd[1]: Started cri-containerd-6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1.scope - libcontainer container 6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1. 
Jan 30 13:01:30.574215 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:01:30.599457 containerd[1449]: time="2025-01-30T13:01:30.599412631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-788bf5f94c-wlb8t,Uid:29437a55-4f91-4be7-b561-40aba478f597,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1\"" Jan 30 13:01:30.600992 containerd[1449]: time="2025-01-30T13:01:30.600960321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:01:31.099717 containerd[1449]: time="2025-01-30T13:01:31.099677134Z" level=info msg="StopPodSandbox for \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\"" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.145 [INFO][4228] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.146 [INFO][4228] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" iface="eth0" netns="/var/run/netns/cni-a79b118e-c147-df8f-421c-c4cda60de0a5" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.146 [INFO][4228] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" iface="eth0" netns="/var/run/netns/cni-a79b118e-c147-df8f-421c-c4cda60de0a5" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.146 [INFO][4228] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" iface="eth0" netns="/var/run/netns/cni-a79b118e-c147-df8f-421c-c4cda60de0a5" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.146 [INFO][4228] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.146 [INFO][4228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.168 [INFO][4235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" HandleID="k8s-pod-network.21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.169 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.169 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.180 [WARNING][4235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" HandleID="k8s-pod-network.21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.180 [INFO][4235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" HandleID="k8s-pod-network.21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.184 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:31.188088 containerd[1449]: 2025-01-30 13:01:31.186 [INFO][4228] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:31.189062 containerd[1449]: time="2025-01-30T13:01:31.188216261Z" level=info msg="TearDown network for sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\" successfully" Jan 30 13:01:31.189062 containerd[1449]: time="2025-01-30T13:01:31.188243422Z" level=info msg="StopPodSandbox for \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\" returns successfully" Jan 30 13:01:31.189123 kubelet[2555]: E0130 13:01:31.188595 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:31.189768 containerd[1449]: time="2025-01-30T13:01:31.189729149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs7p9,Uid:09ee933e-c443-4e73-95ea-87ea4c5a82d4,Namespace:kube-system,Attempt:1,}" Jan 30 13:01:31.309545 systemd-networkd[1376]: cali66d40de56a1: Link UP Jan 30 13:01:31.310018 systemd-networkd[1376]: cali66d40de56a1: Gained carrier Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.222 [INFO][4244] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.236 [INFO][4244] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0 coredns-7db6d8ff4d- kube-system 09ee933e-c443-4e73-95ea-87ea4c5a82d4 930 0 2025-01-30 13:00:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-bs7p9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali66d40de56a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs7p9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bs7p9-" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.236 [INFO][4244] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs7p9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.262 [INFO][4258] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" HandleID="k8s-pod-network.934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.273 [INFO][4258] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" HandleID="k8s-pod-network.934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002930f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-bs7p9", "timestamp":"2025-01-30 13:01:31.262835506 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.273 [INFO][4258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.273 [INFO][4258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.273 [INFO][4258] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.278 [INFO][4258] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" host="localhost" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.282 [INFO][4258] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.289 [INFO][4258] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.291 [INFO][4258] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.293 [INFO][4258] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.293 [INFO][4258] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" host="localhost" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.295 [INFO][4258] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.299 [INFO][4258] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" host="localhost" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.304 [INFO][4258] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" host="localhost" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.304 [INFO][4258] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" host="localhost" Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.304 [INFO][4258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:31.325231 containerd[1449]: 2025-01-30 13:01:31.304 [INFO][4258] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" HandleID="k8s-pod-network.934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.325856 containerd[1449]: 2025-01-30 13:01:31.306 [INFO][4244] cni-plugin/k8s.go 386: Populated endpoint ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs7p9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"09ee933e-c443-4e73-95ea-87ea4c5a82d4", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 0, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-bs7p9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d40de56a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:31.325856 containerd[1449]: 2025-01-30 13:01:31.306 [INFO][4244] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs7p9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.325856 containerd[1449]: 2025-01-30 13:01:31.306 [INFO][4244] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66d40de56a1 ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs7p9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.325856 containerd[1449]: 2025-01-30 13:01:31.309 [INFO][4244] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs7p9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.325856 containerd[1449]: 2025-01-30 13:01:31.310 [INFO][4244] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs7p9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"09ee933e-c443-4e73-95ea-87ea4c5a82d4", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 0, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a", Pod:"coredns-7db6d8ff4d-bs7p9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d40de56a1", MAC:"46:c7:bc:20:1f:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:31.325856 containerd[1449]: 2025-01-30 13:01:31.321 [INFO][4244] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs7p9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:31.334706 systemd[1]: run-netns-cni\x2da79b118e\x2dc147\x2ddf8f\x2d421c\x2dc4cda60de0a5.mount: Deactivated successfully. Jan 30 13:01:31.348899 containerd[1449]: time="2025-01-30T13:01:31.348558583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:01:31.348899 containerd[1449]: time="2025-01-30T13:01:31.348899594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:01:31.348899 containerd[1449]: time="2025-01-30T13:01:31.348917394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:31.349127 containerd[1449]: time="2025-01-30T13:01:31.349018638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:31.372563 systemd[1]: Started cri-containerd-934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a.scope - libcontainer container 934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a. Jan 30 13:01:31.382535 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:01:31.406976 containerd[1449]: time="2025-01-30T13:01:31.406931713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs7p9,Uid:09ee933e-c443-4e73-95ea-87ea4c5a82d4,Namespace:kube-system,Attempt:1,} returns sandbox id \"934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a\"" Jan 30 13:01:31.407901 kubelet[2555]: E0130 13:01:31.407871 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:31.410692 containerd[1449]: time="2025-01-30T13:01:31.410641391Z" level=info msg="CreateContainer within sandbox \"934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:01:31.426794 containerd[1449]: time="2025-01-30T13:01:31.426744301Z" level=info msg="CreateContainer within sandbox \"934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"811a0aa89a3cef1263461a4a605a48d58b88b53a68e5796de4a1601d87017b6c\"" Jan 30 13:01:31.427280 containerd[1449]: time="2025-01-30T13:01:31.427255477Z" level=info msg="StartContainer for \"811a0aa89a3cef1263461a4a605a48d58b88b53a68e5796de4a1601d87017b6c\"" Jan 30 13:01:31.458565 systemd[1]: Started cri-containerd-811a0aa89a3cef1263461a4a605a48d58b88b53a68e5796de4a1601d87017b6c.scope - libcontainer container 811a0aa89a3cef1263461a4a605a48d58b88b53a68e5796de4a1601d87017b6c. Jan 30 13:01:31.490893 containerd[1449]: time="2025-01-30T13:01:31.490841213Z" level=info msg="StartContainer for \"811a0aa89a3cef1263461a4a605a48d58b88b53a68e5796de4a1601d87017b6c\" returns successfully" Jan 30 13:01:31.593546 systemd-networkd[1376]: cali32d163507f5: Gained IPv6LL Jan 30 13:01:32.100198 containerd[1449]: time="2025-01-30T13:01:32.100151788Z" level=info msg="StopPodSandbox for \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\"" Jan 30 13:01:32.100488 containerd[1449]: time="2025-01-30T13:01:32.100295793Z" level=info msg="StopPodSandbox for \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\"" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.172 [INFO][4411] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.172 [INFO][4411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" iface="eth0" netns="/var/run/netns/cni-3c4afe3d-0a74-b95b-9c71-d4780db6f39d" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.172 [INFO][4411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" iface="eth0" netns="/var/run/netns/cni-3c4afe3d-0a74-b95b-9c71-d4780db6f39d" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.172 [INFO][4411] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" iface="eth0" netns="/var/run/netns/cni-3c4afe3d-0a74-b95b-9c71-d4780db6f39d" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.172 [INFO][4411] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.172 [INFO][4411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.196 [INFO][4426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" HandleID="k8s-pod-network.200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.196 [INFO][4426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.196 [INFO][4426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.215 [WARNING][4426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" HandleID="k8s-pod-network.200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.216 [INFO][4426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" HandleID="k8s-pod-network.200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.220 [INFO][4426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:32.225513 containerd[1449]: 2025-01-30 13:01:32.223 [INFO][4411] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:32.226248 containerd[1449]: time="2025-01-30T13:01:32.225668134Z" level=info msg="TearDown network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\" successfully" Jan 30 13:01:32.226248 containerd[1449]: time="2025-01-30T13:01:32.225713976Z" level=info msg="StopPodSandbox for \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\" returns successfully" Jan 30 13:01:32.229409 kubelet[2555]: E0130 13:01:32.226676 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:32.230171 containerd[1449]: time="2025-01-30T13:01:32.230120073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dj4pf,Uid:7b4c1231-0ce0-43a4-b55a-08522cf916ab,Namespace:kube-system,Attempt:1,}" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.215 [INFO][4412] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.215 [INFO][4412] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" iface="eth0" netns="/var/run/netns/cni-c7b5a762-c1ea-4a5c-0b1a-0eca35ddf9be" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.215 [INFO][4412] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" iface="eth0" netns="/var/run/netns/cni-c7b5a762-c1ea-4a5c-0b1a-0eca35ddf9be" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.215 [INFO][4412] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" iface="eth0" netns="/var/run/netns/cni-c7b5a762-c1ea-4a5c-0b1a-0eca35ddf9be" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.215 [INFO][4412] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.215 [INFO][4412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.239 [INFO][4434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" HandleID="k8s-pod-network.ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.239 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.239 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.251 [WARNING][4434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" HandleID="k8s-pod-network.ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.251 [INFO][4434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" HandleID="k8s-pod-network.ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.255 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:32.260547 containerd[1449]: 2025-01-30 13:01:32.256 [INFO][4412] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:32.262039 containerd[1449]: time="2025-01-30T13:01:32.261992585Z" level=info msg="TearDown network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\" successfully" Jan 30 13:01:32.262039 containerd[1449]: time="2025-01-30T13:01:32.262027386Z" level=info msg="StopPodSandbox for \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\" returns successfully" Jan 30 13:01:32.263932 containerd[1449]: time="2025-01-30T13:01:32.263891764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-788bf5f94c-fbrwl,Uid:9715b94b-ac1c-4ef7-884a-0cc0442ebce5,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:01:32.304229 kubelet[2555]: E0130 13:01:32.304180 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:32.327910 kubelet[2555]: I0130 13:01:32.327850 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bs7p9" podStartSLOduration=35.327830553 podStartE2EDuration="35.327830553s" podCreationTimestamp="2025-01-30 13:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:01:32.323843029 +0000 UTC m=+48.371883295" watchObservedRunningTime="2025-01-30 13:01:32.327830553 +0000 UTC m=+48.375870779" Jan 30 13:01:32.337359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193855931.mount: Deactivated successfully. Jan 30 13:01:32.337556 systemd[1]: run-netns-cni\x2dc7b5a762\x2dc1ea\x2d4a5c\x2d0b1a\x2d0eca35ddf9be.mount: Deactivated successfully. Jan 30 13:01:32.337665 systemd[1]: run-netns-cni\x2d3c4afe3d\x2d0a74\x2db95b\x2d9c71\x2dd4780db6f39d.mount: Deactivated successfully. Jan 30 13:01:32.405796 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:40048.service - OpenSSH per-connection server daemon (10.0.0.1:40048). 
Jan 30 13:01:32.527545 systemd-networkd[1376]: cali806d440a8c7: Link UP Jan 30 13:01:32.529043 systemd-networkd[1376]: cali806d440a8c7: Gained carrier Jan 30 13:01:32.550595 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 40048 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.306 [INFO][4457] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.326 [INFO][4457] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0 calico-apiserver-788bf5f94c- calico-apiserver 9715b94b-ac1c-4ef7-884a-0cc0442ebce5 945 0 2025-01-30 13:01:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:788bf5f94c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-788bf5f94c-fbrwl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali806d440a8c7 [] []}} ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-fbrwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.326 [INFO][4457] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-fbrwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.396 [INFO][4474] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" HandleID="k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.431 [INFO][4474] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" HandleID="k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a9560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-788bf5f94c-fbrwl", "timestamp":"2025-01-30 13:01:32.396948984 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.431 [INFO][4474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.431 [INFO][4474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
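Every CNI ADD in this trace issues the same IPAM request shape, visible in the assignArgs dump above: one IPv4 address, no IPv6, a HandleID of the form k8s-pod-network.<sandbox ID>, and pod/namespace attributes. A sketch that rebuilds the request with a locally stubbed struct (field names are taken from the dump; this does not import Calico's real ipam package):

    package main

    import "fmt"

    // AutoAssignArgs mirrors only the fields visible in the assignArgs dump.
    type AutoAssignArgs struct {
        Num4, Num6 int
        HandleID   *string
        Attrs      map[string]string
        Hostname   string
    }

    func main() {
        // Handle IDs in the trace are "k8s-pod-network." plus the sandbox container ID.
        handle := "k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8"
        args := AutoAssignArgs{
            Num4:     1, // "Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'"
            Num6:     0,
            HandleID: &handle,
            Attrs: map[string]string{
                "namespace": "calico-apiserver",
                "node":      "localhost",
                "pod":       "calico-apiserver-788bf5f94c-fbrwl",
            },
            Hostname: "localhost",
        }
        fmt.Printf("%+v\n", args)
    }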
Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.432 [INFO][4474] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.434 [INFO][4474] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" host="localhost" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.440 [INFO][4474] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.447 [INFO][4474] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.449 [INFO][4474] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.452 [INFO][4474] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.452 [INFO][4474] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" host="localhost" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.453 [INFO][4474] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8 Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.464 [INFO][4474] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" host="localhost" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.514 [INFO][4474] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" host="localhost" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.515 [INFO][4474] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" host="localhost" Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.515 [INFO][4474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
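The allocation pattern repeats for every pod: confirm this host's affinity for block 192.168.88.128/26, load the block, then claim the lowest free address, which is why the trace hands out .130, .131, and .132 in sequence. A minimal sketch of that step, assuming a plain used-address set in place of Calico's real block bitmap, and assuming .128 and .129 were taken earlier in the boot (the first claim in this section is .130):

    package main

    import (
        "fmt"
        "net"
    )

    // nextFree walks ordinals 0..63 of the /26 and claims the lowest
    // address not yet handed out, the way the trace assigns sequentially.
    func nextFree(block *net.IPNet, used map[string]bool) (net.IP, bool) {
        ones, bits := block.Mask.Size()
        size := 1 << (bits - ones) // 64 addresses in a /26
        base := block.IP.To4()
        for ord := 0; ord < size; ord++ {
            ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(ord))
            if !used[ip.String()] {
                used[ip.String()] = true
                return ip, true
            }
        }
        return nil, false // block exhausted; the allocator would try another block
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.88.128/26")
        used := map[string]bool{"192.168.88.128": true, "192.168.88.129": true}
        for i := 0; i < 3; i++ {
            ip, _ := nextFree(block, used)
            fmt.Println(ip) // .130, .131, .132, matching the three pods above
        }
    }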
Jan 30 13:01:32.552635 containerd[1449]: 2025-01-30 13:01:32.515 [INFO][4474] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" HandleID="k8s-pod-network.0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.554186 containerd[1449]: 2025-01-30 13:01:32.519 [INFO][4457] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-fbrwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0", GenerateName:"calico-apiserver-788bf5f94c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9715b94b-ac1c-4ef7-884a-0cc0442ebce5", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"788bf5f94c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-788bf5f94c-fbrwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali806d440a8c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:32.554186 containerd[1449]: 2025-01-30 13:01:32.519 [INFO][4457] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-fbrwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.554186 containerd[1449]: 2025-01-30 13:01:32.520 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali806d440a8c7 ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-fbrwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.554186 containerd[1449]: 2025-01-30 13:01:32.531 [INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-fbrwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.554186 containerd[1449]: 2025-01-30 13:01:32.532 [INFO][4457] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-fbrwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0", GenerateName:"calico-apiserver-788bf5f94c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9715b94b-ac1c-4ef7-884a-0cc0442ebce5", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"788bf5f94c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8", Pod:"calico-apiserver-788bf5f94c-fbrwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali806d440a8c7", MAC:"ca:76:ec:2e:06:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:32.554186 containerd[1449]: 2025-01-30 13:01:32.549 [INFO][4457] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8" Namespace="calico-apiserver" Pod="calico-apiserver-788bf5f94c-fbrwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:32.553704 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:32.561935 systemd-logind[1427]: New session 13 of user core. Jan 30 13:01:32.566540 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:01:32.574602 systemd-networkd[1376]: calie22cc9610c5: Link UP Jan 30 13:01:32.574761 systemd-networkd[1376]: calie22cc9610c5: Gained carrier Jan 30 13:01:32.582679 containerd[1449]: time="2025-01-30T13:01:32.582363234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:01:32.582679 containerd[1449]: time="2025-01-30T13:01:32.582659604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:01:32.582679 containerd[1449]: time="2025-01-30T13:01:32.582671324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:32.582854 containerd[1449]: time="2025-01-30T13:01:32.582779247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.279 [INFO][4443] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.308 [INFO][4443] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0 coredns-7db6d8ff4d- kube-system 7b4c1231-0ce0-43a4-b55a-08522cf916ab 944 0 2025-01-30 13:00:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-dj4pf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie22cc9610c5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dj4pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dj4pf-" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.308 [INFO][4443] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dj4pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.397 [INFO][4470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" HandleID="k8s-pod-network.c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.430 [INFO][4470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" HandleID="k8s-pod-network.c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-dj4pf", "timestamp":"2025-01-30 13:01:32.397590564 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.431 [INFO][4470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.515 [INFO][4470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
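Note the serialization between the two interleaved requests: [4470] logs "About to acquire host-wide IPAM lock" at 13:01:32.431 but "Acquired" only at 13:01:32.515, the instant [4474] logs "Released". The observable behavior is a single host-wide mutex around block reads and writes; a toy model of the three log lines each request emits:

    package main

    import (
        "fmt"
        "sync"
    )

    var hostWideIPAM sync.Mutex

    func assign(wg *sync.WaitGroup, pod string) {
        defer wg.Done()
        fmt.Println(pod, "about to acquire host-wide IPAM lock")
        hostWideIPAM.Lock()
        fmt.Println(pod, "acquired host-wide IPAM lock")
        // ...load block, claim an address, write the block back...
        hostWideIPAM.Unlock()
        fmt.Println(pod, "released host-wide IPAM lock")
    }

    func main() {
        var wg sync.WaitGroup
        wg.Add(2)
        // The second goroutine blocks on Lock until the first unlocks,
        // reproducing the 84ms wait visible in the timestamps above.
        go assign(&wg, "calico-apiserver-788bf5f94c-fbrwl")
        go assign(&wg, "coredns-7db6d8ff4d-dj4pf")
        wg.Wait()
    }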
Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.515 [INFO][4470] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.521 [INFO][4470] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" host="localhost" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.527 [INFO][4470] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.537 [INFO][4470] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.541 [INFO][4470] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.549 [INFO][4470] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.549 [INFO][4470] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" host="localhost" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.552 [INFO][4470] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.559 [INFO][4470] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" host="localhost" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.566 [INFO][4470] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" host="localhost" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.566 [INFO][4470] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" host="localhost" Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.566 [INFO][4470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
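The WorkloadEndpointPort dumps above print port numbers in hex: 0x35 is 53 (the dns and dns-tcp ports) and 0x23c1 is 9153 (the coredns Prometheus metrics port). A quick confirmation of the conversions:

    package main

    import "fmt"

    func main() {
        // Hex port values copied verbatim from the endpoint dumps above.
        ports := []struct {
            name string
            port int
        }{
            {"dns", 0x35},       // UDP
            {"dns-tcp", 0x35},   // TCP
            {"metrics", 0x23c1}, // TCP, the coredns Prometheus endpoint
        }
        for _, p := range ports {
            fmt.Printf("%-8s %d\n", p.name, p.port) // 53, 53, 9153
        }
    }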
Jan 30 13:01:32.591668 containerd[1449]: 2025-01-30 13:01:32.566 [INFO][4470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" HandleID="k8s-pod-network.c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.592213 containerd[1449]: 2025-01-30 13:01:32.570 [INFO][4443] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dj4pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7b4c1231-0ce0-43a4-b55a-08522cf916ab", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 0, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-dj4pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie22cc9610c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:32.592213 containerd[1449]: 2025-01-30 13:01:32.572 [INFO][4443] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dj4pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.592213 containerd[1449]: 2025-01-30 13:01:32.572 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie22cc9610c5 ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dj4pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.592213 containerd[1449]: 2025-01-30 13:01:32.574 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dj4pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.592213 containerd[1449]: 2025-01-30 13:01:32.575 
[INFO][4443] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dj4pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7b4c1231-0ce0-43a4-b55a-08522cf916ab", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 0, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd", Pod:"coredns-7db6d8ff4d-dj4pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie22cc9610c5", MAC:"3e:22:a7:9c:6a:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:32.592213 containerd[1449]: 2025-01-30 13:01:32.586 [INFO][4443] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dj4pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:32.611574 systemd[1]: Started cri-containerd-0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8.scope - libcontainer container 0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8. Jan 30 13:01:32.626163 containerd[1449]: time="2025-01-30T13:01:32.625195167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:01:32.626163 containerd[1449]: time="2025-01-30T13:01:32.625903429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:01:32.626163 containerd[1449]: time="2025-01-30T13:01:32.625924670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:32.626163 containerd[1449]: time="2025-01-30T13:01:32.626016753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:32.629415 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:01:32.654598 systemd[1]: Started cri-containerd-c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd.scope - libcontainer container c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd. Jan 30 13:01:32.670784 containerd[1449]: time="2025-01-30T13:01:32.670168447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-788bf5f94c-fbrwl,Uid:9715b94b-ac1c-4ef7-884a-0cc0442ebce5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8\"" Jan 30 13:01:32.677667 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:01:32.702657 containerd[1449]: time="2025-01-30T13:01:32.702599896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dj4pf,Uid:7b4c1231-0ce0-43a4-b55a-08522cf916ab,Namespace:kube-system,Attempt:1,} returns sandbox id \"c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd\"" Jan 30 13:01:32.704085 kubelet[2555]: E0130 13:01:32.703508 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:32.707104 containerd[1449]: time="2025-01-30T13:01:32.707065075Z" level=info msg="CreateContainer within sandbox \"c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:01:32.745446 containerd[1449]: time="2025-01-30T13:01:32.744584442Z" level=info msg="CreateContainer within sandbox \"c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b9e4d904d55a1cb5b0801d74a28c33bc57a4fd392b7b1821940657767b8d5f9\"" Jan 30 13:01:32.746429 containerd[1449]: time="2025-01-30T13:01:32.746352657Z" level=info msg="StartContainer for \"9b9e4d904d55a1cb5b0801d74a28c33bc57a4fd392b7b1821940657767b8d5f9\"" Jan 30 13:01:32.782088 systemd[1]: Started cri-containerd-9b9e4d904d55a1cb5b0801d74a28c33bc57a4fd392b7b1821940657767b8d5f9.scope - libcontainer container 9b9e4d904d55a1cb5b0801d74a28c33bc57a4fd392b7b1821940657767b8d5f9. Jan 30 13:01:32.795077 sshd[4488]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:32.800888 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:40048.service: Deactivated successfully. Jan 30 13:01:32.803899 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:01:32.804930 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:01:32.806362 systemd-logind[1427]: Removed session 13. 
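The run-netns-cni\x2d... mount unit names above show systemd's path escaping at work: the leading "/" is dropped, the remaining "/" separators become "-", and bytes outside [a-zA-Z0-9:_.] (including every literal "-") become \xNN escapes. A sketch of that convention (not systemd's actual code), checked against one of the unit names from the log:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath turns a filesystem path into a systemd unit-name stem.
    func escapePath(p string) string {
        p = strings.TrimPrefix(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // '-' becomes \x2d
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/run/netns/cni-3c4afe3d-0a74-b95b-9c71-d4780db6f39d") + ".mount")
        // run-netns-cni\x2d3c4afe3d\x2d0a74\x2db95b\x2d9c71\x2dd4780db6f39d.mount
    }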
Jan 30 13:01:32.847134 containerd[1449]: time="2025-01-30T13:01:32.847078352Z" level=info msg="StartContainer for \"9b9e4d904d55a1cb5b0801d74a28c33bc57a4fd392b7b1821940657767b8d5f9\" returns successfully" Jan 30 13:01:32.938493 systemd-networkd[1376]: cali66d40de56a1: Gained IPv6LL Jan 30 13:01:33.167598 containerd[1449]: time="2025-01-30T13:01:33.167551234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:33.168133 containerd[1449]: time="2025-01-30T13:01:33.168099411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 13:01:33.169086 containerd[1449]: time="2025-01-30T13:01:33.169050760Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:33.171067 containerd[1449]: time="2025-01-30T13:01:33.171020580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:33.172055 containerd[1449]: time="2025-01-30T13:01:33.172014851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.570874243s" Jan 30 13:01:33.172055 containerd[1449]: time="2025-01-30T13:01:33.172055492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 13:01:33.173480 containerd[1449]: time="2025-01-30T13:01:33.173198687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:01:33.176415 containerd[1449]: time="2025-01-30T13:01:33.175561319Z" level=info msg="CreateContainer within sandbox \"6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:01:33.184418 containerd[1449]: time="2025-01-30T13:01:33.184346588Z" level=info msg="CreateContainer within sandbox \"6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b7d6d514eb366eae31bb9fc02fca9aa3b6bf76bb991b6b3d34bfd8f28ed4eeed\"" Jan 30 13:01:33.184887 containerd[1449]: time="2025-01-30T13:01:33.184856364Z" level=info msg="StartContainer for \"b7d6d514eb366eae31bb9fc02fca9aa3b6bf76bb991b6b3d34bfd8f28ed4eeed\"" Jan 30 13:01:33.215593 systemd[1]: Started cri-containerd-b7d6d514eb366eae31bb9fc02fca9aa3b6bf76bb991b6b3d34bfd8f28ed4eeed.scope - libcontainer container b7d6d514eb366eae31bb9fc02fca9aa3b6bf76bb991b6b3d34bfd8f28ed4eeed. 
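The PullImage above reports 39298409 bytes read for ghcr.io/flatcar/calico/apiserver:v3.29.1 in 2.570874243s, roughly 14.6 MiB/s; the follow-up pull of the same image a few lines below reads only 77 bytes and completes in 273.06991ms, consistent with the content already being local by then. The arithmetic behind those figures:

    package main

    import "fmt"

    func main() {
        // Figures copied from the two PullImage traces.
        const coldBytes, coldSecs = 39298409, 2.570874243
        fmt.Printf("cold pull: %.1f MiB/s\n", coldBytes/coldSecs/(1<<20)) // ~14.6 MiB/s
        fmt.Println("warm pull: 77 bytes in 273.06991ms (content already local)")
    }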
Jan 30 13:01:33.283527 kubelet[2555]: I0130 13:01:33.283480 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:01:33.284336 kubelet[2555]: E0130 13:01:33.284295 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:33.334332 containerd[1449]: time="2025-01-30T13:01:33.334283693Z" level=info msg="StartContainer for \"b7d6d514eb366eae31bb9fc02fca9aa3b6bf76bb991b6b3d34bfd8f28ed4eeed\" returns successfully" Jan 30 13:01:33.341881 kubelet[2555]: E0130 13:01:33.341695 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:33.356510 kubelet[2555]: E0130 13:01:33.353932 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:33.370031 kubelet[2555]: I0130 13:01:33.369969 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dj4pf" podStartSLOduration=36.369949503 podStartE2EDuration="36.369949503s" podCreationTimestamp="2025-01-30 13:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:01:33.368405976 +0000 UTC m=+49.416446322" watchObservedRunningTime="2025-01-30 13:01:33.369949503 +0000 UTC m=+49.417989769" Jan 30 13:01:33.394751 kubelet[2555]: I0130 13:01:33.394619 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-788bf5f94c-wlb8t" podStartSLOduration=23.822305209 podStartE2EDuration="26.394593217s" podCreationTimestamp="2025-01-30 13:01:07 +0000 UTC" firstStartedPulling="2025-01-30 13:01:30.600737914 +0000 UTC m=+46.648778180" lastFinishedPulling="2025-01-30 13:01:33.173025922 +0000 UTC m=+49.221066188" observedRunningTime="2025-01-30 13:01:33.392649398 +0000 UTC m=+49.440689664" watchObservedRunningTime="2025-01-30 13:01:33.394593217 +0000 UTC m=+49.442633483" Jan 30 13:01:33.437589 containerd[1449]: time="2025-01-30T13:01:33.437522570Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:33.441841 containerd[1449]: time="2025-01-30T13:01:33.441278004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:01:33.446403 containerd[1449]: time="2025-01-30T13:01:33.446337759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 273.06991ms" Jan 30 13:01:33.446403 containerd[1449]: time="2025-01-30T13:01:33.446406321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 13:01:33.448594 containerd[1449]: time="2025-01-30T13:01:33.448559547Z" level=info msg="CreateContainer within sandbox 
\"0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:01:33.464460 containerd[1449]: time="2025-01-30T13:01:33.464113063Z" level=info msg="CreateContainer within sandbox \"0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7dfa28517ad967adacf6485ce8d022570abab15b874199dcf67900331755b65a\"" Jan 30 13:01:33.467878 containerd[1449]: time="2025-01-30T13:01:33.467771015Z" level=info msg="StartContainer for \"7dfa28517ad967adacf6485ce8d022570abab15b874199dcf67900331755b65a\"" Jan 30 13:01:33.501560 systemd[1]: Started cri-containerd-7dfa28517ad967adacf6485ce8d022570abab15b874199dcf67900331755b65a.scope - libcontainer container 7dfa28517ad967adacf6485ce8d022570abab15b874199dcf67900331755b65a. Jan 30 13:01:33.555734 containerd[1449]: time="2025-01-30T13:01:33.555679303Z" level=info msg="StartContainer for \"7dfa28517ad967adacf6485ce8d022570abab15b874199dcf67900331755b65a\" returns successfully" Jan 30 13:01:33.642448 systemd-networkd[1376]: cali806d440a8c7: Gained IPv6LL Jan 30 13:01:33.769551 systemd-networkd[1376]: calie22cc9610c5: Gained IPv6LL Jan 30 13:01:34.100790 containerd[1449]: time="2025-01-30T13:01:34.100668237Z" level=info msg="StopPodSandbox for \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\"" Jan 30 13:01:34.102391 containerd[1449]: time="2025-01-30T13:01:34.102158042Z" level=info msg="StopPodSandbox for \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\"" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.164 [INFO][4869] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.164 [INFO][4869] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" iface="eth0" netns="/var/run/netns/cni-4ea671ff-0fbb-31e5-0fbc-4abbe8a435b8" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.164 [INFO][4869] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" iface="eth0" netns="/var/run/netns/cni-4ea671ff-0fbb-31e5-0fbc-4abbe8a435b8" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.164 [INFO][4869] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" iface="eth0" netns="/var/run/netns/cni-4ea671ff-0fbb-31e5-0fbc-4abbe8a435b8" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.164 [INFO][4869] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.164 [INFO][4869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.225 [INFO][4884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" HandleID="k8s-pod-network.19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.225 [INFO][4884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.225 [INFO][4884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.243 [WARNING][4884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" HandleID="k8s-pod-network.19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.243 [INFO][4884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" HandleID="k8s-pod-network.19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.245 [INFO][4884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:34.253250 containerd[1449]: 2025-01-30 13:01:34.247 [INFO][4869] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:34.254162 containerd[1449]: time="2025-01-30T13:01:34.253439911Z" level=info msg="TearDown network for sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\" successfully" Jan 30 13:01:34.254162 containerd[1449]: time="2025-01-30T13:01:34.253467912Z" level=info msg="StopPodSandbox for \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\" returns successfully" Jan 30 13:01:34.254888 containerd[1449]: time="2025-01-30T13:01:34.254530584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-95cpz,Uid:aa3d79e1-a896-409f-b82d-b2c0db403513,Namespace:calico-system,Attempt:1,}" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.158 [INFO][4860] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.158 [INFO][4860] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" iface="eth0" netns="/var/run/netns/cni-eeff78be-921a-e4d8-0697-241f0a0892e8" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.159 [INFO][4860] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" iface="eth0" netns="/var/run/netns/cni-eeff78be-921a-e4d8-0697-241f0a0892e8" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.159 [INFO][4860] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" iface="eth0" netns="/var/run/netns/cni-eeff78be-921a-e4d8-0697-241f0a0892e8" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.159 [INFO][4860] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.159 [INFO][4860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.225 [INFO][4879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" HandleID="k8s-pod-network.9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.225 [INFO][4879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.245 [INFO][4879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.272 [WARNING][4879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" HandleID="k8s-pod-network.9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.272 [INFO][4879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" HandleID="k8s-pod-network.9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.277 [INFO][4879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:34.280362 containerd[1449]: 2025-01-30 13:01:34.278 [INFO][4860] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:34.282334 containerd[1449]: time="2025-01-30T13:01:34.281361071Z" level=info msg="TearDown network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\" successfully" Jan 30 13:01:34.282334 containerd[1449]: time="2025-01-30T13:01:34.281438433Z" level=info msg="StopPodSandbox for \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\" returns successfully" Jan 30 13:01:34.283090 containerd[1449]: time="2025-01-30T13:01:34.282908237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cdf574769-hlwjk,Uid:130f9fb6-465d-46d9-b55d-70f0e7e76a1d,Namespace:calico-system,Attempt:1,}" Jan 30 13:01:34.337047 systemd[1]: run-netns-cni\x2d4ea671ff\x2d0fbb\x2d31e5\x2d0fbc\x2d4abbe8a435b8.mount: Deactivated successfully. Jan 30 13:01:34.337137 systemd[1]: run-netns-cni\x2deeff78be\x2d921a\x2de4d8\x2d0697\x2d241f0a0892e8.mount: Deactivated successfully. Jan 30 13:01:34.359187 kubelet[2555]: I0130 13:01:34.359055 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:01:34.360616 kubelet[2555]: E0130 13:01:34.360499 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:34.361220 kubelet[2555]: E0130 13:01:34.361192 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:34.384644 kubelet[2555]: I0130 13:01:34.384576 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-788bf5f94c-fbrwl" podStartSLOduration=26.61090218 podStartE2EDuration="27.384556414s" podCreationTimestamp="2025-01-30 13:01:07 +0000 UTC" firstStartedPulling="2025-01-30 13:01:32.673462669 +0000 UTC m=+48.721502935" lastFinishedPulling="2025-01-30 13:01:33.447116943 +0000 UTC m=+49.495157169" observedRunningTime="2025-01-30 13:01:34.382153862 +0000 UTC m=+50.430194128" watchObservedRunningTime="2025-01-30 13:01:34.384556414 +0000 UTC m=+50.432596640" Jan 30 13:01:34.495224 systemd-networkd[1376]: cali545efb5695f: Link UP Jan 30 13:01:34.495643 systemd-networkd[1376]: cali545efb5695f: Gained carrier Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.344 [INFO][4899] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.365 [INFO][4899] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--95cpz-eth0 csi-node-driver- calico-system aa3d79e1-a896-409f-b82d-b2c0db403513 1002 0 2025-01-30 13:01:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-95cpz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali545efb5695f [] []}} ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Namespace="calico-system" Pod="csi-node-driver-95cpz" WorkloadEndpoint="localhost-k8s-csi--node--driver--95cpz-" Jan 30 13:01:34.515793 containerd[1449]: 
2025-01-30 13:01:34.365 [INFO][4899] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Namespace="calico-system" Pod="csi-node-driver-95cpz" WorkloadEndpoint="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.411 [INFO][4929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" HandleID="k8s-pod-network.ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.438 [INFO][4929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" HandleID="k8s-pod-network.ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000312ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-95cpz", "timestamp":"2025-01-30 13:01:34.411185095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.438 [INFO][4929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.438 [INFO][4929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.438 [INFO][4929] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.440 [INFO][4929] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" host="localhost" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.450 [INFO][4929] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.463 [INFO][4929] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.469 [INFO][4929] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.472 [INFO][4929] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.472 [INFO][4929] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" host="localhost" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.474 [INFO][4929] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1 Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.480 [INFO][4929] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" host="localhost" Jan 30 13:01:34.515793 
containerd[1449]: 2025-01-30 13:01:34.487 [INFO][4929] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" host="localhost" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.487 [INFO][4929] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" host="localhost" Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.487 [INFO][4929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:34.515793 containerd[1449]: 2025-01-30 13:01:34.487 [INFO][4929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" HandleID="k8s-pod-network.ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.516589 containerd[1449]: 2025-01-30 13:01:34.491 [INFO][4899] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Namespace="calico-system" Pod="csi-node-driver-95cpz" WorkloadEndpoint="localhost-k8s-csi--node--driver--95cpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--95cpz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa3d79e1-a896-409f-b82d-b2c0db403513", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-95cpz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali545efb5695f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:34.516589 containerd[1449]: 2025-01-30 13:01:34.491 [INFO][4899] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Namespace="calico-system" Pod="csi-node-driver-95cpz" WorkloadEndpoint="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.516589 containerd[1449]: 2025-01-30 13:01:34.491 [INFO][4899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali545efb5695f ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Namespace="calico-system" Pod="csi-node-driver-95cpz" WorkloadEndpoint="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.516589 containerd[1449]: 2025-01-30 
13:01:34.498 [INFO][4899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Namespace="calico-system" Pod="csi-node-driver-95cpz" WorkloadEndpoint="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.516589 containerd[1449]: 2025-01-30 13:01:34.499 [INFO][4899] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Namespace="calico-system" Pod="csi-node-driver-95cpz" WorkloadEndpoint="localhost-k8s-csi--node--driver--95cpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--95cpz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa3d79e1-a896-409f-b82d-b2c0db403513", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1", Pod:"csi-node-driver-95cpz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali545efb5695f", MAC:"0a:07:26:13:da:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:34.516589 containerd[1449]: 2025-01-30 13:01:34.513 [INFO][4899] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1" Namespace="calico-system" Pod="csi-node-driver-95cpz" WorkloadEndpoint="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:34.542699 systemd-networkd[1376]: calic26892fb2ef: Link UP Jan 30 13:01:34.544599 containerd[1449]: time="2025-01-30T13:01:34.542963857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:01:34.544599 containerd[1449]: time="2025-01-30T13:01:34.543022779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:01:34.544599 containerd[1449]: time="2025-01-30T13:01:34.543039260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:34.544599 containerd[1449]: time="2025-01-30T13:01:34.543112782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:34.543225 systemd-networkd[1376]: calic26892fb2ef: Gained carrier Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.362 [INFO][4911] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.389 [INFO][4911] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0 calico-kube-controllers-6cdf574769- calico-system 130f9fb6-465d-46d9-b55d-70f0e7e76a1d 1001 0 2025-01-30 13:01:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cdf574769 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6cdf574769-hlwjk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic26892fb2ef [] []}} ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Namespace="calico-system" Pod="calico-kube-controllers-6cdf574769-hlwjk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.394 [INFO][4911] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Namespace="calico-system" Pod="calico-kube-controllers-6cdf574769-hlwjk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.456 [INFO][4940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" HandleID="k8s-pod-network.cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.472 [INFO][4940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" HandleID="k8s-pod-network.cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cce0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6cdf574769-hlwjk", "timestamp":"2025-01-30 13:01:34.456059964 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.472 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.487 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.487 [INFO][4940] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.490 [INFO][4940] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" host="localhost" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.500 [INFO][4940] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.508 [INFO][4940] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.517 [INFO][4940] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.520 [INFO][4940] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.520 [INFO][4940] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" host="localhost" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.523 [INFO][4940] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3 Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.527 [INFO][4940] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" host="localhost" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.534 [INFO][4940] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" host="localhost" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.534 [INFO][4940] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" host="localhost" Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.534 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:01:34.561353 containerd[1449]: 2025-01-30 13:01:34.534 [INFO][4940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" HandleID="k8s-pod-network.cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.561990 containerd[1449]: 2025-01-30 13:01:34.537 [INFO][4911] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Namespace="calico-system" Pod="calico-kube-controllers-6cdf574769-hlwjk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0", GenerateName:"calico-kube-controllers-6cdf574769-", Namespace:"calico-system", SelfLink:"", UID:"130f9fb6-465d-46d9-b55d-70f0e7e76a1d", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cdf574769", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6cdf574769-hlwjk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic26892fb2ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:34.561990 containerd[1449]: 2025-01-30 13:01:34.537 [INFO][4911] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Namespace="calico-system" Pod="calico-kube-controllers-6cdf574769-hlwjk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.561990 containerd[1449]: 2025-01-30 13:01:34.537 [INFO][4911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic26892fb2ef ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Namespace="calico-system" Pod="calico-kube-controllers-6cdf574769-hlwjk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.561990 containerd[1449]: 2025-01-30 13:01:34.541 [INFO][4911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Namespace="calico-system" Pod="calico-kube-controllers-6cdf574769-hlwjk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.561990 containerd[1449]: 2025-01-30 13:01:34.542 [INFO][4911] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Namespace="calico-system" Pod="calico-kube-controllers-6cdf574769-hlwjk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0", GenerateName:"calico-kube-controllers-6cdf574769-", Namespace:"calico-system", SelfLink:"", UID:"130f9fb6-465d-46d9-b55d-70f0e7e76a1d", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cdf574769", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3", Pod:"calico-kube-controllers-6cdf574769-hlwjk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic26892fb2ef", MAC:"7a:f9:57:9a:64:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:34.561990 containerd[1449]: 2025-01-30 13:01:34.555 [INFO][4911] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3" Namespace="calico-system" Pod="calico-kube-controllers-6cdf574769-hlwjk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:34.585603 systemd[1]: Started cri-containerd-ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1.scope - libcontainer container ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1. Jan 30 13:01:34.593653 containerd[1449]: time="2025-01-30T13:01:34.593527138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:01:34.593768 containerd[1449]: time="2025-01-30T13:01:34.593655902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:01:34.593768 containerd[1449]: time="2025-01-30T13:01:34.593703423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:34.593862 containerd[1449]: time="2025-01-30T13:01:34.593823507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:01:34.625598 systemd[1]: Started cri-containerd-cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3.scope - libcontainer container cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3. 
Jan 30 13:01:34.643118 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:01:34.670323 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:01:34.674574 containerd[1449]: time="2025-01-30T13:01:34.674517293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cdf574769-hlwjk,Uid:130f9fb6-465d-46d9-b55d-70f0e7e76a1d,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3\"" Jan 30 13:01:34.676620 containerd[1449]: time="2025-01-30T13:01:34.676391030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:01:34.690337 containerd[1449]: time="2025-01-30T13:01:34.690297088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-95cpz,Uid:aa3d79e1-a896-409f-b82d-b2c0db403513,Namespace:calico-system,Attempt:1,} returns sandbox id \"ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1\"" Jan 30 13:01:35.202418 kubelet[2555]: I0130 13:01:35.202140 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:01:35.202845 kubelet[2555]: E0130 13:01:35.202812 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:35.365762 kubelet[2555]: E0130 13:01:35.365713 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:35.366150 kubelet[2555]: E0130 13:01:35.366124 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:01:35.820395 kernel: bpftool[5095]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:01:35.945549 systemd-networkd[1376]: calic26892fb2ef: Gained IPv6LL Jan 30 13:01:36.039077 systemd-networkd[1376]: vxlan.calico: Link UP Jan 30 13:01:36.039084 systemd-networkd[1376]: vxlan.calico: Gained carrier Jan 30 13:01:36.138201 systemd-networkd[1376]: cali545efb5695f: Gained IPv6LL Jan 30 13:01:36.737489 containerd[1449]: time="2025-01-30T13:01:36.737434734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:36.738698 containerd[1449]: time="2025-01-30T13:01:36.738483765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 13:01:36.739518 containerd[1449]: time="2025-01-30T13:01:36.739477034Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:36.741797 containerd[1449]: time="2025-01-30T13:01:36.741756100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:36.742719 containerd[1449]: time="2025-01-30T13:01:36.742685127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id 
\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.066261056s" Jan 30 13:01:36.742758 containerd[1449]: time="2025-01-30T13:01:36.742725369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 13:01:36.744109 containerd[1449]: time="2025-01-30T13:01:36.743882842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:01:36.756920 containerd[1449]: time="2025-01-30T13:01:36.756853820Z" level=info msg="CreateContainer within sandbox \"cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:01:36.772865 containerd[1449]: time="2025-01-30T13:01:36.772804205Z" level=info msg="CreateContainer within sandbox \"cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5f91d7d3802e7950a0f9b4dcaf603e695103102116c43fa61fb26a9f77c2a72d\"" Jan 30 13:01:36.773344 containerd[1449]: time="2025-01-30T13:01:36.773315300Z" level=info msg="StartContainer for \"5f91d7d3802e7950a0f9b4dcaf603e695103102116c43fa61fb26a9f77c2a72d\"" Jan 30 13:01:36.801553 systemd[1]: Started cri-containerd-5f91d7d3802e7950a0f9b4dcaf603e695103102116c43fa61fb26a9f77c2a72d.scope - libcontainer container 5f91d7d3802e7950a0f9b4dcaf603e695103102116c43fa61fb26a9f77c2a72d. Jan 30 13:01:36.846559 containerd[1449]: time="2025-01-30T13:01:36.846503314Z" level=info msg="StartContainer for \"5f91d7d3802e7950a0f9b4dcaf603e695103102116c43fa61fb26a9f77c2a72d\" returns successfully" Jan 30 13:01:37.097508 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Jan 30 13:01:37.425749 kubelet[2555]: I0130 13:01:37.424620 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cdf574769-hlwjk" podStartSLOduration=27.356965611 podStartE2EDuration="29.424600548s" podCreationTimestamp="2025-01-30 13:01:08 +0000 UTC" firstStartedPulling="2025-01-30 13:01:34.67607954 +0000 UTC m=+50.724119766" lastFinishedPulling="2025-01-30 13:01:36.743714437 +0000 UTC m=+52.791754703" observedRunningTime="2025-01-30 13:01:37.387195114 +0000 UTC m=+53.435235380" watchObservedRunningTime="2025-01-30 13:01:37.424600548 +0000 UTC m=+53.472640814" Jan 30 13:01:37.815656 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:48292.service - OpenSSH per-connection server daemon (10.0.0.1:48292). 
Jan 30 13:01:37.863982 containerd[1449]: time="2025-01-30T13:01:37.861688907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:37.863982 containerd[1449]: time="2025-01-30T13:01:37.862541491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 13:01:37.865160 containerd[1449]: time="2025-01-30T13:01:37.865126486Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:37.867267 containerd[1449]: time="2025-01-30T13:01:37.867233586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:37.868532 containerd[1449]: time="2025-01-30T13:01:37.868500423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.124584059s" Jan 30 13:01:37.868532 containerd[1449]: time="2025-01-30T13:01:37.868533504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 13:01:37.872579 containerd[1449]: time="2025-01-30T13:01:37.872539779Z" level=info msg="CreateContainer within sandbox \"ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:01:37.894584 containerd[1449]: time="2025-01-30T13:01:37.894383966Z" level=info msg="CreateContainer within sandbox \"ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a94d1e893479e2c574d239dc315bd800f82eb0547362ede61c08e7ea363003d8\"" Jan 30 13:01:37.896217 containerd[1449]: time="2025-01-30T13:01:37.895032825Z" level=info msg="StartContainer for \"a94d1e893479e2c574d239dc315bd800f82eb0547362ede61c08e7ea363003d8\"" Jan 30 13:01:37.900215 sshd[5285]: Accepted publickey for core from 10.0.0.1 port 48292 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:37.902536 sshd[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:37.913998 systemd-logind[1427]: New session 14 of user core. Jan 30 13:01:37.920594 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:01:37.947054 systemd[1]: Started cri-containerd-a94d1e893479e2c574d239dc315bd800f82eb0547362ede61c08e7ea363003d8.scope - libcontainer container a94d1e893479e2c574d239dc315bd800f82eb0547362ede61c08e7ea363003d8. 
Jan 30 13:01:37.985185 containerd[1449]: time="2025-01-30T13:01:37.985125374Z" level=info msg="StartContainer for \"a94d1e893479e2c574d239dc315bd800f82eb0547362ede61c08e7ea363003d8\" returns successfully" Jan 30 13:01:37.987953 containerd[1449]: time="2025-01-30T13:01:37.987915454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:01:38.092286 sshd[5285]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:38.102132 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:48292.service: Deactivated successfully. Jan 30 13:01:38.104151 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:01:38.104849 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:01:38.110646 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:48304.service - OpenSSH per-connection server daemon (10.0.0.1:48304). Jan 30 13:01:38.112164 systemd-logind[1427]: Removed session 14. Jan 30 13:01:38.142685 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 48304 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:38.144090 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:38.148444 systemd-logind[1427]: New session 15 of user core. Jan 30 13:01:38.160578 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:01:38.463227 sshd[5334]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:38.474510 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:48304.service: Deactivated successfully. Jan 30 13:01:38.476402 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:01:38.478525 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:01:38.489847 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:48312.service - OpenSSH per-connection server daemon (10.0.0.1:48312). Jan 30 13:01:38.491412 systemd-logind[1427]: Removed session 15. Jan 30 13:01:38.551333 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 48312 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:38.553336 sshd[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:38.563072 systemd-logind[1427]: New session 16 of user core. Jan 30 13:01:38.568645 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 30 13:01:39.250555 containerd[1449]: time="2025-01-30T13:01:39.250500230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:39.251864 containerd[1449]: time="2025-01-30T13:01:39.251591740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 13:01:39.258693 containerd[1449]: time="2025-01-30T13:01:39.258631057Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:39.269740 containerd[1449]: time="2025-01-30T13:01:39.269683167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:01:39.272197 containerd[1449]: time="2025-01-30T13:01:39.270609552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.282653377s" Jan 30 13:01:39.272197 containerd[1449]: time="2025-01-30T13:01:39.270653994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 13:01:39.275687 containerd[1449]: time="2025-01-30T13:01:39.275612772Z" level=info msg="CreateContainer within sandbox \"ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:01:39.322639 containerd[1449]: time="2025-01-30T13:01:39.322561766Z" level=info msg="CreateContainer within sandbox \"ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6f995d0e4ec9542b12c1e0376d3470a028436e881635a5e4ce0832b6a3017a86\"" Jan 30 13:01:39.324132 containerd[1449]: time="2025-01-30T13:01:39.324078848Z" level=info msg="StartContainer for \"6f995d0e4ec9542b12c1e0376d3470a028436e881635a5e4ce0832b6a3017a86\"" Jan 30 13:01:39.349468 systemd[1]: run-containerd-runc-k8s.io-6f995d0e4ec9542b12c1e0376d3470a028436e881635a5e4ce0832b6a3017a86-runc.cS3UV5.mount: Deactivated successfully. Jan 30 13:01:39.361599 systemd[1]: Started cri-containerd-6f995d0e4ec9542b12c1e0376d3470a028436e881635a5e4ce0832b6a3017a86.scope - libcontainer container 6f995d0e4ec9542b12c1e0376d3470a028436e881635a5e4ce0832b6a3017a86. Jan 30 13:01:39.399807 containerd[1449]: time="2025-01-30T13:01:39.399636962Z" level=info msg="StartContainer for \"6f995d0e4ec9542b12c1e0376d3470a028436e881635a5e4ce0832b6a3017a86\" returns successfully" Jan 30 13:01:40.165175 sshd[5349]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:40.174654 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:48312.service: Deactivated successfully. Jan 30 13:01:40.176812 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:01:40.182223 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit. 
Jan 30 13:01:40.190742 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:48314.service - OpenSSH per-connection server daemon (10.0.0.1:48314). Jan 30 13:01:40.194807 systemd-logind[1427]: Removed session 16. Jan 30 13:01:40.235184 kubelet[2555]: I0130 13:01:40.235032 2555 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:01:40.236236 sshd[5415]: Accepted publickey for core from 10.0.0.1 port 48314 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:40.237275 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:40.242058 systemd-logind[1427]: New session 17 of user core. Jan 30 13:01:40.242888 kubelet[2555]: I0130 13:01:40.242849 2555 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:01:40.252638 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:01:40.590081 sshd[5415]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:40.599413 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:48314.service: Deactivated successfully. Jan 30 13:01:40.601151 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:01:40.602869 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:01:40.609023 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:48324.service - OpenSSH per-connection server daemon (10.0.0.1:48324). Jan 30 13:01:40.612656 systemd-logind[1427]: Removed session 17. Jan 30 13:01:40.644251 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 48324 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:40.645462 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:40.650121 systemd-logind[1427]: New session 18 of user core. Jan 30 13:01:40.656567 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:01:40.836803 sshd[5429]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:40.841031 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:48324.service: Deactivated successfully. Jan 30 13:01:40.844039 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:01:40.846028 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:01:40.846903 systemd-logind[1427]: Removed session 18. Jan 30 13:01:44.088270 containerd[1449]: time="2025-01-30T13:01:44.088207350Z" level=info msg="StopPodSandbox for \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\"" Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.142 [WARNING][5466] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0", GenerateName:"calico-kube-controllers-6cdf574769-", Namespace:"calico-system", SelfLink:"", UID:"130f9fb6-465d-46d9-b55d-70f0e7e76a1d", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cdf574769", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3", Pod:"calico-kube-controllers-6cdf574769-hlwjk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic26892fb2ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.143 [INFO][5466] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.143 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" iface="eth0" netns="" Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.143 [INFO][5466] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.143 [INFO][5466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.171 [INFO][5475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" HandleID="k8s-pod-network.9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.171 [INFO][5475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.172 [INFO][5475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.183 [WARNING][5475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" HandleID="k8s-pod-network.9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.183 [INFO][5475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" HandleID="k8s-pod-network.9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.185 [INFO][5475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:44.188613 containerd[1449]: 2025-01-30 13:01:44.187 [INFO][5466] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:44.189060 containerd[1449]: time="2025-01-30T13:01:44.188651367Z" level=info msg="TearDown network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\" successfully" Jan 30 13:01:44.189060 containerd[1449]: time="2025-01-30T13:01:44.188675888Z" level=info msg="StopPodSandbox for \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\" returns successfully" Jan 30 13:01:44.190970 containerd[1449]: time="2025-01-30T13:01:44.189130020Z" level=info msg="RemovePodSandbox for \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\"" Jan 30 13:01:44.190970 containerd[1449]: time="2025-01-30T13:01:44.190877346Z" level=info msg="Forcibly stopping sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\"" Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.235 [WARNING][5497] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0", GenerateName:"calico-kube-controllers-6cdf574769-", Namespace:"calico-system", SelfLink:"", UID:"130f9fb6-465d-46d9-b55d-70f0e7e76a1d", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cdf574769", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd1974ae9e44f8ea6eb2f637cbc6cb1180c1e91e7bd42033374f3e5a357a72d3", Pod:"calico-kube-controllers-6cdf574769-hlwjk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic26892fb2ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.235 [INFO][5497] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.235 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" iface="eth0" netns="" Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.235 [INFO][5497] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.235 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.261 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" HandleID="k8s-pod-network.9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.262 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.262 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.273 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" HandleID="k8s-pod-network.9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.273 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" HandleID="k8s-pod-network.9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Workload="localhost-k8s-calico--kube--controllers--6cdf574769--hlwjk-eth0" Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.276 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:44.280123 containerd[1449]: 2025-01-30 13:01:44.278 [INFO][5497] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68" Jan 30 13:01:44.280578 containerd[1449]: time="2025-01-30T13:01:44.280166428Z" level=info msg="TearDown network for sandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\" successfully" Jan 30 13:01:44.356877 containerd[1449]: time="2025-01-30T13:01:44.356712173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:01:44.356877 containerd[1449]: time="2025-01-30T13:01:44.356818496Z" level=info msg="RemovePodSandbox \"9379cd9851bc9c02724dd5e0ca2b3f3d0bf33bad38703909d64a67826ae63e68\" returns successfully" Jan 30 13:01:44.358842 containerd[1449]: time="2025-01-30T13:01:44.358537262Z" level=info msg="StopPodSandbox for \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\"" Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.400 [WARNING][5529] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7b4c1231-0ce0-43a4-b55a-08522cf916ab", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 0, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd", Pod:"coredns-7db6d8ff4d-dj4pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie22cc9610c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.400 [INFO][5529] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.400 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" iface="eth0" netns="" Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.400 [INFO][5529] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.400 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.425 [INFO][5537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" HandleID="k8s-pod-network.200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.425 [INFO][5537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.425 [INFO][5537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.436 [WARNING][5537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" HandleID="k8s-pod-network.200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.436 [INFO][5537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" HandleID="k8s-pod-network.200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.438 [INFO][5537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:44.442221 containerd[1449]: 2025-01-30 13:01:44.439 [INFO][5529] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:44.443981 containerd[1449]: time="2025-01-30T13:01:44.442264796Z" level=info msg="TearDown network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\" successfully" Jan 30 13:01:44.443981 containerd[1449]: time="2025-01-30T13:01:44.442292557Z" level=info msg="StopPodSandbox for \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\" returns successfully" Jan 30 13:01:44.443981 containerd[1449]: time="2025-01-30T13:01:44.443615872Z" level=info msg="RemovePodSandbox for \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\"" Jan 30 13:01:44.443981 containerd[1449]: time="2025-01-30T13:01:44.443650353Z" level=info msg="Forcibly stopping sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\"" Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.487 [WARNING][5559] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7b4c1231-0ce0-43a4-b55a-08522cf916ab", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 0, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c87be7ff546b1284a1f2c78cd8c711a25af187bf08a44f1e128825806d1646cd", Pod:"coredns-7db6d8ff4d-dj4pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie22cc9610c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.488 [INFO][5559] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.488 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" iface="eth0" netns="" Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.488 [INFO][5559] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.488 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.514 [INFO][5567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" HandleID="k8s-pod-network.200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.514 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.514 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.525 [WARNING][5567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" HandleID="k8s-pod-network.200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.525 [INFO][5567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" HandleID="k8s-pod-network.200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Workload="localhost-k8s-coredns--7db6d8ff4d--dj4pf-eth0" Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.526 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:44.531142 containerd[1449]: 2025-01-30 13:01:44.529 [INFO][5559] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1" Jan 30 13:01:44.531579 containerd[1449]: time="2025-01-30T13:01:44.531176149Z" level=info msg="TearDown network for sandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\" successfully" Jan 30 13:01:44.534399 containerd[1449]: time="2025-01-30T13:01:44.534342912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:01:44.534491 containerd[1449]: time="2025-01-30T13:01:44.534425034Z" level=info msg="RemovePodSandbox \"200ac36bdd18db782aed819c75e248d183df8a31c5e35a6536dd15f0ba9bf0a1\" returns successfully" Jan 30 13:01:44.535101 containerd[1449]: time="2025-01-30T13:01:44.535040571Z" level=info msg="StopPodSandbox for \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\"" Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.593 [WARNING][5589] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"09ee933e-c443-4e73-95ea-87ea4c5a82d4", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 0, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a", Pod:"coredns-7db6d8ff4d-bs7p9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d40de56a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.593 [INFO][5589] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.593 [INFO][5589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" iface="eth0" netns="" Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.593 [INFO][5589] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.593 [INFO][5589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.616 [INFO][5597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" HandleID="k8s-pod-network.21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.617 [INFO][5597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.617 [INFO][5597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.639 [WARNING][5597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" HandleID="k8s-pod-network.21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.639 [INFO][5597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" HandleID="k8s-pod-network.21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.644 [INFO][5597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:44.647907 containerd[1449]: 2025-01-30 13:01:44.646 [INFO][5589] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:44.648513 containerd[1449]: time="2025-01-30T13:01:44.647951758Z" level=info msg="TearDown network for sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\" successfully" Jan 30 13:01:44.648513 containerd[1449]: time="2025-01-30T13:01:44.647978318Z" level=info msg="StopPodSandbox for \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\" returns successfully" Jan 30 13:01:44.648556 containerd[1449]: time="2025-01-30T13:01:44.648514853Z" level=info msg="RemovePodSandbox for \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\"" Jan 30 13:01:44.648556 containerd[1449]: time="2025-01-30T13:01:44.648543373Z" level=info msg="Forcibly stopping sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\"" Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.695 [WARNING][5620] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"09ee933e-c443-4e73-95ea-87ea4c5a82d4", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 0, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"934e2c92125ff4b19c4578e2610ce34f872cb0431fed3da90a7ae53903ab4a8a", Pod:"coredns-7db6d8ff4d-bs7p9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d40de56a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.695 [INFO][5620] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.695 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" iface="eth0" netns="" Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.695 [INFO][5620] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.695 [INFO][5620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.723 [INFO][5627] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" HandleID="k8s-pod-network.21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.723 [INFO][5627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.723 [INFO][5627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.735 [WARNING][5627] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" HandleID="k8s-pod-network.21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.735 [INFO][5627] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" HandleID="k8s-pod-network.21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Workload="localhost-k8s-coredns--7db6d8ff4d--bs7p9-eth0" Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.745 [INFO][5627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:44.750118 containerd[1449]: 2025-01-30 13:01:44.747 [INFO][5620] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73" Jan 30 13:01:44.751831 containerd[1449]: time="2025-01-30T13:01:44.750154701Z" level=info msg="TearDown network for sandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\" successfully" Jan 30 13:01:44.842698 containerd[1449]: time="2025-01-30T13:01:44.842629828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:01:44.842989 containerd[1449]: time="2025-01-30T13:01:44.842712590Z" level=info msg="RemovePodSandbox \"21d28bba4eb54f477490e0eae831a65c207d5c08fdcd57212d2a2ef151a6ca73\" returns successfully" Jan 30 13:01:44.843551 containerd[1449]: time="2025-01-30T13:01:44.843229284Z" level=info msg="StopPodSandbox for \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\"" Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.889 [WARNING][5648] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--95cpz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa3d79e1-a896-409f-b82d-b2c0db403513", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1", Pod:"csi-node-driver-95cpz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali545efb5695f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.889 [INFO][5648] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.889 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" iface="eth0" netns="" Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.889 [INFO][5648] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.889 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.918 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" HandleID="k8s-pod-network.19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.918 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.918 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.928 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" HandleID="k8s-pod-network.19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.928 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" HandleID="k8s-pod-network.19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.930 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:44.936938 containerd[1449]: 2025-01-30 13:01:44.934 [INFO][5648] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:44.937988 containerd[1449]: time="2025-01-30T13:01:44.937465657Z" level=info msg="TearDown network for sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\" successfully" Jan 30 13:01:44.937988 containerd[1449]: time="2025-01-30T13:01:44.937500897Z" level=info msg="StopPodSandbox for \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\" returns successfully" Jan 30 13:01:44.938412 containerd[1449]: time="2025-01-30T13:01:44.938275678Z" level=info msg="RemovePodSandbox for \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\"" Jan 30 13:01:44.938412 containerd[1449]: time="2025-01-30T13:01:44.938327039Z" level=info msg="Forcibly stopping sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\"" Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:44.986 [WARNING][5678] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--95cpz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa3d79e1-a896-409f-b82d-b2c0db403513", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef0eed93feb30e202eddbf68014beca5cd2874d6c09bf86d11034763c91a3bc1", Pod:"csi-node-driver-95cpz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali545efb5695f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:44.986 [INFO][5678] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:44.986 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" iface="eth0" netns="" Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:44.986 [INFO][5678] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:44.986 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:45.007 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" HandleID="k8s-pod-network.19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:45.008 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:45.008 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:45.022 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" HandleID="k8s-pod-network.19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:45.022 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" HandleID="k8s-pod-network.19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Workload="localhost-k8s-csi--node--driver--95cpz-eth0" Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:45.024 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:45.032008 containerd[1449]: 2025-01-30 13:01:45.026 [INFO][5678] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2" Jan 30 13:01:45.032008 containerd[1449]: time="2025-01-30T13:01:45.030981323Z" level=info msg="TearDown network for sandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\" successfully" Jan 30 13:01:45.066404 containerd[1449]: time="2025-01-30T13:01:45.066340450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:01:45.066655 containerd[1449]: time="2025-01-30T13:01:45.066431452Z" level=info msg="RemovePodSandbox \"19e4480114b6a42d04ce84831a85ae96dac0f1fcb1e884fb28bfaf9edea6bdd2\" returns successfully" Jan 30 13:01:45.067203 containerd[1449]: time="2025-01-30T13:01:45.067131070Z" level=info msg="StopPodSandbox for \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\"" Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.134 [WARNING][5708] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0", GenerateName:"calico-apiserver-788bf5f94c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9715b94b-ac1c-4ef7-884a-0cc0442ebce5", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"788bf5f94c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8", Pod:"calico-apiserver-788bf5f94c-fbrwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali806d440a8c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.134 [INFO][5708] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.134 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" iface="eth0" netns="" Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.134 [INFO][5708] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.134 [INFO][5708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.161 [INFO][5717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" HandleID="k8s-pod-network.ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.161 [INFO][5717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.161 [INFO][5717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.170 [WARNING][5717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" HandleID="k8s-pod-network.ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.171 [INFO][5717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" HandleID="k8s-pod-network.ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.173 [INFO][5717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:45.176836 containerd[1449]: 2025-01-30 13:01:45.174 [INFO][5708] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:45.177621 containerd[1449]: time="2025-01-30T13:01:45.176907067Z" level=info msg="TearDown network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\" successfully" Jan 30 13:01:45.177621 containerd[1449]: time="2025-01-30T13:01:45.176933708Z" level=info msg="StopPodSandbox for \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\" returns successfully" Jan 30 13:01:45.177621 containerd[1449]: time="2025-01-30T13:01:45.177468082Z" level=info msg="RemovePodSandbox for \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\"" Jan 30 13:01:45.177621 containerd[1449]: time="2025-01-30T13:01:45.177502603Z" level=info msg="Forcibly stopping sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\"" Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.232 [WARNING][5741] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0", GenerateName:"calico-apiserver-788bf5f94c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9715b94b-ac1c-4ef7-884a-0cc0442ebce5", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"788bf5f94c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0aa1b3abeda315ba11a4f37eba84a2244b0c6872afcae5647b9791ea51aeefb8", Pod:"calico-apiserver-788bf5f94c-fbrwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali806d440a8c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.232 [INFO][5741] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.232 [INFO][5741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" iface="eth0" netns="" Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.232 [INFO][5741] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.232 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.260 [INFO][5749] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" HandleID="k8s-pod-network.ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.260 [INFO][5749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.260 [INFO][5749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.271 [WARNING][5749] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" HandleID="k8s-pod-network.ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.271 [INFO][5749] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" HandleID="k8s-pod-network.ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Workload="localhost-k8s-calico--apiserver--788bf5f94c--fbrwl-eth0" Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.273 [INFO][5749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:45.278477 containerd[1449]: 2025-01-30 13:01:45.275 [INFO][5741] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce" Jan 30 13:01:45.278477 containerd[1449]: time="2025-01-30T13:01:45.277535144Z" level=info msg="TearDown network for sandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\" successfully" Jan 30 13:01:45.284731 containerd[1449]: time="2025-01-30T13:01:45.284606289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:01:45.284954 containerd[1449]: time="2025-01-30T13:01:45.284744493Z" level=info msg="RemovePodSandbox \"ae690b24e80b5313fac12b9979f4518db3deefb4af4a1a1e9bac0ddc673da5ce\" returns successfully" Jan 30 13:01:45.285783 containerd[1449]: time="2025-01-30T13:01:45.285468352Z" level=info msg="StopPodSandbox for \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\"" Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.336 [WARNING][5771] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0", GenerateName:"calico-apiserver-788bf5f94c-", Namespace:"calico-apiserver", SelfLink:"", UID:"29437a55-4f91-4be7-b561-40aba478f597", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"788bf5f94c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1", Pod:"calico-apiserver-788bf5f94c-wlb8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32d163507f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.336 [INFO][5771] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.337 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" iface="eth0" netns="" Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.337 [INFO][5771] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.337 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.359 [INFO][5778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" HandleID="k8s-pod-network.abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.360 [INFO][5778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.360 [INFO][5778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.369 [WARNING][5778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" HandleID="k8s-pod-network.abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.369 [INFO][5778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" HandleID="k8s-pod-network.abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.371 [INFO][5778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:45.374807 containerd[1449]: 2025-01-30 13:01:45.372 [INFO][5771] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:45.374807 containerd[1449]: time="2025-01-30T13:01:45.374685570Z" level=info msg="TearDown network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\" successfully" Jan 30 13:01:45.374807 containerd[1449]: time="2025-01-30T13:01:45.374712810Z" level=info msg="StopPodSandbox for \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\" returns successfully" Jan 30 13:01:45.375841 containerd[1449]: time="2025-01-30T13:01:45.375712237Z" level=info msg="RemovePodSandbox for \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\"" Jan 30 13:01:45.375841 containerd[1449]: time="2025-01-30T13:01:45.375757278Z" level=info msg="Forcibly stopping sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\"" Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.426 [WARNING][5800] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0", GenerateName:"calico-apiserver-788bf5f94c-", Namespace:"calico-apiserver", SelfLink:"", UID:"29437a55-4f91-4be7-b561-40aba478f597", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"788bf5f94c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fbc5ae42f825d6ae51440407bf112bab3c0af72d4147187235f3a7b3ba907f1", Pod:"calico-apiserver-788bf5f94c-wlb8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32d163507f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.426 [INFO][5800] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.426 [INFO][5800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" iface="eth0" netns="" Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.426 [INFO][5800] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.426 [INFO][5800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.445 [INFO][5808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" HandleID="k8s-pod-network.abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.446 [INFO][5808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.446 [INFO][5808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.455 [WARNING][5808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" HandleID="k8s-pod-network.abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.455 [INFO][5808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" HandleID="k8s-pod-network.abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Workload="localhost-k8s-calico--apiserver--788bf5f94c--wlb8t-eth0" Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.457 [INFO][5808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:01:45.460962 containerd[1449]: 2025-01-30 13:01:45.459 [INFO][5800] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8" Jan 30 13:01:45.461362 containerd[1449]: time="2025-01-30T13:01:45.460983791Z" level=info msg="TearDown network for sandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\" successfully" Jan 30 13:01:45.463813 containerd[1449]: time="2025-01-30T13:01:45.463765184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:01:45.463907 containerd[1449]: time="2025-01-30T13:01:45.463830506Z" level=info msg="RemovePodSandbox \"abd1423df66d8722187b43e0502d6b5d0554467d57efe5adddfbcfa5d289a5e8\" returns successfully" Jan 30 13:01:45.848673 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:37692.service - OpenSSH per-connection server daemon (10.0.0.1:37692). Jan 30 13:01:45.909486 sshd[5817]: Accepted publickey for core from 10.0.0.1 port 37692 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:01:45.910982 sshd[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:01:45.914923 systemd-logind[1427]: New session 19 of user core. Jan 30 13:01:45.920601 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:01:46.106756 sshd[5817]: pam_unix(sshd:session): session closed for user core Jan 30 13:01:46.110585 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:37692.service: Deactivated successfully. Jan 30 13:01:46.112321 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:01:46.113840 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:01:46.115137 systemd-logind[1427]: Removed session 19. 
Jan 30 13:01:50.436834 kubelet[2555]: I0130 13:01:50.436602 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:01:50.461043 kubelet[2555]: I0130 13:01:50.460757 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-95cpz" podStartSLOduration=37.879358283 podStartE2EDuration="42.460736535s" podCreationTimestamp="2025-01-30 13:01:08 +0000 UTC" firstStartedPulling="2025-01-30 13:01:34.691560926 +0000 UTC m=+50.739601192" lastFinishedPulling="2025-01-30 13:01:39.272939218 +0000 UTC m=+55.320979444" observedRunningTime="2025-01-30 13:01:40.414890979 +0000 UTC m=+56.462931205" watchObservedRunningTime="2025-01-30 13:01:50.460736535 +0000 UTC m=+66.508776921"
Jan 30 13:01:51.118916 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:37706.service - OpenSSH per-connection server daemon (10.0.0.1:37706).
Jan 30 13:01:51.162440 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 37706 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 13:01:51.164227 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:01:51.173632 systemd-logind[1427]: New session 20 of user core.
Jan 30 13:01:51.185613 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:01:51.373860 sshd[5863]: pam_unix(sshd:session): session closed for user core
Jan 30 13:01:51.380780 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:37706.service: Deactivated successfully.
Jan 30 13:01:51.385122 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:01:51.386974 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:01:51.390183 systemd-logind[1427]: Removed session 20.
Jan 30 13:01:56.399722 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:60370.service - OpenSSH per-connection server daemon (10.0.0.1:60370).
Jan 30 13:01:56.442373 sshd[5877]: Accepted publickey for core from 10.0.0.1 port 60370 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 13:01:56.445082 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:01:56.456105 systemd-logind[1427]: New session 21 of user core.
Jan 30 13:01:56.467603 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:01:56.619123 sshd[5877]: pam_unix(sshd:session): session closed for user core
Jan 30 13:01:56.624317 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:60370.service: Deactivated successfully.
Jan 30 13:01:56.627409 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:01:56.628665 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:01:56.629666 systemd-logind[1427]: Removed session 21.
Jan 30 13:02:01.631549 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:60378.service - OpenSSH per-connection server daemon (10.0.0.1:60378).
Jan 30 13:02:01.679775 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 60378 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 13:02:01.681393 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:02:01.685376 systemd-logind[1427]: New session 22 of user core.
Jan 30 13:02:01.697601 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:02:01.846080 sshd[5901]: pam_unix(sshd:session): session closed for user core
Jan 30 13:02:01.850382 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:60378.service: Deactivated successfully.
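Note: the pod_startup_latency_tracker entry above is internally consistent and worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window, which is most cleanly recovered from the logged monotonic offsets (m=+50.739601192 and m=+55.320979444). A small Go check of that arithmetic, using only numbers copied from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Wall-clock timestamps copied from the kubelet entry.
    	created, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:01:08Z")
    	watchRunning, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:01:50.460736535Z")

    	// podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp.
    	e2e := watchRunning.Sub(created)
    	fmt.Println(e2e) // 42.460736535s, exactly as logged

    	// Image-pull window from the monotonic offsets, in integer
    	// nanoseconds to avoid float rounding:
    	// m=+55.320979444 minus m=+50.739601192.
    	pull := (55320979444 - 50739601192) * time.Nanosecond

    	// podStartSLOduration = E2E minus the pull window.
    	fmt.Println(e2e - pull) // 37.879358283s = podStartSLOduration
    }

The ~40 ns discrepancy you get when differencing the wall-clock pull timestamps instead of the monotonic offsets suggests kubelet uses the monotonic clock for the pull window, which is why the monotonic arithmetic reproduces the logged value exactly.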
Jan 30 13:02:01.852256 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:02:01.852928 systemd-logind[1427]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:02:01.853989 systemd-logind[1427]: Removed session 22.
Jan 30 13:02:03.363743 kubelet[2555]: E0130 13:02:03.362158 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
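Note: the final kubelet error is a known resolver constraint surfacing through Kubernetes: the glibc resolver honors at most three nameserver entries, so when the node's resolver list is longer, kubelet keeps the first three and logs "Nameserver limits exceeded". A minimal sketch of that clamping rule is below; the maxNameservers constant and clampNameservers helper are assumptions for illustration, not kubelet's actual code, and the fourth resolver is a placeholder since the log does not show which server was omitted.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // maxNameservers mirrors the glibc resolver's MAXNS limit of 3;
    // the constant name here is an assumption for illustration.
    const maxNameservers = 3

    // clampNameservers keeps the first maxNameservers entries and reports
    // whether any were dropped, mimicking the "some nameservers have been
    // omitted" warning in the log.
    func clampNameservers(servers []string) ([]string, bool) {
    	if len(servers) <= maxNameservers {
    		return servers, false
    	}
    	return servers[:maxNameservers], true
    }

    func main() {
    	// Four resolvers force truncation; 192.0.2.53 (TEST-NET) is a
    	// hypothetical fourth entry standing in for the omitted server.
    	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
    	if kept, truncated := clampNameservers(servers); truncated {
    		fmt.Println("Nameserver limits exceeded, the applied nameserver line is:",
    			strings.Join(kept, " "))
    	}
    }

Run as-is, the sketch prints the same applied nameserver line as the log ("1.1.1.1 1.0.0.1 8.8.8.8"), which matches the interpretation that a fourth resolver on the node was silently dropped.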