Jan 13 21:20:46.900429 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 13 21:20:46.900449 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025 Jan 13 21:20:46.900458 kernel: KASLR enabled Jan 13 21:20:46.900464 kernel: efi: EFI v2.7 by EDK II Jan 13 21:20:46.900470 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 13 21:20:46.900476 kernel: random: crng init done Jan 13 21:20:46.900483 kernel: ACPI: Early table checksum verification disabled Jan 13 21:20:46.900489 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 13 21:20:46.900495 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 13 21:20:46.900503 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:20:46.900509 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:20:46.900515 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:20:46.900522 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:20:46.900528 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:20:46.900536 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:20:46.900543 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:20:46.900550 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:20:46.900557 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:20:46.900563 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 13 21:20:46.900569 kernel: NUMA: Failed to initialise from firmware Jan 13 21:20:46.900576 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 21:20:46.900582 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Jan 13 21:20:46.900589 kernel: Zone ranges: Jan 13 21:20:46.900595 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 21:20:46.900602 kernel: DMA32 empty Jan 13 21:20:46.900609 kernel: Normal empty Jan 13 21:20:46.900615 kernel: Movable zone start for each node Jan 13 21:20:46.900622 kernel: Early memory node ranges Jan 13 21:20:46.900628 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 13 21:20:46.900635 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 13 21:20:46.900641 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 13 21:20:46.900647 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 13 21:20:46.900654 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 13 21:20:46.900660 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 13 21:20:46.900666 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 13 21:20:46.900672 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 21:20:46.900679 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 13 21:20:46.900686 kernel: psci: probing for conduit method from ACPI. Jan 13 21:20:46.900692 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 13 21:20:46.900699 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 21:20:46.900708 kernel: psci: Trusted OS migration not required Jan 13 21:20:46.900714 kernel: psci: SMC Calling Convention v1.1 Jan 13 21:20:46.900721 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 13 21:20:46.900729 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 21:20:46.900736 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 21:20:46.900743 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 13 21:20:46.900750 kernel: Detected PIPT I-cache on CPU0 Jan 13 21:20:46.900756 kernel: CPU features: detected: GIC system register CPU interface Jan 13 21:20:46.900763 kernel: CPU features: detected: Hardware dirty bit management Jan 13 21:20:46.900770 kernel: CPU features: detected: Spectre-v4 Jan 13 21:20:46.900776 kernel: CPU features: detected: Spectre-BHB Jan 13 21:20:46.900783 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 13 21:20:46.900790 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 13 21:20:46.900798 kernel: CPU features: detected: ARM erratum 1418040 Jan 13 21:20:46.900804 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 13 21:20:46.900811 kernel: alternatives: applying boot alternatives Jan 13 21:20:46.900819 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 13 21:20:46.900826 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:20:46.900833 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:20:46.900839 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:20:46.900846 kernel: Fallback order for Node 0: 0 Jan 13 21:20:46.900853 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 13 21:20:46.900859 kernel: Policy zone: DMA Jan 13 21:20:46.900866 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:20:46.900874 kernel: software IO TLB: area num 4. Jan 13 21:20:46.900881 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 13 21:20:46.900888 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved) Jan 13 21:20:46.900918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 21:20:46.900927 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:20:46.900934 kernel: rcu: RCU event tracing is enabled. Jan 13 21:20:46.900941 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 21:20:46.900948 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:20:46.900955 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:20:46.900962 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 21:20:46.900969 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 21:20:46.900976 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 21:20:46.900984 kernel: GICv3: 256 SPIs implemented Jan 13 21:20:46.900991 kernel: GICv3: 0 Extended SPIs implemented Jan 13 21:20:46.900998 kernel: Root IRQ handler: gic_handle_irq Jan 13 21:20:46.901004 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 13 21:20:46.901011 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 13 21:20:46.901018 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 13 21:20:46.901025 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 21:20:46.901032 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 13 21:20:46.901039 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 13 21:20:46.901045 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 13 21:20:46.901052 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:20:46.901060 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:20:46.901067 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 13 21:20:46.901074 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 13 21:20:46.901081 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 13 21:20:46.901088 kernel: arm-pv: using stolen time PV Jan 13 21:20:46.901095 kernel: Console: colour dummy device 80x25 Jan 13 21:20:46.901108 kernel: ACPI: Core revision 20230628 Jan 13 21:20:46.901116 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 13 21:20:46.901123 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:20:46.901130 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:20:46.901139 kernel: landlock: Up and running. Jan 13 21:20:46.901146 kernel: SELinux: Initializing. Jan 13 21:20:46.901152 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:20:46.901160 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:20:46.901167 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:20:46.901174 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:20:46.901181 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:20:46.901188 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:20:46.901195 kernel: Platform MSI: ITS@0x8080000 domain created Jan 13 21:20:46.901203 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 13 21:20:46.901210 kernel: Remapping and enabling EFI services. Jan 13 21:20:46.901217 kernel: smp: Bringing up secondary CPUs ... 
Jan 13 21:20:46.901223 kernel: Detected PIPT I-cache on CPU1 Jan 13 21:20:46.901230 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 13 21:20:46.901237 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 13 21:20:46.901244 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:20:46.901251 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 13 21:20:46.901258 kernel: Detected PIPT I-cache on CPU2 Jan 13 21:20:46.901265 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 13 21:20:46.901273 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 13 21:20:46.901280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:20:46.901291 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 13 21:20:46.901300 kernel: Detected PIPT I-cache on CPU3 Jan 13 21:20:46.901307 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 13 21:20:46.901314 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 13 21:20:46.901322 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:20:46.901329 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 13 21:20:46.901336 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 21:20:46.901345 kernel: SMP: Total of 4 processors activated. Jan 13 21:20:46.901352 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 21:20:46.901359 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 13 21:20:46.901367 kernel: CPU features: detected: Common not Private translations Jan 13 21:20:46.901374 kernel: CPU features: detected: CRC32 instructions Jan 13 21:20:46.901381 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 13 21:20:46.901389 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 13 21:20:46.901396 kernel: CPU features: detected: LSE atomic instructions Jan 13 21:20:46.901404 kernel: CPU features: detected: Privileged Access Never Jan 13 21:20:46.901411 kernel: CPU features: detected: RAS Extension Support Jan 13 21:20:46.901419 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 13 21:20:46.901426 kernel: CPU: All CPU(s) started at EL1 Jan 13 21:20:46.901433 kernel: alternatives: applying system-wide alternatives Jan 13 21:20:46.901440 kernel: devtmpfs: initialized Jan 13 21:20:46.901448 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:20:46.901455 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 21:20:46.901462 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:20:46.901471 kernel: SMBIOS 3.0.0 present. 
Jan 13 21:20:46.901478 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 13 21:20:46.901485 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:20:46.901492 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 21:20:46.901500 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 21:20:46.901507 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 21:20:46.901515 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:20:46.901522 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jan 13 21:20:46.901529 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:20:46.901537 kernel: cpuidle: using governor menu Jan 13 21:20:46.901544 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 13 21:20:46.901552 kernel: ASID allocator initialised with 32768 entries Jan 13 21:20:46.901559 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:20:46.901566 kernel: Serial: AMBA PL011 UART driver Jan 13 21:20:46.901573 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 13 21:20:46.901580 kernel: Modules: 0 pages in range for non-PLT usage Jan 13 21:20:46.901588 kernel: Modules: 509040 pages in range for PLT usage Jan 13 21:20:46.901595 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:20:46.901603 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:20:46.901611 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 21:20:46.901618 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 21:20:46.901625 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:20:46.901633 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:20:46.901640 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 21:20:46.901647 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 21:20:46.901654 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:20:46.901661 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:20:46.901670 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:20:46.901677 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:20:46.901684 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:20:46.901691 kernel: ACPI: Interpreter enabled Jan 13 21:20:46.901699 kernel: ACPI: Using GIC for interrupt routing Jan 13 21:20:46.901706 kernel: ACPI: MCFG table detected, 1 entries Jan 13 21:20:46.901713 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 13 21:20:46.901720 kernel: printk: console [ttyAMA0] enabled Jan 13 21:20:46.901727 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:20:46.901856 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:20:46.901957 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 21:20:46.902026 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 21:20:46.902088 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 13 21:20:46.902160 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 13 21:20:46.902172 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 13 
21:20:46.902179 kernel: PCI host bridge to bus 0000:00 Jan 13 21:20:46.902255 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 13 21:20:46.902330 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 21:20:46.902389 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 13 21:20:46.902447 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:20:46.902524 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 13 21:20:46.902597 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:20:46.902665 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 13 21:20:46.902730 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 13 21:20:46.902793 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 21:20:46.902857 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 21:20:46.902934 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 13 21:20:46.903001 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 13 21:20:46.903059 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 13 21:20:46.903131 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 21:20:46.903190 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 13 21:20:46.903199 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 21:20:46.903207 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 21:20:46.903214 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 21:20:46.903221 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 21:20:46.903229 kernel: iommu: Default domain type: Translated Jan 13 21:20:46.903237 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 21:20:46.903246 kernel: efivars: Registered efivars operations Jan 13 21:20:46.903254 kernel: vgaarb: loaded Jan 13 21:20:46.903261 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 21:20:46.903268 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:20:46.903276 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:20:46.903283 kernel: pnp: PnP ACPI init Jan 13 21:20:46.903357 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 13 21:20:46.903367 kernel: pnp: PnP ACPI: found 1 devices Jan 13 21:20:46.903374 kernel: NET: Registered PF_INET protocol family Jan 13 21:20:46.903383 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:20:46.903391 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 21:20:46.903398 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:20:46.903405 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:20:46.903413 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 21:20:46.903420 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 21:20:46.903427 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:20:46.903434 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:20:46.903442 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:20:46.903450 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:20:46.903457 kernel: kvm [1]: HYP mode 
not available Jan 13 21:20:46.903465 kernel: Initialise system trusted keyrings Jan 13 21:20:46.903472 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 21:20:46.903479 kernel: Key type asymmetric registered Jan 13 21:20:46.903486 kernel: Asymmetric key parser 'x509' registered Jan 13 21:20:46.903493 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 21:20:46.903501 kernel: io scheduler mq-deadline registered Jan 13 21:20:46.903508 kernel: io scheduler kyber registered Jan 13 21:20:46.903516 kernel: io scheduler bfq registered Jan 13 21:20:46.903524 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 21:20:46.903531 kernel: ACPI: button: Power Button [PWRB] Jan 13 21:20:46.903538 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 21:20:46.903616 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 13 21:20:46.903626 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:20:46.903633 kernel: thunder_xcv, ver 1.0 Jan 13 21:20:46.903641 kernel: thunder_bgx, ver 1.0 Jan 13 21:20:46.903648 kernel: nicpf, ver 1.0 Jan 13 21:20:46.903657 kernel: nicvf, ver 1.0 Jan 13 21:20:46.903729 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 21:20:46.903793 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:20:46 UTC (1736803246) Jan 13 21:20:46.903803 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 21:20:46.903811 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 13 21:20:46.903818 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 21:20:46.903825 kernel: watchdog: Hard watchdog permanently disabled Jan 13 21:20:46.903833 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:20:46.903842 kernel: Segment Routing with IPv6 Jan 13 21:20:46.903849 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:20:46.903857 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:20:46.903864 kernel: Key type dns_resolver registered Jan 13 21:20:46.903871 kernel: registered taskstats version 1 Jan 13 21:20:46.903878 kernel: Loading compiled-in X.509 certificates Jan 13 21:20:46.903886 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638' Jan 13 21:20:46.903893 kernel: Key type .fscrypt registered Jan 13 21:20:46.903911 kernel: Key type fscrypt-provisioning registered Jan 13 21:20:46.903921 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 21:20:46.903928 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:20:46.903935 kernel: ima: No architecture policies found Jan 13 21:20:46.903943 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 21:20:46.903950 kernel: clk: Disabling unused clocks Jan 13 21:20:46.903957 kernel: Freeing unused kernel memory: 39360K Jan 13 21:20:46.903964 kernel: Run /init as init process Jan 13 21:20:46.903971 kernel: with arguments: Jan 13 21:20:46.903979 kernel: /init Jan 13 21:20:46.903987 kernel: with environment: Jan 13 21:20:46.903994 kernel: HOME=/ Jan 13 21:20:46.904001 kernel: TERM=linux Jan 13 21:20:46.904008 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:20:46.904017 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:20:46.904026 systemd[1]: Detected virtualization kvm. Jan 13 21:20:46.904034 systemd[1]: Detected architecture arm64. Jan 13 21:20:46.904043 systemd[1]: Running in initrd. Jan 13 21:20:46.904051 systemd[1]: No hostname configured, using default hostname. Jan 13 21:20:46.904058 systemd[1]: Hostname set to . Jan 13 21:20:46.904066 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:20:46.904074 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:20:46.904081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:20:46.904089 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:20:46.904097 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:20:46.904114 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:20:46.904122 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:20:46.904130 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:20:46.904139 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:20:46.904147 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:20:46.904155 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:20:46.904163 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:20:46.904172 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:20:46.904180 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:20:46.904188 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:20:46.904196 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:20:46.904203 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:20:46.904211 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:20:46.904219 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:20:46.904227 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:20:46.904235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 21:20:46.904244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:20:46.904252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:20:46.904259 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:20:46.904267 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:20:46.904275 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:20:46.904283 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:20:46.904291 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:20:46.904299 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:20:46.904308 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:20:46.904316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:20:46.904323 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:20:46.904331 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:20:46.904339 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:20:46.904347 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:20:46.904357 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:20:46.904382 systemd-journald[237]: Collecting audit messages is disabled. Jan 13 21:20:46.904400 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:20:46.904410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:20:46.904419 systemd-journald[237]: Journal started Jan 13 21:20:46.904437 systemd-journald[237]: Runtime Journal (/run/log/journal/f9c751ac1feb43b9ba4f96c7830237d0) is 5.9M, max 47.3M, 41.4M free. Jan 13 21:20:46.898237 systemd-modules-load[239]: Inserted module 'overlay' Jan 13 21:20:46.908069 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:20:46.910921 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:20:46.913435 kernel: Bridge firewalling registered Jan 13 21:20:46.911204 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:20:46.912476 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 13 21:20:46.912647 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:20:46.913675 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:20:46.915514 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:20:46.919713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:20:46.923026 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:20:46.927116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:20:46.938083 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:20:46.939068 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:20:46.941964 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 13 21:20:46.954520 dracut-cmdline[280]: dracut-dracut-053 Jan 13 21:20:46.956949 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 13 21:20:46.960948 systemd-resolved[272]: Positive Trust Anchors: Jan 13 21:20:46.960964 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:20:46.960996 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:20:46.965536 systemd-resolved[272]: Defaulting to hostname 'linux'. Jan 13 21:20:46.966531 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:20:46.968676 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:20:47.028925 kernel: SCSI subsystem initialized Jan 13 21:20:47.032915 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:20:47.040924 kernel: iscsi: registered transport (tcp) Jan 13 21:20:47.054242 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:20:47.054284 kernel: QLogic iSCSI HBA Driver Jan 13 21:20:47.101837 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:20:47.108016 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:20:47.128189 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:20:47.128997 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:20:47.129009 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:20:47.175921 kernel: raid6: neonx8 gen() 15657 MB/s Jan 13 21:20:47.192922 kernel: raid6: neonx4 gen() 15541 MB/s Jan 13 21:20:47.209926 kernel: raid6: neonx2 gen() 13114 MB/s Jan 13 21:20:47.226922 kernel: raid6: neonx1 gen() 10420 MB/s Jan 13 21:20:47.243921 kernel: raid6: int64x8 gen() 6916 MB/s Jan 13 21:20:47.260918 kernel: raid6: int64x4 gen() 7308 MB/s Jan 13 21:20:47.277912 kernel: raid6: int64x2 gen() 6096 MB/s Jan 13 21:20:47.294911 kernel: raid6: int64x1 gen() 5025 MB/s Jan 13 21:20:47.294924 kernel: raid6: using algorithm neonx8 gen() 15657 MB/s Jan 13 21:20:47.311919 kernel: raid6: .... xor() 11813 MB/s, rmw enabled Jan 13 21:20:47.311934 kernel: raid6: using neon recovery algorithm Jan 13 21:20:47.316958 kernel: xor: measuring software checksum speed Jan 13 21:20:47.316977 kernel: 8regs : 19697 MB/sec Jan 13 21:20:47.318003 kernel: 32regs : 19004 MB/sec Jan 13 21:20:47.318015 kernel: arm64_neon : 26945 MB/sec Jan 13 21:20:47.318024 kernel: xor: using function: arm64_neon (26945 MB/sec) Jan 13 21:20:47.371928 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:20:47.382934 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 21:20:47.393023 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:20:47.405396 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jan 13 21:20:47.408591 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:20:47.416044 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:20:47.427202 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Jan 13 21:20:47.454887 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:20:47.467031 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:20:47.506276 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:20:47.518130 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:20:47.530254 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:20:47.533584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:20:47.534581 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:20:47.536447 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:20:47.552072 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:20:47.553686 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 13 21:20:47.566834 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:20:47.566969 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:20:47.566981 kernel: GPT:9289727 != 19775487 Jan 13 21:20:47.566991 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:20:47.567001 kernel: GPT:9289727 != 19775487 Jan 13 21:20:47.567010 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:20:47.567024 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:20:47.561270 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:20:47.561378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:20:47.562968 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:20:47.564533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:20:47.564657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:20:47.565912 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:20:47.572851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:20:47.575426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:20:47.584921 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (522) Jan 13 21:20:47.584956 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507) Jan 13 21:20:47.585000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:20:47.596117 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:20:47.605084 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jan 13 21:20:47.606000 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:20:47.611413 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:20:47.615682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:20:47.626104 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:20:47.627590 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:20:47.633314 disk-uuid[550]: Primary Header is updated. Jan 13 21:20:47.633314 disk-uuid[550]: Secondary Entries is updated. Jan 13 21:20:47.633314 disk-uuid[550]: Secondary Header is updated. Jan 13 21:20:47.635907 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:20:47.651846 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:20:48.647916 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:20:48.648468 disk-uuid[551]: The operation has completed successfully. Jan 13 21:20:48.669166 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:20:48.669285 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:20:48.690053 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:20:48.692683 sh[573]: Success Jan 13 21:20:48.710923 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 21:20:48.738305 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:20:48.749290 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:20:48.750816 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:20:48.760930 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 13 21:20:48.760968 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:20:48.760979 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:20:48.762120 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:20:48.762139 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:20:48.765805 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:20:48.767342 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:20:48.778058 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:20:48.779395 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:20:48.786371 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:20:48.786411 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:20:48.786422 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:20:48.789053 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:20:48.796509 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:20:48.797934 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:20:48.804039 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 13 21:20:48.812064 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:20:48.880980 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:20:48.894025 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:20:48.920514 ignition[662]: Ignition 2.19.0 Jan 13 21:20:48.920528 ignition[662]: Stage: fetch-offline Jan 13 21:20:48.920559 ignition[662]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:20:48.921458 systemd-networkd[763]: lo: Link UP Jan 13 21:20:48.920569 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:20:48.921461 systemd-networkd[763]: lo: Gained carrier Jan 13 21:20:48.920713 ignition[662]: parsed url from cmdline: "" Jan 13 21:20:48.922119 systemd-networkd[763]: Enumeration completed Jan 13 21:20:48.920716 ignition[662]: no config URL provided Jan 13 21:20:48.922196 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:20:48.920721 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:20:48.922529 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:20:48.920727 ignition[662]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:20:48.922582 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:20:48.920746 ignition[662]: op(1): [started] loading QEMU firmware config module Jan 13 21:20:48.924172 systemd-networkd[763]: eth0: Link UP Jan 13 21:20:48.920753 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:20:48.924175 systemd-networkd[763]: eth0: Gained carrier Jan 13 21:20:48.932227 ignition[662]: op(1): [finished] loading QEMU firmware config module Jan 13 21:20:48.924182 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:20:48.924236 systemd[1]: Reached target network.target - Network. Jan 13 21:20:48.940937 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:20:48.974077 ignition[662]: parsing config with SHA512: aca8c4e1f99a2aa0fc1289347c5605d2f9afa580b0f5a871a16890536a837c7b7a4eaecf879f9bec8a47f52e90f2719a937747e55a61c6e6f80062658ea3a4d3 Jan 13 21:20:48.979679 unknown[662]: fetched base config from "system" Jan 13 21:20:48.979689 unknown[662]: fetched user config from "qemu" Jan 13 21:20:48.980145 ignition[662]: fetch-offline: fetch-offline passed Jan 13 21:20:48.981631 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:20:48.980209 ignition[662]: Ignition finished successfully Jan 13 21:20:48.983109 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:20:48.991050 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:20:49.001720 ignition[769]: Ignition 2.19.0 Jan 13 21:20:49.001729 ignition[769]: Stage: kargs Jan 13 21:20:49.001889 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:20:49.001917 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:20:49.002804 ignition[769]: kargs: kargs passed Jan 13 21:20:49.005460 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 13 21:20:49.002843 ignition[769]: Ignition finished successfully Jan 13 21:20:49.015034 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:20:49.023883 ignition[777]: Ignition 2.19.0 Jan 13 21:20:49.023893 ignition[777]: Stage: disks Jan 13 21:20:49.024053 ignition[777]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:20:49.024073 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:20:49.026976 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:20:49.024958 ignition[777]: disks: disks passed Jan 13 21:20:49.027762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:20:49.024996 ignition[777]: Ignition finished successfully Jan 13 21:20:49.029066 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:20:49.030265 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:20:49.031690 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:20:49.032805 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:20:49.041012 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:20:49.049995 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:20:49.053831 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:20:49.055869 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:20:49.099695 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:20:49.100822 kernel: EXT4-fs (vda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none. Jan 13 21:20:49.100711 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:20:49.108999 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:20:49.110434 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:20:49.111572 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:20:49.111628 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:20:49.117726 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (795) Jan 13 21:20:49.117746 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:20:49.117762 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:20:49.111652 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:20:49.120501 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:20:49.117228 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:20:49.120483 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:20:49.123925 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:20:49.124509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:20:49.162931 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:20:49.166691 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:20:49.169720 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:20:49.172696 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:20:49.246116 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:20:49.256031 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:20:49.257381 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:20:49.261918 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:20:49.277621 ignition[908]: INFO : Ignition 2.19.0 Jan 13 21:20:49.277621 ignition[908]: INFO : Stage: mount Jan 13 21:20:49.279391 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:20:49.279391 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:20:49.279391 ignition[908]: INFO : mount: mount passed Jan 13 21:20:49.279391 ignition[908]: INFO : Ignition finished successfully Jan 13 21:20:49.277830 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:20:49.281176 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:20:49.302034 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:20:49.759706 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:20:49.773080 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:20:49.777957 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (920) Jan 13 21:20:49.778008 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:20:49.779377 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:20:49.779392 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:20:49.781918 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:20:49.782652 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:20:49.797557 ignition[937]: INFO : Ignition 2.19.0 Jan 13 21:20:49.797557 ignition[937]: INFO : Stage: files Jan 13 21:20:49.798816 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:20:49.798816 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:20:49.798816 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:20:49.801383 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:20:49.801383 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:20:49.804063 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:20:49.805073 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:20:49.805073 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:20:49.804525 unknown[937]: wrote ssh authorized keys file for user: core Jan 13 21:20:49.807823 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:20:49.807823 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:20:49.807823 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 21:20:49.807823 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 21:20:49.854062 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:20:49.962856 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 21:20:49.962856 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:20:49.965633 ignition[937]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:20:49.965633 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 13 21:20:50.109164 systemd-networkd[763]: eth0: Gained IPv6LL Jan 13 21:20:50.219698 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 21:20:50.497996 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:20:50.497996 ignition[937]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 13 21:20:50.500820 ignition[937]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:20:50.522631 ignition[937]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:20:50.526710 ignition[937]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:20:50.526710 ignition[937]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:20:50.526710 ignition[937]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jan 13 
21:20:50.526710 ignition[937]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:20:50.526710 ignition[937]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:20:50.535557 ignition[937]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:20:50.535557 ignition[937]: INFO : files: files passed Jan 13 21:20:50.535557 ignition[937]: INFO : Ignition finished successfully Jan 13 21:20:50.529316 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:20:50.548068 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:20:50.551296 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:20:50.553811 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:20:50.554645 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:20:50.557665 initrd-setup-root-after-ignition[966]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:20:50.560615 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:20:50.560615 initrd-setup-root-after-ignition[968]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:20:50.562872 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:20:50.562683 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:20:50.563943 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:20:50.573030 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:20:50.591926 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:20:50.592056 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:20:50.593624 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:20:50.594882 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:20:50.596278 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:20:50.596973 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:20:50.611330 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:20:50.618052 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:20:50.625506 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:20:50.626504 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:20:50.627935 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:20:50.629228 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:20:50.629340 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:20:50.631156 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:20:50.632562 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:20:50.633714 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 13 21:20:50.634934 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:20:50.636343 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:20:50.637841 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:20:50.639164 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:20:50.640558 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:20:50.641938 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:20:50.643190 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:20:50.644268 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:20:50.644382 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:20:50.646079 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:20:50.647493 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:20:50.648886 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:20:50.649957 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:20:50.651182 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:20:50.651293 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:20:50.653327 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:20:50.653444 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:20:50.654872 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:20:50.656035 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:20:50.656952 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:20:50.658253 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:20:50.659479 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:20:50.660959 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:20:50.661058 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:20:50.662174 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:20:50.662253 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:20:50.663451 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:20:50.663552 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:20:50.664778 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:20:50.664878 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:20:50.676062 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:20:50.676707 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:20:50.676828 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:20:50.679536 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:20:50.680235 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:20:50.680351 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:20:50.681653 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:20:50.681742 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 13 21:20:50.688236 ignition[992]: INFO : Ignition 2.19.0 Jan 13 21:20:50.688236 ignition[992]: INFO : Stage: umount Jan 13 21:20:50.688236 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:20:50.688236 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:20:50.691317 ignition[992]: INFO : umount: umount passed Jan 13 21:20:50.691317 ignition[992]: INFO : Ignition finished successfully Jan 13 21:20:50.689866 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:20:50.690824 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:20:50.693587 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:20:50.694077 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:20:50.694204 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:20:50.695252 systemd[1]: Stopped target network.target - Network. Jan 13 21:20:50.696635 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:20:50.696694 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:20:50.698013 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:20:50.698070 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:20:50.699169 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:20:50.699205 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:20:50.700545 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:20:50.700587 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:20:50.701520 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:20:50.702759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:20:50.704264 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:20:50.704341 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:20:50.705743 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:20:50.705831 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:20:50.709104 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:20:50.709214 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:20:50.711384 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:20:50.711441 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:20:50.711966 systemd-networkd[763]: eth0: DHCPv6 lease lost Jan 13 21:20:50.713802 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:20:50.714981 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:20:50.716284 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:20:50.716319 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:20:50.726094 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:20:50.727460 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:20:50.727525 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:20:50.728412 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:20:50.728449 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 21:20:50.729677 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:20:50.729713 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:20:50.731404 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:20:50.739547 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:20:50.739656 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:20:50.750658 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:20:50.750801 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:20:50.752480 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:20:50.752517 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:20:50.754407 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:20:50.754444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:20:50.755672 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:20:50.755713 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:20:50.757765 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:20:50.757805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:20:50.759764 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:20:50.759806 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:20:50.774074 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:20:50.774850 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:20:50.774923 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:20:50.776577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:20:50.776627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:20:50.779806 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:20:50.779932 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:20:50.781441 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:20:50.783381 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:20:50.792226 systemd[1]: Switching root. Jan 13 21:20:50.818376 systemd-journald[237]: Journal stopped Jan 13 21:20:51.529481 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 13 21:20:51.529537 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:20:51.529550 kernel: SELinux: policy capability open_perms=1 Jan 13 21:20:51.529559 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:20:51.529569 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:20:51.529584 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:20:51.529597 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:20:51.529606 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:20:51.529616 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:20:51.529626 kernel: audit: type=1403 audit(1736803250.998:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:20:51.529637 systemd[1]: Successfully loaded SELinux policy in 32.929ms. 
Jan 13 21:20:51.529654 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.340ms. Jan 13 21:20:51.529666 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:20:51.529677 systemd[1]: Detected virtualization kvm. Jan 13 21:20:51.529690 systemd[1]: Detected architecture arm64. Jan 13 21:20:51.529700 systemd[1]: Detected first boot. Jan 13 21:20:51.529711 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:20:51.529721 zram_generator::config[1056]: No configuration found. Jan 13 21:20:51.529732 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:20:51.529742 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:20:51.529756 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:20:51.529768 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:20:51.529780 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:20:51.529790 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:20:51.529801 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:20:51.529811 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:20:51.529823 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:20:51.529833 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:20:51.529844 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:20:51.529854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:20:51.529866 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:20:51.529877 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:20:51.529887 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:20:51.529915 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:20:51.529930 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:20:51.529941 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 21:20:51.529952 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:20:51.529962 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:20:51.529973 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:20:51.529985 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:20:51.529996 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:20:51.530007 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:20:51.530017 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:20:51.530029 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
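[Editor's note] systemd reports the environment it detected here: KVM virtualization, arm64, first boot, machine ID initialized from the VM UUID. A small sketch (not part of the log) that re-checks the same facts on a running system, using the standard systemd-detect-virt tool, Python's platform module, and /etc/machine-id:

    #!/usr/bin/env python3
    # Sketch: confirm the environment systemd reported during boot.
    import platform
    import subprocess
    from pathlib import Path

    # "Detected virtualization kvm" in the journal corresponds to this query.
    virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True).stdout.strip()

    print("virtualization:", virt or "none")
    print("architecture:  ", platform.machine())                      # expected: aarch64
    print("machine-id:    ", Path("/etc/machine-id").read_text().strip())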
Jan 13 21:20:51.530047 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:20:51.530058 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:20:51.530069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:20:51.530080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:20:51.530093 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:20:51.530104 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:20:51.530115 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:20:51.530125 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:20:51.530135 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:20:51.530146 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:20:51.530156 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:20:51.530167 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:20:51.530179 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:20:51.530189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:20:51.530200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:20:51.530210 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:20:51.530220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:20:51.530231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:20:51.530245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:20:51.530255 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:20:51.530266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:20:51.530278 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:20:51.530289 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 21:20:51.530300 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 13 21:20:51.530312 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:20:51.530323 kernel: fuse: init (API version 7.39) Jan 13 21:20:51.530333 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:20:51.530344 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:20:51.530354 kernel: loop: module loaded Jan 13 21:20:51.530365 kernel: ACPI: bus type drm_connector registered Jan 13 21:20:51.530376 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:20:51.530387 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:20:51.530398 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:20:51.530410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 13 21:20:51.530435 systemd-journald[1149]: Collecting audit messages is disabled. Jan 13 21:20:51.530458 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:20:51.530468 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:20:51.530481 systemd-journald[1149]: Journal started Jan 13 21:20:51.530502 systemd-journald[1149]: Runtime Journal (/run/log/journal/f9c751ac1feb43b9ba4f96c7830237d0) is 5.9M, max 47.3M, 41.4M free. Jan 13 21:20:51.532942 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:20:51.533928 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:20:51.534797 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:20:51.535864 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:20:51.537176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:20:51.538292 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:20:51.538454 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:20:51.539581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:20:51.539733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:20:51.540815 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:20:51.541121 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:20:51.542140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:20:51.542289 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:20:51.543394 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:20:51.543547 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:20:51.544599 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:20:51.544820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:20:51.546214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:20:51.547549 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:20:51.548756 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:20:51.559811 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:20:51.573969 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:20:51.575758 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:20:51.576605 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:20:51.580757 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:20:51.584994 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:20:51.585912 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:20:51.587334 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:20:51.590715 systemd-journald[1149]: Time spent on flushing to /var/log/journal/f9c751ac1feb43b9ba4f96c7830237d0 is 16.046ms for 840 entries. 
Jan 13 21:20:51.590715 systemd-journald[1149]: System Journal (/var/log/journal/f9c751ac1feb43b9ba4f96c7830237d0) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:20:51.614472 systemd-journald[1149]: Received client request to flush runtime journal. Jan 13 21:20:51.588257 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:20:51.591155 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:20:51.597052 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:20:51.599303 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:20:51.601767 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:20:51.602768 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:20:51.603893 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:20:51.605961 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:20:51.614848 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:20:51.618179 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:20:51.625424 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:20:51.626798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:20:51.629929 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 13 21:20:51.629946 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 13 21:20:51.633809 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:20:51.641110 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:20:51.659227 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:20:51.671077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:20:51.682052 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Jan 13 21:20:51.682072 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Jan 13 21:20:51.685664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:20:52.014378 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:20:52.030044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:20:52.049337 systemd-udevd[1218]: Using default interface naming scheme 'v255'. Jan 13 21:20:52.062844 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:20:52.078072 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:20:52.096536 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1225) Jan 13 21:20:52.096157 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:20:52.097449 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 13 21:20:52.145273 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:20:52.171361 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
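[Editor's note] journald reports above that flushing the runtime journal to /var/log/journal took 16.046 ms for 840 entries, i.e. roughly 19 microseconds per entry. The same arithmetic, using only the figures from the log:

    # Worked arithmetic from the journald flush statistics above.
    flush_ms = 16.046      # time journald reports for the flush
    entries = 840          # entries flushed to /var/log/journal

    per_entry_us = flush_ms * 1000 / entries
    print(f"{per_entry_us:.1f} microseconds per entry")   # ~19.1 us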
Jan 13 21:20:52.190145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:20:52.198100 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:20:52.215080 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:20:52.231340 systemd-networkd[1226]: lo: Link UP Jan 13 21:20:52.231349 systemd-networkd[1226]: lo: Gained carrier Jan 13 21:20:52.232470 systemd-networkd[1226]: Enumeration completed Jan 13 21:20:52.232635 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:20:52.233044 systemd-networkd[1226]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:20:52.233047 systemd-networkd[1226]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:20:52.233721 systemd-networkd[1226]: eth0: Link UP Jan 13 21:20:52.233724 systemd-networkd[1226]: eth0: Gained carrier Jan 13 21:20:52.233735 systemd-networkd[1226]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:20:52.235158 lvm[1254]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:20:52.244075 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:20:52.247694 systemd-networkd[1226]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:20:52.249984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:20:52.264540 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:20:52.265766 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:20:52.273145 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:20:52.276901 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:20:52.307313 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:20:52.308427 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:20:52.309398 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:20:52.309428 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:20:52.310181 systemd[1]: Reached target machines.target - Containers. Jan 13 21:20:52.311841 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:20:52.323042 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:20:52.325029 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:20:52.325863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:20:52.326723 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:20:52.328607 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:20:52.331099 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
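[Editor's note] systemd-networkd brings eth0 up above and acquires 10.0.0.88/16 with gateway 10.0.0.1 over DHCPv4. A short sketch using Python's standard ipaddress module to expand what that lease implies; the address, prefix, and gateway are taken straight from the log:

    # Sketch: expand the DHCPv4 lease reported by systemd-networkd for eth0.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.88/16")   # address/prefix from the journal
    gateway = ipaddress.ip_address("10.0.0.1")

    net = iface.network
    print("network:     ", net)                      # 10.0.0.0/16
    print("broadcast:   ", net.broadcast_address)    # 10.0.255.255
    print("usable hosts:", net.num_addresses - 2)    # 65534
    print("gateway in subnet:", gateway in net)      # True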
Jan 13 21:20:52.332564 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:20:52.341536 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:20:52.346978 kernel: loop0: detected capacity change from 0 to 114328 Jan 13 21:20:52.354198 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:20:52.354850 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:20:52.357919 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:20:52.405927 kernel: loop1: detected capacity change from 0 to 114432 Jan 13 21:20:52.441931 kernel: loop2: detected capacity change from 0 to 194512 Jan 13 21:20:52.475346 kernel: loop3: detected capacity change from 0 to 114328 Jan 13 21:20:52.479943 kernel: loop4: detected capacity change from 0 to 114432 Jan 13 21:20:52.486913 kernel: loop5: detected capacity change from 0 to 194512 Jan 13 21:20:52.490773 (sd-merge)[1285]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:20:52.491176 (sd-merge)[1285]: Merged extensions into '/usr'. Jan 13 21:20:52.494081 systemd[1]: Reloading requested from client PID 1272 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:20:52.494096 systemd[1]: Reloading... Jan 13 21:20:52.531927 zram_generator::config[1317]: No configuration found. Jan 13 21:20:52.560813 ldconfig[1269]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:20:52.622490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:20:52.664528 systemd[1]: Reloading finished in 170 ms. Jan 13 21:20:52.684720 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:20:52.685946 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:20:52.701075 systemd[1]: Starting ensure-sysext.service... Jan 13 21:20:52.702808 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:20:52.705970 systemd[1]: Reloading requested from client PID 1355 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:20:52.705986 systemd[1]: Reloading... Jan 13 21:20:52.718306 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:20:52.718564 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:20:52.719214 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:20:52.719437 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jan 13 21:20:52.719487 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jan 13 21:20:52.721760 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:20:52.721775 systemd-tmpfiles[1356]: Skipping /boot Jan 13 21:20:52.728606 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:20:52.728622 systemd-tmpfiles[1356]: Skipping /boot Jan 13 21:20:52.752968 zram_generator::config[1384]: No configuration found. 
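[Editor's note] systemd-sysext reports merging the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images into /usr just above. A sketch that lists extension images under the two directories this log actually mentions (/etc/extensions for the symlink Ignition created, /opt/extensions for the downloaded image); other sysext search paths exist but are not assumed here:

    # Sketch: list sysext images in the two locations that appear in this log.
    from pathlib import Path

    # /etc/extensions/kubernetes.raw was created by Ignition as a link to the
    # image stored under /opt/extensions (see the files stage earlier in the log).
    for directory in (Path("/etc/extensions"), Path("/opt/extensions")):
        if not directory.is_dir():
            continue
        print(f"{directory}:")
        for entry in sorted(directory.rglob("*.raw")):
            target = f" -> {entry.resolve()}" if entry.is_symlink() else ""
            print(f"  {entry}{target}")

On a live system, "systemd-sysext status" reports the same merge state that the "Merged extensions into '/usr'" message records here.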
Jan 13 21:20:52.838057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:20:52.880232 systemd[1]: Reloading finished in 173 ms. Jan 13 21:20:52.893701 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:20:52.915518 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:20:52.917744 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:20:52.919759 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:20:52.923118 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:20:52.927054 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:20:52.933669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:20:52.935148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:20:52.938862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:20:52.943237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:20:52.944680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:20:52.948587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:20:52.948852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:20:52.954153 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:20:52.958665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:20:52.958809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:20:52.960572 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:20:52.960709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:20:52.965630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:20:52.972188 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:20:52.974837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:20:52.974912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:20:52.977098 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:20:52.978596 systemd[1]: Finished ensure-sysext.service. Jan 13 21:20:52.979661 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:20:52.981128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:20:52.981292 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:20:52.981359 augenrules[1463]: No rules Jan 13 21:20:52.982812 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 13 21:20:52.984195 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:20:52.986149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:20:52.993313 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:20:52.994600 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:20:52.996687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:20:53.004086 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:20:53.004880 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:20:53.015091 systemd-resolved[1430]: Positive Trust Anchors: Jan 13 21:20:53.015105 systemd-resolved[1430]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:20:53.015137 systemd-resolved[1430]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:20:53.020797 systemd-resolved[1430]: Defaulting to hostname 'linux'. Jan 13 21:20:53.026191 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:20:53.027079 systemd[1]: Reached target network.target - Network. Jan 13 21:20:53.027710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:20:53.049448 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:20:53.489323 systemd-resolved[1430]: Clock change detected. Flushing caches. Jan 13 21:20:53.489372 systemd-timesyncd[1478]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:20:53.489416 systemd-timesyncd[1478]: Initial clock synchronization to Mon 2025-01-13 21:20:53.489280 UTC. Jan 13 21:20:53.489559 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:20:53.490443 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:20:53.491370 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:20:53.492278 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:20:53.493188 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:20:53.493222 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:20:53.493887 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:20:53.494740 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:20:53.495629 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:20:53.496513 systemd[1]: Reached target timers.target - Timer Units. 
Jan 13 21:20:53.497888 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:20:53.500077 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:20:53.502061 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:20:53.512582 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:20:53.513420 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:20:53.514140 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:20:53.514939 systemd[1]: System is tainted: cgroupsv1 Jan 13 21:20:53.514986 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:20:53.515006 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:20:53.516090 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:20:53.517858 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:20:53.519475 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:20:53.523792 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:20:53.524604 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:20:53.525590 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:20:53.532084 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:20:53.533484 jq[1484]: false Jan 13 21:20:53.547690 extend-filesystems[1485]: Found loop3 Jan 13 21:20:53.553134 extend-filesystems[1485]: Found loop4 Jan 13 21:20:53.553134 extend-filesystems[1485]: Found loop5 Jan 13 21:20:53.553134 extend-filesystems[1485]: Found vda Jan 13 21:20:53.553134 extend-filesystems[1485]: Found vda1 Jan 13 21:20:53.553134 extend-filesystems[1485]: Found vda2 Jan 13 21:20:53.553134 extend-filesystems[1485]: Found vda3 Jan 13 21:20:53.553134 extend-filesystems[1485]: Found usr Jan 13 21:20:53.553134 extend-filesystems[1485]: Found vda4 Jan 13 21:20:53.553134 extend-filesystems[1485]: Found vda6 Jan 13 21:20:53.553134 extend-filesystems[1485]: Found vda7 Jan 13 21:20:53.553134 extend-filesystems[1485]: Found vda9 Jan 13 21:20:53.553134 extend-filesystems[1485]: Checking size of /dev/vda9 Jan 13 21:20:53.552800 dbus-daemon[1483]: [system] SELinux support is enabled Jan 13 21:20:53.549820 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:20:53.554781 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:20:53.565156 extend-filesystems[1485]: Resized partition /dev/vda9 Jan 13 21:20:53.569033 extend-filesystems[1509]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:20:53.578273 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:20:53.578336 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1222) Jan 13 21:20:53.572800 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:20:53.577920 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:20:53.583865 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 13 21:20:53.587799 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:20:53.589439 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:20:53.596671 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:20:53.600021 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:20:53.600275 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:20:53.600511 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:20:53.600755 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:20:53.602364 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:20:53.602578 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:20:53.612870 jq[1512]: true Jan 13 21:20:53.615254 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:20:53.615254 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:20:53.615254 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:20:53.617822 extend-filesystems[1485]: Resized filesystem in /dev/vda9 Jan 13 21:20:53.626184 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:20:53.626442 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:20:53.635670 jq[1524]: true Jan 13 21:20:53.647473 update_engine[1510]: I20250113 21:20:53.646904 1510 main.cc:92] Flatcar Update Engine starting Jan 13 21:20:53.650183 tar[1514]: linux-arm64/helm Jan 13 21:20:53.650420 update_engine[1510]: I20250113 21:20:53.649804 1510 update_check_scheduler.cc:74] Next update check in 3m30s Jan 13 21:20:53.650274 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:20:53.651597 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 21:20:53.653209 systemd-logind[1504]: New seat seat0. Jan 13 21:20:53.654801 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:20:53.655698 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:20:53.658307 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:20:53.658469 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:20:53.659490 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:20:53.659600 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:20:53.661760 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:20:53.666895 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:20:53.707885 bash[1547]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:20:53.710194 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
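[Editor's note] The online resize reported above grows /dev/vda9 from 553472 to 1864699 blocks at a 4 KiB block size, i.e. from about 2.1 GiB to about 7.1 GiB. The conversion, using only numbers from the log:

    # Worked conversion of the ext4 resize figures reported for /dev/vda9.
    BLOCK = 4096                      # "(4k) blocks" per the resize output above
    GIB = 1024 ** 3

    before_blocks = 553_472           # size before the online resize
    after_blocks = 1_864_699          # size after the online resize

    print(f"before: {before_blocks * BLOCK / GIB:.2f} GiB")   # ~2.11 GiB
    print(f"after:  {after_blocks * BLOCK / GIB:.2f} GiB")    # ~7.11 GiB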
Jan 13 21:20:53.711816 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:20:53.721595 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:20:53.850719 containerd[1526]: time="2025-01-13T21:20:53.850588317Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:20:53.875868 systemd-networkd[1226]: eth0: Gained IPv6LL Jan 13 21:20:53.878572 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:20:53.880072 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:20:53.885608 containerd[1526]: time="2025-01-13T21:20:53.885451917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:20:53.887200 containerd[1526]: time="2025-01-13T21:20:53.887154237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:20:53.887200 containerd[1526]: time="2025-01-13T21:20:53.887198717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:20:53.887282 containerd[1526]: time="2025-01-13T21:20:53.887215197Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:20:53.888214 containerd[1526]: time="2025-01-13T21:20:53.887415397Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:20:53.888214 containerd[1526]: time="2025-01-13T21:20:53.887443597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888214 containerd[1526]: time="2025-01-13T21:20:53.887597997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888214 containerd[1526]: time="2025-01-13T21:20:53.887615557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888214 containerd[1526]: time="2025-01-13T21:20:53.887963677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888214 containerd[1526]: time="2025-01-13T21:20:53.887982757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888214 containerd[1526]: time="2025-01-13T21:20:53.887996957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888214 containerd[1526]: time="2025-01-13T21:20:53.888016757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888214 containerd[1526]: time="2025-01-13T21:20:53.888097757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888388 containerd[1526]: time="2025-01-13T21:20:53.888269037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888407 containerd[1526]: time="2025-01-13T21:20:53.888390317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:20:53.888407 containerd[1526]: time="2025-01-13T21:20:53.888404397Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:20:53.888501 containerd[1526]: time="2025-01-13T21:20:53.888477397Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:20:53.888553 containerd[1526]: time="2025-01-13T21:20:53.888539997Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:20:53.891892 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:20:53.893499 containerd[1526]: time="2025-01-13T21:20:53.892486037Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:20:53.893499 containerd[1526]: time="2025-01-13T21:20:53.892583637Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:20:53.893499 containerd[1526]: time="2025-01-13T21:20:53.892603397Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:20:53.893499 containerd[1526]: time="2025-01-13T21:20:53.892619437Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:20:53.893499 containerd[1526]: time="2025-01-13T21:20:53.892733637Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:20:53.893499 containerd[1526]: time="2025-01-13T21:20:53.892888517Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:20:53.894966 containerd[1526]: time="2025-01-13T21:20:53.894928877Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895086237Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895115677Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895134757Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895164477Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895182357Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895198597Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895218837Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895237917Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895254837Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895270637Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895285797Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895311757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895337237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896678 containerd[1526]: time="2025-01-13T21:20:53.895354477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895368477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895386997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895403877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895419677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895436437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895453077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895482557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895496677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895512997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895528677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895549557Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895579397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895595757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.896987 containerd[1526]: time="2025-01-13T21:20:53.895611037Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:20:53.896959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:20:53.899031 containerd[1526]: time="2025-01-13T21:20:53.899001797Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:20:53.899097 containerd[1526]: time="2025-01-13T21:20:53.899035277Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:20:53.899097 containerd[1526]: time="2025-01-13T21:20:53.899047357Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:20:53.899097 containerd[1526]: time="2025-01-13T21:20:53.899059397Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:20:53.899097 containerd[1526]: time="2025-01-13T21:20:53.899069117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:20:53.899097 containerd[1526]: time="2025-01-13T21:20:53.899081957Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:20:53.899097 containerd[1526]: time="2025-01-13T21:20:53.899093037Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:20:53.899218 containerd[1526]: time="2025-01-13T21:20:53.899103717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:20:53.899717 containerd[1526]: time="2025-01-13T21:20:53.899443757Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:20:53.899717 containerd[1526]: time="2025-01-13T21:20:53.899531037Z" level=info msg="Connect containerd service" Jan 13 21:20:53.899717 containerd[1526]: time="2025-01-13T21:20:53.899626757Z" level=info msg="using legacy CRI server" Jan 13 21:20:53.899717 containerd[1526]: time="2025-01-13T21:20:53.899652957Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:20:53.899922 containerd[1526]: time="2025-01-13T21:20:53.899751477Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:20:53.900669 containerd[1526]: time="2025-01-13T21:20:53.900321957Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 
21:20:53.900729 containerd[1526]: time="2025-01-13T21:20:53.900588037Z" level=info msg="Start subscribing containerd event" Jan 13 21:20:53.900729 containerd[1526]: time="2025-01-13T21:20:53.900707237Z" level=info msg="Start recovering state" Jan 13 21:20:53.900897 containerd[1526]: time="2025-01-13T21:20:53.900770357Z" level=info msg="Start event monitor" Jan 13 21:20:53.900897 containerd[1526]: time="2025-01-13T21:20:53.900781197Z" level=info msg="Start snapshots syncer" Jan 13 21:20:53.900897 containerd[1526]: time="2025-01-13T21:20:53.900788877Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:20:53.900897 containerd[1526]: time="2025-01-13T21:20:53.900795797Z" level=info msg="Start streaming server" Jan 13 21:20:53.901280 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:20:53.902224 containerd[1526]: time="2025-01-13T21:20:53.902181477Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:20:53.902287 containerd[1526]: time="2025-01-13T21:20:53.902242917Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:20:53.907139 containerd[1526]: time="2025-01-13T21:20:53.905671277Z" level=info msg="containerd successfully booted in 0.056504s" Jan 13 21:20:53.908949 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:20:53.927653 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:20:53.927899 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:20:53.930846 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:20:53.933479 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:20:53.998124 tar[1514]: linux-arm64/LICENSE Jan 13 21:20:53.998124 tar[1514]: linux-arm64/README.md Jan 13 21:20:54.011951 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:20:54.176758 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:20:54.195533 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:20:54.209884 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:20:54.215232 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:20:54.215458 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:20:54.217987 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:20:54.230651 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:20:54.242970 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:20:54.244813 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 21:20:54.245849 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:20:54.392362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:20:54.393709 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:20:54.394663 systemd[1]: Startup finished in 4.849s (kernel) + 2.996s (userspace) = 7.846s. 
Jan 13 21:20:54.396008 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:20:54.855197 kubelet[1620]: E0113 21:20:54.855108 1620 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:20:54.857658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:20:54.857848 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:20:59.732204 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:20:59.740875 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:44284.service - OpenSSH per-connection server daemon (10.0.0.1:44284). Jan 13 21:20:59.795003 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 44284 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:20:59.796947 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:59.808966 systemd-logind[1504]: New session 1 of user core. Jan 13 21:20:59.809842 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:20:59.816859 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:20:59.826196 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:20:59.828681 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:20:59.834975 (systemd)[1640]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:20:59.930746 systemd[1640]: Queued start job for default target default.target. Jan 13 21:20:59.931104 systemd[1640]: Created slice app.slice - User Application Slice. Jan 13 21:20:59.931122 systemd[1640]: Reached target paths.target - Paths. Jan 13 21:20:59.931134 systemd[1640]: Reached target timers.target - Timers. Jan 13 21:20:59.940795 systemd[1640]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:20:59.946931 systemd[1640]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:20:59.947091 systemd[1640]: Reached target sockets.target - Sockets. Jan 13 21:20:59.947107 systemd[1640]: Reached target basic.target - Basic System. Jan 13 21:20:59.947145 systemd[1640]: Reached target default.target - Main User Target. Jan 13 21:20:59.947170 systemd[1640]: Startup finished in 106ms. Jan 13 21:20:59.947384 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:20:59.949186 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:21:00.023942 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:44300.service - OpenSSH per-connection server daemon (10.0.0.1:44300). Jan 13 21:21:00.061783 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 44300 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:21:00.063232 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:00.067947 systemd-logind[1504]: New session 2 of user core. Jan 13 21:21:00.074936 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 13 21:21:00.129323 sshd[1652]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:00.141934 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:44312.service - OpenSSH per-connection server daemon (10.0.0.1:44312). Jan 13 21:21:00.142355 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:44300.service: Deactivated successfully. Jan 13 21:21:00.147678 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:21:00.148088 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:21:00.149360 systemd-logind[1504]: Removed session 2. Jan 13 21:21:00.171090 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 44312 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:21:00.172429 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:00.177789 systemd-logind[1504]: New session 3 of user core. Jan 13 21:21:00.191964 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:21:00.240617 sshd[1657]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:00.251979 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:44326.service - OpenSSH per-connection server daemon (10.0.0.1:44326). Jan 13 21:21:00.252367 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:44312.service: Deactivated successfully. Jan 13 21:21:00.254310 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:21:00.254772 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:21:00.256145 systemd-logind[1504]: Removed session 3. Jan 13 21:21:00.280330 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 44326 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:21:00.281629 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:00.285332 systemd-logind[1504]: New session 4 of user core. Jan 13 21:21:00.296419 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:21:00.353447 sshd[1665]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:00.356173 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:44326.service: Deactivated successfully. Jan 13 21:21:00.359834 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:21:00.368967 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:44328.service - OpenSSH per-connection server daemon (10.0.0.1:44328). Jan 13 21:21:00.369335 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:21:00.371051 systemd-logind[1504]: Removed session 4. Jan 13 21:21:00.398428 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 44328 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:21:00.399842 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:00.403704 systemd-logind[1504]: New session 5 of user core. Jan 13 21:21:00.413915 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:21:00.476799 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:21:00.477071 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:00.495486 sudo[1680]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:00.499163 sshd[1676]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:00.510888 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:44334.service - OpenSSH per-connection server daemon (10.0.0.1:44334). 
Jan 13 21:21:00.511256 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:44328.service: Deactivated successfully. Jan 13 21:21:00.512994 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:21:00.513565 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:21:00.514994 systemd-logind[1504]: Removed session 5. Jan 13 21:21:00.538940 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 44334 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:21:00.540452 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:00.544408 systemd-logind[1504]: New session 6 of user core. Jan 13 21:21:00.555022 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:21:00.607228 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:21:00.607852 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:00.611030 sudo[1690]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:00.615556 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:21:00.616119 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:00.631927 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:21:00.633496 auditctl[1693]: No rules Jan 13 21:21:00.634301 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:21:00.634545 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:21:00.636313 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:21:00.664370 augenrules[1712]: No rules Jan 13 21:21:00.665711 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:21:00.666793 sudo[1689]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:00.668530 sshd[1682]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:00.674897 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:44336.service - OpenSSH per-connection server daemon (10.0.0.1:44336). Jan 13 21:21:00.677306 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:44334.service: Deactivated successfully. Jan 13 21:21:00.680171 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:21:00.680613 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:21:00.682260 systemd-logind[1504]: Removed session 6. Jan 13 21:21:00.703290 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 44336 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:21:00.704697 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:00.709704 systemd-logind[1504]: New session 7 of user core. Jan 13 21:21:00.717962 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:21:00.771002 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:21:00.771273 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:01.098885 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 13 21:21:01.099037 (dockerd)[1744]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:21:01.376132 dockerd[1744]: time="2025-01-13T21:21:01.375310237Z" level=info msg="Starting up" Jan 13 21:21:01.618165 dockerd[1744]: time="2025-01-13T21:21:01.618093557Z" level=info msg="Loading containers: start." Jan 13 21:21:01.715665 kernel: Initializing XFRM netlink socket Jan 13 21:21:01.780968 systemd-networkd[1226]: docker0: Link UP Jan 13 21:21:01.799136 dockerd[1744]: time="2025-01-13T21:21:01.798969717Z" level=info msg="Loading containers: done." Jan 13 21:21:01.816530 dockerd[1744]: time="2025-01-13T21:21:01.816114077Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:21:01.816530 dockerd[1744]: time="2025-01-13T21:21:01.816227717Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:21:01.816530 dockerd[1744]: time="2025-01-13T21:21:01.816331597Z" level=info msg="Daemon has completed initialization" Jan 13 21:21:01.847152 dockerd[1744]: time="2025-01-13T21:21:01.847007917Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:21:01.847819 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:21:02.510808 containerd[1526]: time="2025-01-13T21:21:02.510705077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:21:03.270090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931779094.mount: Deactivated successfully. Jan 13 21:21:05.108179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:21:05.119834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:05.208693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:05.213621 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:21:05.261423 kubelet[1969]: E0113 21:21:05.261274 1969 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:21:05.265195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:21:05.265385 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:21:05.574382 containerd[1526]: time="2025-01-13T21:21:05.574275357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:05.577178 containerd[1526]: time="2025-01-13T21:21:05.577144557Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Jan 13 21:21:05.578096 containerd[1526]: time="2025-01-13T21:21:05.578058197Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:05.581068 containerd[1526]: time="2025-01-13T21:21:05.581033837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:05.582535 containerd[1526]: time="2025-01-13T21:21:05.582289917Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 3.07154276s" Jan 13 21:21:05.582535 containerd[1526]: time="2025-01-13T21:21:05.582334717Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 21:21:05.599952 containerd[1526]: time="2025-01-13T21:21:05.599924557Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:21:08.037495 containerd[1526]: time="2025-01-13T21:21:08.037442797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:08.039149 containerd[1526]: time="2025-01-13T21:21:08.039119597Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Jan 13 21:21:08.040098 containerd[1526]: time="2025-01-13T21:21:08.040046917Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:08.042753 containerd[1526]: time="2025-01-13T21:21:08.042708877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:08.043831 containerd[1526]: time="2025-01-13T21:21:08.043790677Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.44368004s" Jan 13 21:21:08.043881 containerd[1526]: time="2025-01-13T21:21:08.043831877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 
21:21:08.063281 containerd[1526]: time="2025-01-13T21:21:08.063249757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:21:09.648805 containerd[1526]: time="2025-01-13T21:21:09.648745957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:09.649293 containerd[1526]: time="2025-01-13T21:21:09.649256917Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Jan 13 21:21:09.650205 containerd[1526]: time="2025-01-13T21:21:09.650174277Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:09.653013 containerd[1526]: time="2025-01-13T21:21:09.652965477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:09.654320 containerd[1526]: time="2025-01-13T21:21:09.654279877Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.59099196s" Jan 13 21:21:09.654320 containerd[1526]: time="2025-01-13T21:21:09.654314517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 21:21:09.671920 containerd[1526]: time="2025-01-13T21:21:09.671888677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:21:10.662159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119149606.mount: Deactivated successfully. 
Jan 13 21:21:10.968858 containerd[1526]: time="2025-01-13T21:21:10.968387357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:10.969589 containerd[1526]: time="2025-01-13T21:21:10.969157157Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Jan 13 21:21:10.970267 containerd[1526]: time="2025-01-13T21:21:10.970213917Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:10.973278 containerd[1526]: time="2025-01-13T21:21:10.973229317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:10.973932 containerd[1526]: time="2025-01-13T21:21:10.973897917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.3019656s" Jan 13 21:21:10.973997 containerd[1526]: time="2025-01-13T21:21:10.973935677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 21:21:10.991810 containerd[1526]: time="2025-01-13T21:21:10.991712997Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:21:11.590496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371484617.mount: Deactivated successfully. 
Jan 13 21:21:12.148146 containerd[1526]: time="2025-01-13T21:21:12.148097677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:12.149670 containerd[1526]: time="2025-01-13T21:21:12.149618837Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 21:21:12.150876 containerd[1526]: time="2025-01-13T21:21:12.150813837Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:12.154462 containerd[1526]: time="2025-01-13T21:21:12.154412397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:12.155231 containerd[1526]: time="2025-01-13T21:21:12.155199317Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.16344896s" Jan 13 21:21:12.155294 containerd[1526]: time="2025-01-13T21:21:12.155231837Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 21:21:12.173249 containerd[1526]: time="2025-01-13T21:21:12.173187757Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:21:12.653854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2422035814.mount: Deactivated successfully. 
Jan 13 21:21:12.658888 containerd[1526]: time="2025-01-13T21:21:12.658718237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:12.659395 containerd[1526]: time="2025-01-13T21:21:12.659157757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 13 21:21:12.660112 containerd[1526]: time="2025-01-13T21:21:12.660044677Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:12.662284 containerd[1526]: time="2025-01-13T21:21:12.662236077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:12.663200 containerd[1526]: time="2025-01-13T21:21:12.663118197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 489.89436ms" Jan 13 21:21:12.663200 containerd[1526]: time="2025-01-13T21:21:12.663149837Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 21:21:12.681890 containerd[1526]: time="2025-01-13T21:21:12.681852557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:21:13.316381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40579558.mount: Deactivated successfully. Jan 13 21:21:15.515622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:21:15.523960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:15.612906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:15.617282 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:21:15.658900 kubelet[2137]: E0113 21:21:15.658748 2137 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:21:15.661823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:21:15.661964 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:21:15.731509 containerd[1526]: time="2025-01-13T21:21:15.731457357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:15.732711 containerd[1526]: time="2025-01-13T21:21:15.732618397Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 13 21:21:15.734500 containerd[1526]: time="2025-01-13T21:21:15.734461597Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:15.738782 containerd[1526]: time="2025-01-13T21:21:15.738114357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:15.739523 containerd[1526]: time="2025-01-13T21:21:15.739480317Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.05758348s" Jan 13 21:21:15.739575 containerd[1526]: time="2025-01-13T21:21:15.739524557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 21:21:21.410417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:21.420883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:21.435588 systemd[1]: Reloading requested from client PID 2233 ('systemctl') (unit session-7.scope)... Jan 13 21:21:21.435607 systemd[1]: Reloading... Jan 13 21:21:21.507663 zram_generator::config[2272]: No configuration found. Jan 13 21:21:21.659150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:21.708274 systemd[1]: Reloading finished in 272 ms. Jan 13 21:21:21.742775 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:21.746314 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:21:21.746574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:21.759955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:21.841733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:21.845728 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:21:21.884057 kubelet[2332]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:21:21.884057 kubelet[2332]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 21:21:21.884057 kubelet[2332]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:21:21.884413 kubelet[2332]: I0113 21:21:21.884096 2332 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:21:22.611313 kubelet[2332]: I0113 21:21:22.611255 2332 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:21:22.611313 kubelet[2332]: I0113 21:21:22.611290 2332 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:21:22.611522 kubelet[2332]: I0113 21:21:22.611495 2332 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:21:22.650183 kubelet[2332]: I0113 21:21:22.650152 2332 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:21:22.651429 kubelet[2332]: E0113 21:21:22.651354 2332 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:22.658971 kubelet[2332]: I0113 21:21:22.658936 2332 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:21:22.659288 kubelet[2332]: I0113 21:21:22.659260 2332 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:21:22.659462 kubelet[2332]: I0113 21:21:22.659441 2332 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:21:22.659462 kubelet[2332]: I0113 21:21:22.659460 2332 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:21:22.659565 kubelet[2332]: I0113 21:21:22.659468 2332 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 
21:21:22.660552 kubelet[2332]: I0113 21:21:22.660511 2332 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:21:22.662653 kubelet[2332]: I0113 21:21:22.662616 2332 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:21:22.662653 kubelet[2332]: I0113 21:21:22.662654 2332 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:21:22.662864 kubelet[2332]: I0113 21:21:22.662681 2332 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:21:22.662864 kubelet[2332]: I0113 21:21:22.662692 2332 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:21:22.663114 kubelet[2332]: W0113 21:21:22.663043 2332 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:22.663114 kubelet[2332]: E0113 21:21:22.663103 2332 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:22.664299 kubelet[2332]: W0113 21:21:22.664245 2332 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:22.664299 kubelet[2332]: E0113 21:21:22.664287 2332 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:22.666427 kubelet[2332]: I0113 21:21:22.666384 2332 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:21:22.666875 kubelet[2332]: I0113 21:21:22.666861 2332 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:21:22.667708 kubelet[2332]: W0113 21:21:22.667681 2332 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:21:22.668527 kubelet[2332]: I0113 21:21:22.668480 2332 server.go:1256] "Started kubelet" Jan 13 21:21:22.668566 kubelet[2332]: I0113 21:21:22.668538 2332 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:21:22.669243 kubelet[2332]: I0113 21:21:22.668997 2332 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:21:22.669296 kubelet[2332]: I0113 21:21:22.669253 2332 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:21:22.669776 kubelet[2332]: I0113 21:21:22.669397 2332 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:21:22.669895 kubelet[2332]: I0113 21:21:22.669867 2332 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:21:22.672924 kubelet[2332]: I0113 21:21:22.670531 2332 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:21:22.672924 kubelet[2332]: I0113 21:21:22.670615 2332 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:21:22.672924 kubelet[2332]: I0113 21:21:22.670676 2332 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:21:22.672924 kubelet[2332]: W0113 21:21:22.670913 2332 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:22.672924 kubelet[2332]: E0113 21:21:22.670947 2332 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:22.672924 kubelet[2332]: E0113 21:21:22.671162 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="200ms" Jan 13 21:21:22.672924 kubelet[2332]: I0113 21:21:22.671656 2332 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:21:22.672924 kubelet[2332]: I0113 21:21:22.671751 2332 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:21:22.677192 kubelet[2332]: I0113 21:21:22.677159 2332 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:21:22.678095 kubelet[2332]: E0113 21:21:22.678070 2332 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d62342f7c5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:21:22.668452957 +0000 UTC m=+0.819404761,LastTimestamp:2025-01-13 21:21:22.668452957 +0000 UTC m=+0.819404761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:21:22.678272 kubelet[2332]: E0113 21:21:22.678171 2332 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:21:22.687843 kubelet[2332]: I0113 21:21:22.687803 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:21:22.688806 kubelet[2332]: I0113 21:21:22.688778 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:21:22.688851 kubelet[2332]: I0113 21:21:22.688808 2332 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:21:22.688851 kubelet[2332]: I0113 21:21:22.688827 2332 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:21:22.688900 kubelet[2332]: E0113 21:21:22.688875 2332 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:21:22.692332 kubelet[2332]: W0113 21:21:22.692224 2332 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:22.692332 kubelet[2332]: E0113 21:21:22.692264 2332 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:22.697994 kubelet[2332]: I0113 21:21:22.697976 2332 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:21:22.698349 kubelet[2332]: I0113 21:21:22.698109 2332 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:21:22.698349 kubelet[2332]: I0113 21:21:22.698128 2332 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:21:22.771854 kubelet[2332]: I0113 21:21:22.771830 2332 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:22.772272 kubelet[2332]: E0113 21:21:22.772253 2332 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jan 13 21:21:22.775757 kubelet[2332]: I0113 21:21:22.775715 2332 policy_none.go:49] "None policy: Start" Jan 13 21:21:22.776421 kubelet[2332]: I0113 21:21:22.776338 2332 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:21:22.776648 kubelet[2332]: I0113 21:21:22.776557 2332 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:21:22.781088 kubelet[2332]: I0113 21:21:22.781065 2332 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:21:22.781717 kubelet[2332]: I0113 21:21:22.781394 2332 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:21:22.782821 kubelet[2332]: E0113 21:21:22.782755 2332 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:21:22.790001 kubelet[2332]: I0113 21:21:22.789981 2332 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" 
podName="kube-controller-manager-localhost" Jan 13 21:21:22.790936 kubelet[2332]: I0113 21:21:22.790912 2332 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:21:22.792672 kubelet[2332]: I0113 21:21:22.791696 2332 topology_manager.go:215] "Topology Admit Handler" podUID="55b8026a8260dd831b63b4f46a2330e8" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:21:22.872291 kubelet[2332]: E0113 21:21:22.872207 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="400ms" Jan 13 21:21:22.945819 kubelet[2332]: E0113 21:21:22.945779 2332 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d62342f7c5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:21:22.668452957 +0000 UTC m=+0.819404761,LastTimestamp:2025-01-13 21:21:22.668452957 +0000 UTC m=+0.819404761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:21:22.972035 kubelet[2332]: I0113 21:21:22.971986 2332 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55b8026a8260dd831b63b4f46a2330e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"55b8026a8260dd831b63b4f46a2330e8\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:22.972035 kubelet[2332]: I0113 21:21:22.972031 2332 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55b8026a8260dd831b63b4f46a2330e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"55b8026a8260dd831b63b4f46a2330e8\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:22.972379 kubelet[2332]: I0113 21:21:22.972052 2332 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:22.972379 kubelet[2332]: I0113 21:21:22.972075 2332 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:22.972379 kubelet[2332]: I0113 21:21:22.972131 2332 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:22.972379 kubelet[2332]: I0113 21:21:22.972214 2332 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55b8026a8260dd831b63b4f46a2330e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"55b8026a8260dd831b63b4f46a2330e8\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:22.972379 kubelet[2332]: I0113 21:21:22.972277 2332 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:22.972697 kubelet[2332]: I0113 21:21:22.972319 2332 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:22.972697 kubelet[2332]: I0113 21:21:22.972355 2332 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:21:22.973442 kubelet[2332]: I0113 21:21:22.973375 2332 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:22.973859 kubelet[2332]: E0113 21:21:22.973837 2332 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jan 13 21:21:23.098453 kubelet[2332]: E0113 21:21:23.098418 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:23.098938 kubelet[2332]: E0113 21:21:23.098423 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:23.099137 containerd[1526]: time="2025-01-13T21:21:23.099099037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 21:21:23.099458 containerd[1526]: time="2025-01-13T21:21:23.099106717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:55b8026a8260dd831b63b4f46a2330e8,Namespace:kube-system,Attempt:0,}" Jan 13 21:21:23.099950 kubelet[2332]: E0113 21:21:23.099796 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:23.100305 containerd[1526]: time="2025-01-13T21:21:23.100272437Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 21:21:23.273659 kubelet[2332]: E0113 21:21:23.273070 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="800ms" Jan 13 21:21:23.375543 kubelet[2332]: I0113 21:21:23.375514 2332 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:23.376121 kubelet[2332]: E0113 21:21:23.376075 2332 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jan 13 21:21:23.577841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795955746.mount: Deactivated successfully. Jan 13 21:21:23.582125 containerd[1526]: time="2025-01-13T21:21:23.582063557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:23.584070 containerd[1526]: time="2025-01-13T21:21:23.584023797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 21:21:23.584803 containerd[1526]: time="2025-01-13T21:21:23.584754717Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:23.585601 containerd[1526]: time="2025-01-13T21:21:23.585573677Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:23.586260 containerd[1526]: time="2025-01-13T21:21:23.586235677Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:21:23.586775 containerd[1526]: time="2025-01-13T21:21:23.586748117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:21:23.587405 containerd[1526]: time="2025-01-13T21:21:23.587372997Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:23.592008 containerd[1526]: time="2025-01-13T21:21:23.591974837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:23.592936 containerd[1526]: time="2025-01-13T21:21:23.592906837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.59864ms" Jan 13 21:21:23.593597 containerd[1526]: time="2025-01-13T21:21:23.593567997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.8412ms" Jan 13 21:21:23.596398 containerd[1526]: time="2025-01-13T21:21:23.596292397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.10584ms" Jan 13 21:21:23.661525 kubelet[2332]: W0113 21:21:23.661478 2332 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:23.661525 kubelet[2332]: E0113 21:21:23.661521 2332 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:23.715806 kubelet[2332]: W0113 21:21:23.715752 2332 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:23.715806 kubelet[2332]: E0113 21:21:23.715811 2332 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:23.731898 containerd[1526]: time="2025-01-13T21:21:23.731776157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:23.731898 containerd[1526]: time="2025-01-13T21:21:23.731848477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:23.731898 containerd[1526]: time="2025-01-13T21:21:23.731864277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:23.732056 containerd[1526]: time="2025-01-13T21:21:23.731985157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:23.732056 containerd[1526]: time="2025-01-13T21:21:23.732041637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:23.732096 containerd[1526]: time="2025-01-13T21:21:23.732068477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:23.732294 containerd[1526]: time="2025-01-13T21:21:23.732170557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:23.732376 containerd[1526]: time="2025-01-13T21:21:23.732310237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:23.732475 containerd[1526]: time="2025-01-13T21:21:23.732370437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:23.732475 containerd[1526]: time="2025-01-13T21:21:23.732389437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:23.732541 containerd[1526]: time="2025-01-13T21:21:23.732466597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:23.733123 containerd[1526]: time="2025-01-13T21:21:23.732998477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:23.777823 containerd[1526]: time="2025-01-13T21:21:23.777182237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"260374e31f1bdd159ab960e34f01d18e152fc4780bff3cf6d88b8f4709055750\"" Jan 13 21:21:23.778815 containerd[1526]: time="2025-01-13T21:21:23.778789717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:55b8026a8260dd831b63b4f46a2330e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d27ecc107012c9e90b7f261a144681d52efa2da7547d043daa1d7b0a97cbe07\"" Jan 13 21:21:23.778924 kubelet[2332]: E0113 21:21:23.778898 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:23.779604 containerd[1526]: time="2025-01-13T21:21:23.779564157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"b88970e4cdbac7c93ea575cb3da5b2267509a32f0370d1749cefd0f9a4ab1e37\"" Jan 13 21:21:23.779960 kubelet[2332]: E0113 21:21:23.779941 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:23.780911 kubelet[2332]: E0113 21:21:23.780893 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:23.781613 containerd[1526]: time="2025-01-13T21:21:23.781583917Z" level=info msg="CreateContainer within sandbox \"260374e31f1bdd159ab960e34f01d18e152fc4780bff3cf6d88b8f4709055750\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:21:23.782406 containerd[1526]: time="2025-01-13T21:21:23.782295557Z" level=info msg="CreateContainer within sandbox \"2d27ecc107012c9e90b7f261a144681d52efa2da7547d043daa1d7b0a97cbe07\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:21:23.782971 containerd[1526]: time="2025-01-13T21:21:23.782946477Z" level=info msg="CreateContainer within sandbox \"b88970e4cdbac7c93ea575cb3da5b2267509a32f0370d1749cefd0f9a4ab1e37\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:21:23.797365 containerd[1526]: time="2025-01-13T21:21:23.797320757Z" level=info msg="CreateContainer within sandbox \"260374e31f1bdd159ab960e34f01d18e152fc4780bff3cf6d88b8f4709055750\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6011c7321704cf30a5edefc6c757625172cc183bbac1a85ed4cf4eaca134609e\"" Jan 13 21:21:23.798095 containerd[1526]: time="2025-01-13T21:21:23.798038917Z" level=info msg="StartContainer for \"6011c7321704cf30a5edefc6c757625172cc183bbac1a85ed4cf4eaca134609e\"" Jan 13 21:21:23.800922 containerd[1526]: time="2025-01-13T21:21:23.800886637Z" level=info msg="CreateContainer within sandbox \"b88970e4cdbac7c93ea575cb3da5b2267509a32f0370d1749cefd0f9a4ab1e37\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fe86543a06d046cb96c54cf2c7f9c19df5ace497b7e009fa097fccb50a3517d6\"" Jan 13 21:21:23.801581 containerd[1526]: time="2025-01-13T21:21:23.801544917Z" level=info msg="StartContainer for \"fe86543a06d046cb96c54cf2c7f9c19df5ace497b7e009fa097fccb50a3517d6\"" Jan 13 21:21:23.802774 containerd[1526]: time="2025-01-13T21:21:23.802676197Z" level=info msg="CreateContainer within sandbox \"2d27ecc107012c9e90b7f261a144681d52efa2da7547d043daa1d7b0a97cbe07\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5dec655ad3c29aff30c4045b5cee231ae55a0d7ad3fdf2493b1aeae69c2f5e54\"" Jan 13 21:21:23.803405 containerd[1526]: time="2025-01-13T21:21:23.803379877Z" level=info msg="StartContainer for \"5dec655ad3c29aff30c4045b5cee231ae55a0d7ad3fdf2493b1aeae69c2f5e54\"" Jan 13 21:21:23.854642 containerd[1526]: time="2025-01-13T21:21:23.854523117Z" level=info msg="StartContainer for \"fe86543a06d046cb96c54cf2c7f9c19df5ace497b7e009fa097fccb50a3517d6\" returns successfully" Jan 13 21:21:23.871205 containerd[1526]: time="2025-01-13T21:21:23.871111597Z" level=info msg="StartContainer for \"6011c7321704cf30a5edefc6c757625172cc183bbac1a85ed4cf4eaca134609e\" returns successfully" Jan 13 21:21:23.871351 containerd[1526]: time="2025-01-13T21:21:23.871213037Z" level=info msg="StartContainer for \"5dec655ad3c29aff30c4045b5cee231ae55a0d7ad3fdf2493b1aeae69c2f5e54\" returns successfully" Jan 13 21:21:23.916678 kubelet[2332]: W0113 21:21:23.913618 2332 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:23.916678 kubelet[2332]: E0113 21:21:23.913704 2332 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:23.981732 kubelet[2332]: W0113 21:21:23.981591 2332 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:23.981732 kubelet[2332]: E0113 21:21:23.981687 2332 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jan 13 21:21:24.074429 kubelet[2332]: E0113 
21:21:24.074322 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="1.6s" Jan 13 21:21:24.179069 kubelet[2332]: I0113 21:21:24.178817 2332 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:24.698836 kubelet[2332]: E0113 21:21:24.698810 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:24.702104 kubelet[2332]: E0113 21:21:24.701399 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:24.702928 kubelet[2332]: E0113 21:21:24.702815 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:25.715727 kubelet[2332]: E0113 21:21:25.715663 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:25.764038 kubelet[2332]: E0113 21:21:25.764003 2332 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:21:25.831905 kubelet[2332]: I0113 21:21:25.831853 2332 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:21:25.847990 kubelet[2332]: E0113 21:21:25.847850 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:21:25.948651 kubelet[2332]: E0113 21:21:25.948598 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:21:26.049227 kubelet[2332]: E0113 21:21:26.049118 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:21:26.149812 kubelet[2332]: E0113 21:21:26.149769 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:21:26.673872 kubelet[2332]: I0113 21:21:26.673744 2332 apiserver.go:52] "Watching apiserver" Jan 13 21:21:26.771180 kubelet[2332]: I0113 21:21:26.771146 2332 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:21:28.495051 systemd[1]: Reloading requested from client PID 2612 ('systemctl') (unit session-7.scope)... Jan 13 21:21:28.495065 systemd[1]: Reloading... Jan 13 21:21:28.551680 zram_generator::config[2657]: No configuration found. Jan 13 21:21:28.697403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:28.752745 systemd[1]: Reloading finished in 257 ms. Jan 13 21:21:28.785142 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:28.799429 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:21:28.799766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:21:28.807937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:28.896102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:28.902023 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:21:28.944595 kubelet[2703]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:21:28.944595 kubelet[2703]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:21:28.944595 kubelet[2703]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:21:28.944595 kubelet[2703]: I0113 21:21:28.943768 2703 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:21:28.948924 kubelet[2703]: I0113 21:21:28.948897 2703 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:21:28.948924 kubelet[2703]: I0113 21:21:28.948922 2703 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:21:28.949095 kubelet[2703]: I0113 21:21:28.949080 2703 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:21:28.950844 kubelet[2703]: I0113 21:21:28.950807 2703 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:21:28.953007 kubelet[2703]: I0113 21:21:28.952936 2703 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:21:28.959527 kubelet[2703]: I0113 21:21:28.959488 2703 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:21:28.960276 kubelet[2703]: I0113 21:21:28.959955 2703 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:21:28.960276 kubelet[2703]: I0113 21:21:28.960107 2703 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:21:28.960276 kubelet[2703]: I0113 21:21:28.960125 2703 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:21:28.960276 kubelet[2703]: I0113 21:21:28.960133 2703 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:21:28.960276 kubelet[2703]: I0113 21:21:28.960162 2703 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:21:28.960276 kubelet[2703]: I0113 21:21:28.960273 2703 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:21:28.960508 kubelet[2703]: I0113 21:21:28.960289 2703 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:21:28.960508 kubelet[2703]: I0113 21:21:28.960319 2703 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:21:28.960508 kubelet[2703]: I0113 21:21:28.960337 2703 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:21:28.963512 kubelet[2703]: I0113 21:21:28.963362 2703 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:21:28.966022 kubelet[2703]: I0113 21:21:28.965693 2703 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:21:28.966221 kubelet[2703]: I0113 21:21:28.966202 2703 server.go:1256] "Started kubelet" Jan 13 21:21:28.967542 kubelet[2703]: I0113 21:21:28.966889 2703 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:21:28.967542 kubelet[2703]: I0113 21:21:28.966959 2703 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:21:28.968290 kubelet[2703]: I0113 21:21:28.968251 2703 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:21:28.981171 kubelet[2703]: 
I0113 21:21:28.981021 2703 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:21:28.983884 kubelet[2703]: I0113 21:21:28.983707 2703 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:21:28.986420 kubelet[2703]: I0113 21:21:28.986327 2703 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:21:28.987373 kubelet[2703]: I0113 21:21:28.987103 2703 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:21:28.987373 kubelet[2703]: I0113 21:21:28.987242 2703 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:21:28.988231 kubelet[2703]: E0113 21:21:28.988202 2703 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:21:28.990911 kubelet[2703]: I0113 21:21:28.990883 2703 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:21:28.991028 kubelet[2703]: I0113 21:21:28.991004 2703 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:21:28.993565 kubelet[2703]: I0113 21:21:28.993547 2703 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:21:28.996918 kubelet[2703]: I0113 21:21:28.996229 2703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:21:28.999874 kubelet[2703]: I0113 21:21:28.999839 2703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:21:28.999874 kubelet[2703]: I0113 21:21:28.999868 2703 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:21:28.999974 kubelet[2703]: I0113 21:21:28.999891 2703 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:21:28.999974 kubelet[2703]: E0113 21:21:28.999941 2703 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:21:29.054172 kubelet[2703]: I0113 21:21:29.053724 2703 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:21:29.054172 kubelet[2703]: I0113 21:21:29.053753 2703 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:21:29.054172 kubelet[2703]: I0113 21:21:29.053772 2703 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:21:29.054172 kubelet[2703]: I0113 21:21:29.053913 2703 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:21:29.054172 kubelet[2703]: I0113 21:21:29.053945 2703 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:21:29.054172 kubelet[2703]: I0113 21:21:29.053953 2703 policy_none.go:49] "None policy: Start" Jan 13 21:21:29.056871 kubelet[2703]: I0113 21:21:29.056838 2703 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:21:29.056938 kubelet[2703]: I0113 21:21:29.056895 2703 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:21:29.057328 kubelet[2703]: I0113 21:21:29.057286 2703 state_mem.go:75] "Updated machine memory state" Jan 13 21:21:29.058791 kubelet[2703]: I0113 21:21:29.058752 2703 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:21:29.059580 kubelet[2703]: I0113 21:21:29.059281 2703 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:21:29.090474 kubelet[2703]: I0113 
21:21:29.090446 2703 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:29.100363 kubelet[2703]: I0113 21:21:29.100297 2703 topology_manager.go:215] "Topology Admit Handler" podUID="55b8026a8260dd831b63b4f46a2330e8" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:21:29.100556 kubelet[2703]: I0113 21:21:29.100488 2703 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:21:29.100616 kubelet[2703]: I0113 21:21:29.100602 2703 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 21:21:29.101683 kubelet[2703]: I0113 21:21:29.100687 2703 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:21:29.101683 kubelet[2703]: I0113 21:21:29.100708 2703 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:21:29.188808 kubelet[2703]: I0113 21:21:29.188766 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55b8026a8260dd831b63b4f46a2330e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"55b8026a8260dd831b63b4f46a2330e8\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:29.189169 kubelet[2703]: I0113 21:21:29.188832 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:29.189169 kubelet[2703]: I0113 21:21:29.188857 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:29.189169 kubelet[2703]: I0113 21:21:29.188877 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55b8026a8260dd831b63b4f46a2330e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"55b8026a8260dd831b63b4f46a2330e8\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:29.189169 kubelet[2703]: I0113 21:21:29.188900 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55b8026a8260dd831b63b4f46a2330e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"55b8026a8260dd831b63b4f46a2330e8\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:29.189169 kubelet[2703]: I0113 21:21:29.188924 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:29.189326 kubelet[2703]: I0113 21:21:29.188981 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:29.189326 kubelet[2703]: I0113 21:21:29.189021 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:29.189326 kubelet[2703]: I0113 21:21:29.189049 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:21:29.411445 kubelet[2703]: E0113 21:21:29.410713 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:29.411445 kubelet[2703]: E0113 21:21:29.410791 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:29.411445 kubelet[2703]: E0113 21:21:29.410807 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:29.961035 kubelet[2703]: I0113 21:21:29.960995 2703 apiserver.go:52] "Watching apiserver" Jan 13 21:21:29.987727 kubelet[2703]: I0113 21:21:29.987668 2703 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:21:30.012616 kubelet[2703]: E0113 21:21:30.012577 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:30.021222 kubelet[2703]: E0113 21:21:30.019248 2703 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:30.021222 kubelet[2703]: E0113 21:21:30.019955 2703 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 21:21:30.021222 kubelet[2703]: E0113 21:21:30.020144 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:30.021222 kubelet[2703]: E0113 21:21:30.020195 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:30.031795 kubelet[2703]: I0113 21:21:30.031659 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.03160855 podStartE2EDuration="1.03160855s" podCreationTimestamp="2025-01-13 21:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:21:30.03158003 +0000 UTC m=+1.126367145" watchObservedRunningTime="2025-01-13 21:21:30.03160855 +0000 UTC m=+1.126395665" Jan 13 21:21:30.045692 kubelet[2703]: I0113 21:21:30.045593 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.045560478 podStartE2EDuration="1.045560478s" podCreationTimestamp="2025-01-13 21:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:21:30.03812833 +0000 UTC m=+1.132915445" watchObservedRunningTime="2025-01-13 21:21:30.045560478 +0000 UTC m=+1.140347593" Jan 13 21:21:30.053077 kubelet[2703]: I0113 21:21:30.052929 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.052900785 podStartE2EDuration="1.052900785s" podCreationTimestamp="2025-01-13 21:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:21:30.045664519 +0000 UTC m=+1.140451634" watchObservedRunningTime="2025-01-13 21:21:30.052900785 +0000 UTC m=+1.147687900" Jan 13 21:21:31.013797 kubelet[2703]: E0113 21:21:31.013697 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:31.014135 kubelet[2703]: E0113 21:21:31.013768 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:31.014135 kubelet[2703]: E0113 21:21:31.013926 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:33.055003 sudo[1725]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:33.057535 sshd[1718]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:33.061617 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:44336.service: Deactivated successfully. Jan 13 21:21:33.063628 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:21:33.064349 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:21:33.065367 systemd-logind[1504]: Removed session 7. Jan 13 21:21:35.688571 kubelet[2703]: E0113 21:21:35.688528 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:36.020487 kubelet[2703]: E0113 21:21:36.020353 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:38.554318 update_engine[1510]: I20250113 21:21:38.554217 1510 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:21:38.576673 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2803) Jan 13 21:21:38.609443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2802) Jan 13 21:21:39.894410 kubelet[2703]: E0113 21:21:39.894376 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:40.731135 kubelet[2703]: E0113 21:21:40.731014 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:42.387034 kubelet[2703]: I0113 21:21:42.387006 2703 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:21:42.398825 containerd[1526]: time="2025-01-13T21:21:42.398684530Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:21:42.399131 kubelet[2703]: I0113 21:21:42.398959 2703 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:21:43.297912 kubelet[2703]: I0113 21:21:43.297734 2703 topology_manager.go:215] "Topology Admit Handler" podUID="2dd3ec3e-422a-4390-a1c7-d52f2e46856f" podNamespace="kube-system" podName="kube-proxy-btmzb" Jan 13 21:21:43.383489 kubelet[2703]: I0113 21:21:43.383447 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2dd3ec3e-422a-4390-a1c7-d52f2e46856f-kube-proxy\") pod \"kube-proxy-btmzb\" (UID: \"2dd3ec3e-422a-4390-a1c7-d52f2e46856f\") " pod="kube-system/kube-proxy-btmzb" Jan 13 21:21:43.383489 kubelet[2703]: I0113 21:21:43.383495 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dd3ec3e-422a-4390-a1c7-d52f2e46856f-xtables-lock\") pod \"kube-proxy-btmzb\" (UID: \"2dd3ec3e-422a-4390-a1c7-d52f2e46856f\") " pod="kube-system/kube-proxy-btmzb" Jan 13 21:21:43.383668 kubelet[2703]: I0113 21:21:43.383516 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dd3ec3e-422a-4390-a1c7-d52f2e46856f-lib-modules\") pod \"kube-proxy-btmzb\" (UID: \"2dd3ec3e-422a-4390-a1c7-d52f2e46856f\") " pod="kube-system/kube-proxy-btmzb" Jan 13 21:21:43.383668 kubelet[2703]: I0113 21:21:43.383539 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6qs\" (UniqueName: \"kubernetes.io/projected/2dd3ec3e-422a-4390-a1c7-d52f2e46856f-kube-api-access-4l6qs\") pod \"kube-proxy-btmzb\" (UID: \"2dd3ec3e-422a-4390-a1c7-d52f2e46856f\") " pod="kube-system/kube-proxy-btmzb" Jan 13 21:21:43.512666 kubelet[2703]: I0113 21:21:43.510990 2703 topology_manager.go:215] "Topology Admit Handler" podUID="abf3836f-e33f-4125-9c04-690a5c5e666d" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-md7jk" Jan 13 21:21:43.586876 kubelet[2703]: I0113 21:21:43.586687 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqgzg\" (UniqueName: \"kubernetes.io/projected/abf3836f-e33f-4125-9c04-690a5c5e666d-kube-api-access-nqgzg\") pod \"tigera-operator-c7ccbd65-md7jk\" (UID: 
\"abf3836f-e33f-4125-9c04-690a5c5e666d\") " pod="tigera-operator/tigera-operator-c7ccbd65-md7jk" Jan 13 21:21:43.586876 kubelet[2703]: I0113 21:21:43.586802 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/abf3836f-e33f-4125-9c04-690a5c5e666d-var-lib-calico\") pod \"tigera-operator-c7ccbd65-md7jk\" (UID: \"abf3836f-e33f-4125-9c04-690a5c5e666d\") " pod="tigera-operator/tigera-operator-c7ccbd65-md7jk" Jan 13 21:21:43.601375 kubelet[2703]: E0113 21:21:43.601083 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:43.603502 containerd[1526]: time="2025-01-13T21:21:43.603412207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btmzb,Uid:2dd3ec3e-422a-4390-a1c7-d52f2e46856f,Namespace:kube-system,Attempt:0,}" Jan 13 21:21:43.626586 containerd[1526]: time="2025-01-13T21:21:43.626350098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:43.626586 containerd[1526]: time="2025-01-13T21:21:43.626405138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:43.626586 containerd[1526]: time="2025-01-13T21:21:43.626422178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:43.626586 containerd[1526]: time="2025-01-13T21:21:43.626505498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:43.660136 containerd[1526]: time="2025-01-13T21:21:43.660096551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btmzb,Uid:2dd3ec3e-422a-4390-a1c7-d52f2e46856f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bb383908d4d82400ff95625e7beba1a67a443fd791eecbd8cddd194a5fd43b5\"" Jan 13 21:21:43.662443 kubelet[2703]: E0113 21:21:43.662406 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:43.666795 containerd[1526]: time="2025-01-13T21:21:43.666729737Z" level=info msg="CreateContainer within sandbox \"8bb383908d4d82400ff95625e7beba1a67a443fd791eecbd8cddd194a5fd43b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:21:43.694503 containerd[1526]: time="2025-01-13T21:21:43.694463007Z" level=info msg="CreateContainer within sandbox \"8bb383908d4d82400ff95625e7beba1a67a443fd791eecbd8cddd194a5fd43b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e34d6365325d46c96d9b7e0b2f3c8f768f72cc4bce52bd08fa803b13244641d\"" Jan 13 21:21:43.698427 containerd[1526]: time="2025-01-13T21:21:43.698074701Z" level=info msg="StartContainer for \"0e34d6365325d46c96d9b7e0b2f3c8f768f72cc4bce52bd08fa803b13244641d\"" Jan 13 21:21:43.748947 containerd[1526]: time="2025-01-13T21:21:43.748906582Z" level=info msg="StartContainer for \"0e34d6365325d46c96d9b7e0b2f3c8f768f72cc4bce52bd08fa803b13244641d\" returns successfully" Jan 13 21:21:43.816524 containerd[1526]: time="2025-01-13T21:21:43.816479569Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-c7ccbd65-md7jk,Uid:abf3836f-e33f-4125-9c04-690a5c5e666d,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:21:43.850711 containerd[1526]: time="2025-01-13T21:21:43.844849801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:43.850711 containerd[1526]: time="2025-01-13T21:21:43.844894281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:43.850711 containerd[1526]: time="2025-01-13T21:21:43.844905481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:43.850711 containerd[1526]: time="2025-01-13T21:21:43.844975641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:43.891654 containerd[1526]: time="2025-01-13T21:21:43.891597465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-md7jk,Uid:abf3836f-e33f-4125-9c04-690a5c5e666d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fabb63caa820b836fc6462106c76ed2f6267c3879035838ddcab100227f62889\"" Jan 13 21:21:43.893860 containerd[1526]: time="2025-01-13T21:21:43.893823034Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:21:44.036831 kubelet[2703]: E0113 21:21:44.036388 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:44.046987 kubelet[2703]: I0113 21:21:44.046858 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-btmzb" podStartSLOduration=1.046822227 podStartE2EDuration="1.046822227s" podCreationTimestamp="2025-01-13 21:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:21:44.046705987 +0000 UTC m=+15.141493102" watchObservedRunningTime="2025-01-13 21:21:44.046822227 +0000 UTC m=+15.141609342" Jan 13 21:21:48.536219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284033507.mount: Deactivated successfully. 
Jan 13 21:21:48.841700 containerd[1526]: time="2025-01-13T21:21:48.841614021Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:48.842328 containerd[1526]: time="2025-01-13T21:21:48.841994382Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125984" Jan 13 21:21:48.842857 containerd[1526]: time="2025-01-13T21:21:48.842831264Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:48.844851 containerd[1526]: time="2025-01-13T21:21:48.844820150Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:48.846714 containerd[1526]: time="2025-01-13T21:21:48.846684755Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 4.952824961s" Jan 13 21:21:48.846767 containerd[1526]: time="2025-01-13T21:21:48.846724435Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 13 21:21:48.853027 containerd[1526]: time="2025-01-13T21:21:48.852995333Z" level=info msg="CreateContainer within sandbox \"fabb63caa820b836fc6462106c76ed2f6267c3879035838ddcab100227f62889\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:21:48.862008 containerd[1526]: time="2025-01-13T21:21:48.861970799Z" level=info msg="CreateContainer within sandbox \"fabb63caa820b836fc6462106c76ed2f6267c3879035838ddcab100227f62889\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b2cf0718a751f3662e4ae428b0f5a5804f0ff3c798854698721c457be2e14bef\"" Jan 13 21:21:48.863115 containerd[1526]: time="2025-01-13T21:21:48.863090762Z" level=info msg="StartContainer for \"b2cf0718a751f3662e4ae428b0f5a5804f0ff3c798854698721c457be2e14bef\"" Jan 13 21:21:48.912895 containerd[1526]: time="2025-01-13T21:21:48.912857945Z" level=info msg="StartContainer for \"b2cf0718a751f3662e4ae428b0f5a5804f0ff3c798854698721c457be2e14bef\" returns successfully" Jan 13 21:21:49.054389 kubelet[2703]: I0113 21:21:49.053709 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-md7jk" podStartSLOduration=1.096411563 podStartE2EDuration="6.053670698s" podCreationTimestamp="2025-01-13 21:21:43 +0000 UTC" firstStartedPulling="2025-01-13 21:21:43.892500189 +0000 UTC m=+14.987287304" lastFinishedPulling="2025-01-13 21:21:48.849759324 +0000 UTC m=+19.944546439" observedRunningTime="2025-01-13 21:21:49.053586098 +0000 UTC m=+20.148373173" watchObservedRunningTime="2025-01-13 21:21:49.053670698 +0000 UTC m=+20.148457813" Jan 13 21:21:52.591902 kubelet[2703]: I0113 21:21:52.591852 2703 topology_manager.go:215] "Topology Admit Handler" podUID="1c1ba40a-e374-46f1-880b-0d2b60a780bd" podNamespace="calico-system" podName="calico-typha-7f4d9d797-d4gdz" Jan 13 21:21:52.648687 kubelet[2703]: I0113 21:21:52.648624 2703 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c1ba40a-e374-46f1-880b-0d2b60a780bd-tigera-ca-bundle\") pod \"calico-typha-7f4d9d797-d4gdz\" (UID: \"1c1ba40a-e374-46f1-880b-0d2b60a780bd\") " pod="calico-system/calico-typha-7f4d9d797-d4gdz" Jan 13 21:21:52.648687 kubelet[2703]: I0113 21:21:52.648699 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c1ba40a-e374-46f1-880b-0d2b60a780bd-typha-certs\") pod \"calico-typha-7f4d9d797-d4gdz\" (UID: \"1c1ba40a-e374-46f1-880b-0d2b60a780bd\") " pod="calico-system/calico-typha-7f4d9d797-d4gdz" Jan 13 21:21:52.648898 kubelet[2703]: I0113 21:21:52.648726 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xddtp\" (UniqueName: \"kubernetes.io/projected/1c1ba40a-e374-46f1-880b-0d2b60a780bd-kube-api-access-xddtp\") pod \"calico-typha-7f4d9d797-d4gdz\" (UID: \"1c1ba40a-e374-46f1-880b-0d2b60a780bd\") " pod="calico-system/calico-typha-7f4d9d797-d4gdz" Jan 13 21:21:52.686691 kubelet[2703]: I0113 21:21:52.686652 2703 topology_manager.go:215] "Topology Admit Handler" podUID="7ce62281-b7d1-495e-ac1b-8b217fb61169" podNamespace="calico-system" podName="calico-node-6bthh" Jan 13 21:21:52.749288 kubelet[2703]: I0113 21:21:52.749236 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7ce62281-b7d1-495e-ac1b-8b217fb61169-policysync\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.749288 kubelet[2703]: I0113 21:21:52.749287 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7ce62281-b7d1-495e-ac1b-8b217fb61169-flexvol-driver-host\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.749456 kubelet[2703]: I0113 21:21:52.749325 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7ce62281-b7d1-495e-ac1b-8b217fb61169-node-certs\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.749456 kubelet[2703]: I0113 21:21:52.749347 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ce62281-b7d1-495e-ac1b-8b217fb61169-var-lib-calico\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.749456 kubelet[2703]: I0113 21:21:52.749366 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7ce62281-b7d1-495e-ac1b-8b217fb61169-cni-net-dir\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.749456 kubelet[2703]: I0113 21:21:52.749388 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7ce62281-b7d1-495e-ac1b-8b217fb61169-cni-bin-dir\") pod 
\"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.749456 kubelet[2703]: I0113 21:21:52.749409 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ce62281-b7d1-495e-ac1b-8b217fb61169-xtables-lock\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.750055 kubelet[2703]: I0113 21:21:52.749442 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ce62281-b7d1-495e-ac1b-8b217fb61169-tigera-ca-bundle\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.750055 kubelet[2703]: I0113 21:21:52.749464 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nksr\" (UniqueName: \"kubernetes.io/projected/7ce62281-b7d1-495e-ac1b-8b217fb61169-kube-api-access-8nksr\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.750055 kubelet[2703]: I0113 21:21:52.749496 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ce62281-b7d1-495e-ac1b-8b217fb61169-lib-modules\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.750055 kubelet[2703]: I0113 21:21:52.749519 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7ce62281-b7d1-495e-ac1b-8b217fb61169-var-run-calico\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.750055 kubelet[2703]: I0113 21:21:52.749545 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7ce62281-b7d1-495e-ac1b-8b217fb61169-cni-log-dir\") pod \"calico-node-6bthh\" (UID: \"7ce62281-b7d1-495e-ac1b-8b217fb61169\") " pod="calico-system/calico-node-6bthh" Jan 13 21:21:52.794654 kubelet[2703]: I0113 21:21:52.794551 2703 topology_manager.go:215] "Topology Admit Handler" podUID="b673efd0-dcd2-4e1c-9b65-6e14b085060d" podNamespace="calico-system" podName="csi-node-driver-44f8k" Jan 13 21:21:52.809183 kubelet[2703]: E0113 21:21:52.809116 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44f8k" podUID="b673efd0-dcd2-4e1c-9b65-6e14b085060d" Jan 13 21:21:52.850935 kubelet[2703]: I0113 21:21:52.850685 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b673efd0-dcd2-4e1c-9b65-6e14b085060d-varrun\") pod \"csi-node-driver-44f8k\" (UID: \"b673efd0-dcd2-4e1c-9b65-6e14b085060d\") " pod="calico-system/csi-node-driver-44f8k" Jan 13 21:21:52.850935 kubelet[2703]: I0113 21:21:52.850753 2703 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b673efd0-dcd2-4e1c-9b65-6e14b085060d-socket-dir\") pod \"csi-node-driver-44f8k\" (UID: \"b673efd0-dcd2-4e1c-9b65-6e14b085060d\") " pod="calico-system/csi-node-driver-44f8k" Jan 13 21:21:52.851081 kubelet[2703]: I0113 21:21:52.850939 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b673efd0-dcd2-4e1c-9b65-6e14b085060d-kubelet-dir\") pod \"csi-node-driver-44f8k\" (UID: \"b673efd0-dcd2-4e1c-9b65-6e14b085060d\") " pod="calico-system/csi-node-driver-44f8k" Jan 13 21:21:52.851081 kubelet[2703]: I0113 21:21:52.851011 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc5wg\" (UniqueName: \"kubernetes.io/projected/b673efd0-dcd2-4e1c-9b65-6e14b085060d-kube-api-access-pc5wg\") pod \"csi-node-driver-44f8k\" (UID: \"b673efd0-dcd2-4e1c-9b65-6e14b085060d\") " pod="calico-system/csi-node-driver-44f8k" Jan 13 21:21:52.853083 kubelet[2703]: E0113 21:21:52.853024 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.853083 kubelet[2703]: W0113 21:21:52.853057 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.853083 kubelet[2703]: E0113 21:21:52.853086 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.853505 kubelet[2703]: E0113 21:21:52.853460 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.853505 kubelet[2703]: W0113 21:21:52.853478 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.853684 kubelet[2703]: E0113 21:21:52.853613 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.853744 kubelet[2703]: E0113 21:21:52.853718 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.853744 kubelet[2703]: W0113 21:21:52.853730 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.853831 kubelet[2703]: E0113 21:21:52.853807 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:52.853932 kubelet[2703]: E0113 21:21:52.853916 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.853932 kubelet[2703]: W0113 21:21:52.853927 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.854067 kubelet[2703]: E0113 21:21:52.854006 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.854067 kubelet[2703]: I0113 21:21:52.854049 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b673efd0-dcd2-4e1c-9b65-6e14b085060d-registration-dir\") pod \"csi-node-driver-44f8k\" (UID: \"b673efd0-dcd2-4e1c-9b65-6e14b085060d\") " pod="calico-system/csi-node-driver-44f8k" Jan 13 21:21:52.854140 kubelet[2703]: E0113 21:21:52.854080 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.854140 kubelet[2703]: W0113 21:21:52.854088 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.855541 kubelet[2703]: E0113 21:21:52.855165 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.859649 kubelet[2703]: E0113 21:21:52.859153 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.859649 kubelet[2703]: W0113 21:21:52.859174 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.859649 kubelet[2703]: E0113 21:21:52.859198 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.859649 kubelet[2703]: E0113 21:21:52.859362 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.859649 kubelet[2703]: W0113 21:21:52.859370 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.859649 kubelet[2703]: E0113 21:21:52.859380 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:52.859649 kubelet[2703]: E0113 21:21:52.859504 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.859649 kubelet[2703]: W0113 21:21:52.859510 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.859649 kubelet[2703]: E0113 21:21:52.859520 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.862624 kubelet[2703]: E0113 21:21:52.860230 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.862624 kubelet[2703]: W0113 21:21:52.860245 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.862624 kubelet[2703]: E0113 21:21:52.860267 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.866004 kubelet[2703]: E0113 21:21:52.865978 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.866004 kubelet[2703]: W0113 21:21:52.865997 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.866113 kubelet[2703]: E0113 21:21:52.866103 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.913996 kubelet[2703]: E0113 21:21:52.913950 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:52.914622 containerd[1526]: time="2025-01-13T21:21:52.914559127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f4d9d797-d4gdz,Uid:1c1ba40a-e374-46f1-880b-0d2b60a780bd,Namespace:calico-system,Attempt:0,}" Jan 13 21:21:52.938124 containerd[1526]: time="2025-01-13T21:21:52.937947299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:52.938915 containerd[1526]: time="2025-01-13T21:21:52.938136299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:52.938915 containerd[1526]: time="2025-01-13T21:21:52.938732900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:52.938915 containerd[1526]: time="2025-01-13T21:21:52.938837221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:52.957593 kubelet[2703]: E0113 21:21:52.957270 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.957593 kubelet[2703]: W0113 21:21:52.957294 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.957593 kubelet[2703]: E0113 21:21:52.957317 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.958049 kubelet[2703]: E0113 21:21:52.957881 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.958049 kubelet[2703]: W0113 21:21:52.957899 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.958049 kubelet[2703]: E0113 21:21:52.957923 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.959992 kubelet[2703]: E0113 21:21:52.959973 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.962794 kubelet[2703]: W0113 21:21:52.961041 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.962794 kubelet[2703]: E0113 21:21:52.961167 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.967711 kubelet[2703]: E0113 21:21:52.963723 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.967711 kubelet[2703]: W0113 21:21:52.963748 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.967711 kubelet[2703]: E0113 21:21:52.963770 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.967711 kubelet[2703]: E0113 21:21:52.964099 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.967711 kubelet[2703]: W0113 21:21:52.964112 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.967711 kubelet[2703]: E0113 21:21:52.964178 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:52.967711 kubelet[2703]: E0113 21:21:52.964503 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.967711 kubelet[2703]: W0113 21:21:52.964513 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.967711 kubelet[2703]: E0113 21:21:52.964573 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.967711 kubelet[2703]: E0113 21:21:52.965085 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.967923 kubelet[2703]: W0113 21:21:52.965096 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.967923 kubelet[2703]: E0113 21:21:52.965429 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.967923 kubelet[2703]: E0113 21:21:52.965681 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.967923 kubelet[2703]: W0113 21:21:52.965694 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.967923 kubelet[2703]: E0113 21:21:52.965872 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.967923 kubelet[2703]: W0113 21:21:52.965881 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.967923 kubelet[2703]: E0113 21:21:52.966017 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.967923 kubelet[2703]: W0113 21:21:52.966025 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.967923 kubelet[2703]: E0113 21:21:52.966170 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.967923 kubelet[2703]: W0113 21:21:52.966178 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.968129 kubelet[2703]: E0113 21:21:52.966313 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.968129 kubelet[2703]: W0113 21:21:52.966322 2703 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.968129 kubelet[2703]: E0113 21:21:52.966335 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.968129 kubelet[2703]: E0113 21:21:52.966476 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.968129 kubelet[2703]: W0113 21:21:52.966483 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.968129 kubelet[2703]: E0113 21:21:52.966496 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.968129 kubelet[2703]: E0113 21:21:52.966699 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.968129 kubelet[2703]: W0113 21:21:52.966709 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.968129 kubelet[2703]: E0113 21:21:52.966720 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.968129 kubelet[2703]: E0113 21:21:52.966748 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.968321 kubelet[2703]: E0113 21:21:52.966935 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.968321 kubelet[2703]: W0113 21:21:52.966944 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.968321 kubelet[2703]: E0113 21:21:52.966956 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.968321 kubelet[2703]: E0113 21:21:52.967095 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.968321 kubelet[2703]: W0113 21:21:52.967103 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.968321 kubelet[2703]: E0113 21:21:52.967113 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:52.968702 kubelet[2703]: E0113 21:21:52.968674 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.968702 kubelet[2703]: W0113 21:21:52.968690 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.968702 kubelet[2703]: E0113 21:21:52.968701 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.968848 kubelet[2703]: E0113 21:21:52.968809 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.969324 kubelet[2703]: E0113 21:21:52.969250 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.969324 kubelet[2703]: E0113 21:21:52.969279 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.969558 kubelet[2703]: E0113 21:21:52.969427 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.969558 kubelet[2703]: W0113 21:21:52.969465 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.969558 kubelet[2703]: E0113 21:21:52.969498 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.969946 kubelet[2703]: E0113 21:21:52.969839 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.969946 kubelet[2703]: W0113 21:21:52.969855 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.969946 kubelet[2703]: E0113 21:21:52.969874 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:52.970242 kubelet[2703]: E0113 21:21:52.970127 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.970242 kubelet[2703]: W0113 21:21:52.970141 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.970395 kubelet[2703]: E0113 21:21:52.970381 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.970482 kubelet[2703]: W0113 21:21:52.970472 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.970616 kubelet[2703]: E0113 21:21:52.970516 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.970723 kubelet[2703]: E0113 21:21:52.970436 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.971082 kubelet[2703]: E0113 21:21:52.970988 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.971082 kubelet[2703]: W0113 21:21:52.971001 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.971082 kubelet[2703]: E0113 21:21:52.971066 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.971272 kubelet[2703]: E0113 21:21:52.971249 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.971272 kubelet[2703]: W0113 21:21:52.971261 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.971491 kubelet[2703]: E0113 21:21:52.971469 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.971662 kubelet[2703]: E0113 21:21:52.971583 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.971780 kubelet[2703]: W0113 21:21:52.971721 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.971780 kubelet[2703]: E0113 21:21:52.971738 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:52.972328 kubelet[2703]: E0113 21:21:52.972273 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.972328 kubelet[2703]: W0113 21:21:52.972292 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.972328 kubelet[2703]: E0113 21:21:52.972305 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.984276 kubelet[2703]: E0113 21:21:52.984239 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:52.984276 kubelet[2703]: W0113 21:21:52.984259 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:52.984276 kubelet[2703]: E0113 21:21:52.984278 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:52.987979 containerd[1526]: time="2025-01-13T21:21:52.987942129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f4d9d797-d4gdz,Uid:1c1ba40a-e374-46f1-880b-0d2b60a780bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"4db44c56bc666ba21b0f13702a40ef19a9b5d8a474ee6576e405714371a083cb\"" Jan 13 21:21:52.990517 kubelet[2703]: E0113 21:21:52.990498 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:52.992593 kubelet[2703]: E0113 21:21:52.992400 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:52.992670 containerd[1526]: time="2025-01-13T21:21:52.992504779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:21:52.993103 containerd[1526]: time="2025-01-13T21:21:52.992840940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6bthh,Uid:7ce62281-b7d1-495e-ac1b-8b217fb61169,Namespace:calico-system,Attempt:0,}" Jan 13 21:21:53.066989 containerd[1526]: time="2025-01-13T21:21:53.066911255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:53.067134 containerd[1526]: time="2025-01-13T21:21:53.066962775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:53.067134 containerd[1526]: time="2025-01-13T21:21:53.066980895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:53.067134 containerd[1526]: time="2025-01-13T21:21:53.067078695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:53.097167 containerd[1526]: time="2025-01-13T21:21:53.097128197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6bthh,Uid:7ce62281-b7d1-495e-ac1b-8b217fb61169,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e93198c41393c01fd0789184d035080f08b79c44ad28bf99bcf42db087df2f0\"" Jan 13 21:21:53.097988 kubelet[2703]: E0113 21:21:53.097964 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:54.214857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320428219.mount: Deactivated successfully. Jan 13 21:21:54.695690 containerd[1526]: time="2025-01-13T21:21:54.695590658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:54.696354 containerd[1526]: time="2025-01-13T21:21:54.696318540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 13 21:21:54.696966 containerd[1526]: time="2025-01-13T21:21:54.696945061Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:54.698915 containerd[1526]: time="2025-01-13T21:21:54.698879745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:54.699797 containerd[1526]: time="2025-01-13T21:21:54.699458146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.706922887s" Jan 13 21:21:54.699797 containerd[1526]: time="2025-01-13T21:21:54.699484786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 13 21:21:54.701224 containerd[1526]: time="2025-01-13T21:21:54.700853708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:21:54.707836 containerd[1526]: time="2025-01-13T21:21:54.706596800Z" level=info msg="CreateContainer within sandbox \"4db44c56bc666ba21b0f13702a40ef19a9b5d8a474ee6576e405714371a083cb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:21:54.719593 containerd[1526]: time="2025-01-13T21:21:54.719558025Z" level=info msg="CreateContainer within sandbox \"4db44c56bc666ba21b0f13702a40ef19a9b5d8a474ee6576e405714371a083cb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1b830a70c3ab7c08fac9a3128336ae190b56a442eecc5acdcf5306f2cbd990c5\"" Jan 13 21:21:54.720160 containerd[1526]: time="2025-01-13T21:21:54.720122146Z" level=info msg="StartContainer for \"1b830a70c3ab7c08fac9a3128336ae190b56a442eecc5acdcf5306f2cbd990c5\"" Jan 13 21:21:54.776896 containerd[1526]: time="2025-01-13T21:21:54.776843656Z" level=info msg="StartContainer for \"1b830a70c3ab7c08fac9a3128336ae190b56a442eecc5acdcf5306f2cbd990c5\" returns successfully" Jan 13 21:21:55.002628 
kubelet[2703]: E0113 21:21:55.002126 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44f8k" podUID="b673efd0-dcd2-4e1c-9b65-6e14b085060d" Jan 13 21:21:55.061795 kubelet[2703]: E0113 21:21:55.061480 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:55.072558 kubelet[2703]: I0113 21:21:55.072286 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7f4d9d797-d4gdz" podStartSLOduration=1.364232373 podStartE2EDuration="3.072231381s" podCreationTimestamp="2025-01-13 21:21:52 +0000 UTC" firstStartedPulling="2025-01-13 21:21:52.992168099 +0000 UTC m=+24.086955214" lastFinishedPulling="2025-01-13 21:21:54.700167107 +0000 UTC m=+25.794954222" observedRunningTime="2025-01-13 21:21:55.072103341 +0000 UTC m=+26.166890456" watchObservedRunningTime="2025-01-13 21:21:55.072231381 +0000 UTC m=+26.167018496" Jan 13 21:21:55.157422 kubelet[2703]: E0113 21:21:55.157381 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.157422 kubelet[2703]: W0113 21:21:55.157404 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.157422 kubelet[2703]: E0113 21:21:55.157424 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.157671 kubelet[2703]: E0113 21:21:55.157615 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.157671 kubelet[2703]: W0113 21:21:55.157627 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.157671 kubelet[2703]: E0113 21:21:55.157659 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.157884 kubelet[2703]: E0113 21:21:55.157854 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.157884 kubelet[2703]: W0113 21:21:55.157869 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.157884 kubelet[2703]: E0113 21:21:55.157880 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:55.158048 kubelet[2703]: E0113 21:21:55.158025 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.158048 kubelet[2703]: W0113 21:21:55.158036 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.158048 kubelet[2703]: E0113 21:21:55.158048 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.158204 kubelet[2703]: E0113 21:21:55.158193 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.158204 kubelet[2703]: W0113 21:21:55.158203 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.158251 kubelet[2703]: E0113 21:21:55.158213 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.158519 kubelet[2703]: E0113 21:21:55.158475 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.158519 kubelet[2703]: W0113 21:21:55.158506 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.158519 kubelet[2703]: E0113 21:21:55.158520 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.158706 kubelet[2703]: E0113 21:21:55.158693 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.158706 kubelet[2703]: W0113 21:21:55.158704 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.158770 kubelet[2703]: E0113 21:21:55.158715 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.158881 kubelet[2703]: E0113 21:21:55.158868 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.158908 kubelet[2703]: W0113 21:21:55.158883 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.158908 kubelet[2703]: E0113 21:21:55.158895 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:55.159064 kubelet[2703]: E0113 21:21:55.159050 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.159064 kubelet[2703]: W0113 21:21:55.159063 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.159112 kubelet[2703]: E0113 21:21:55.159074 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.159223 kubelet[2703]: E0113 21:21:55.159213 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.159245 kubelet[2703]: W0113 21:21:55.159223 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.159245 kubelet[2703]: E0113 21:21:55.159233 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.159359 kubelet[2703]: E0113 21:21:55.159350 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.159359 kubelet[2703]: W0113 21:21:55.159359 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.159404 kubelet[2703]: E0113 21:21:55.159370 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.159503 kubelet[2703]: E0113 21:21:55.159494 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.159527 kubelet[2703]: W0113 21:21:55.159503 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.159527 kubelet[2703]: E0113 21:21:55.159519 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.159747 kubelet[2703]: E0113 21:21:55.159734 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.159775 kubelet[2703]: W0113 21:21:55.159748 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.159775 kubelet[2703]: E0113 21:21:55.159762 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:55.159926 kubelet[2703]: E0113 21:21:55.159915 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.159951 kubelet[2703]: W0113 21:21:55.159926 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.159951 kubelet[2703]: E0113 21:21:55.159937 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.160078 kubelet[2703]: E0113 21:21:55.160068 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.160078 kubelet[2703]: W0113 21:21:55.160077 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.160123 kubelet[2703]: E0113 21:21:55.160087 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.184507 kubelet[2703]: E0113 21:21:55.184473 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.184507 kubelet[2703]: W0113 21:21:55.184494 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.184507 kubelet[2703]: E0113 21:21:55.184511 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.184786 kubelet[2703]: E0113 21:21:55.184761 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.184786 kubelet[2703]: W0113 21:21:55.184775 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.184847 kubelet[2703]: E0113 21:21:55.184794 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.185031 kubelet[2703]: E0113 21:21:55.185000 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.185031 kubelet[2703]: W0113 21:21:55.185020 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.185099 kubelet[2703]: E0113 21:21:55.185037 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:55.185313 kubelet[2703]: E0113 21:21:55.185254 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.185313 kubelet[2703]: W0113 21:21:55.185267 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.185313 kubelet[2703]: E0113 21:21:55.185282 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.185449 kubelet[2703]: E0113 21:21:55.185435 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.185449 kubelet[2703]: W0113 21:21:55.185446 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.185502 kubelet[2703]: E0113 21:21:55.185460 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.185612 kubelet[2703]: E0113 21:21:55.185599 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.185612 kubelet[2703]: W0113 21:21:55.185610 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.185680 kubelet[2703]: E0113 21:21:55.185626 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.185831 kubelet[2703]: E0113 21:21:55.185817 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.185831 kubelet[2703]: W0113 21:21:55.185828 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.185898 kubelet[2703]: E0113 21:21:55.185858 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.185976 kubelet[2703]: E0113 21:21:55.185963 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.185976 kubelet[2703]: W0113 21:21:55.185973 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.186094 kubelet[2703]: E0113 21:21:55.186078 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:55.186121 kubelet[2703]: E0113 21:21:55.186109 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.186121 kubelet[2703]: W0113 21:21:55.186117 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.186166 kubelet[2703]: E0113 21:21:55.186132 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.186300 kubelet[2703]: E0113 21:21:55.186288 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.186325 kubelet[2703]: W0113 21:21:55.186300 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.186325 kubelet[2703]: E0113 21:21:55.186316 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.186473 kubelet[2703]: E0113 21:21:55.186461 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.186494 kubelet[2703]: W0113 21:21:55.186474 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.186494 kubelet[2703]: E0113 21:21:55.186485 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.186675 kubelet[2703]: E0113 21:21:55.186661 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.186675 kubelet[2703]: W0113 21:21:55.186674 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.186733 kubelet[2703]: E0113 21:21:55.186689 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.186899 kubelet[2703]: E0113 21:21:55.186880 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.186899 kubelet[2703]: W0113 21:21:55.186896 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.186957 kubelet[2703]: E0113 21:21:55.186915 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:55.187799 kubelet[2703]: E0113 21:21:55.187785 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.187799 kubelet[2703]: W0113 21:21:55.187798 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.187863 kubelet[2703]: E0113 21:21:55.187814 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.188009 kubelet[2703]: E0113 21:21:55.187994 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.188049 kubelet[2703]: W0113 21:21:55.188011 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.188779 kubelet[2703]: E0113 21:21:55.188727 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.188900 kubelet[2703]: E0113 21:21:55.188877 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.188900 kubelet[2703]: W0113 21:21:55.188892 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.188956 kubelet[2703]: E0113 21:21:55.188909 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.189738 kubelet[2703]: E0113 21:21:55.189712 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.189738 kubelet[2703]: W0113 21:21:55.189731 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.189807 kubelet[2703]: E0113 21:21:55.189747 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:55.190230 kubelet[2703]: E0113 21:21:55.190201 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:55.190230 kubelet[2703]: W0113 21:21:55.190216 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:55.190230 kubelet[2703]: E0113 21:21:55.190230 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:56.065448 kubelet[2703]: I0113 21:21:56.064453 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:21:56.065448 kubelet[2703]: E0113 21:21:56.065111 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:56.067178 kubelet[2703]: E0113 21:21:56.067073 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.067178 kubelet[2703]: W0113 21:21:56.067089 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.067178 kubelet[2703]: E0113 21:21:56.067108 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.067362 kubelet[2703]: E0113 21:21:56.067350 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.067418 kubelet[2703]: W0113 21:21:56.067408 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.067473 kubelet[2703]: E0113 21:21:56.067464 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.067792 kubelet[2703]: E0113 21:21:56.067696 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.067792 kubelet[2703]: W0113 21:21:56.067707 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.067792 kubelet[2703]: E0113 21:21:56.067720 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.067962 kubelet[2703]: E0113 21:21:56.067950 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.068023 kubelet[2703]: W0113 21:21:56.068012 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.068088 kubelet[2703]: E0113 21:21:56.068078 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:56.068371 kubelet[2703]: E0113 21:21:56.068356 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.068546 kubelet[2703]: W0113 21:21:56.068441 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.068546 kubelet[2703]: E0113 21:21:56.068461 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.068678 kubelet[2703]: E0113 21:21:56.068666 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.068738 kubelet[2703]: W0113 21:21:56.068727 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.068793 kubelet[2703]: E0113 21:21:56.068784 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.069101 kubelet[2703]: E0113 21:21:56.069085 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.069180 kubelet[2703]: W0113 21:21:56.069168 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.069301 kubelet[2703]: E0113 21:21:56.069222 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.069577 kubelet[2703]: E0113 21:21:56.069470 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.069577 kubelet[2703]: W0113 21:21:56.069482 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.069577 kubelet[2703]: E0113 21:21:56.069494 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.069852 kubelet[2703]: E0113 21:21:56.069794 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.069852 kubelet[2703]: W0113 21:21:56.069805 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.069852 kubelet[2703]: E0113 21:21:56.069819 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:56.070341 kubelet[2703]: E0113 21:21:56.070229 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.070341 kubelet[2703]: W0113 21:21:56.070242 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.070341 kubelet[2703]: E0113 21:21:56.070255 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.070509 kubelet[2703]: E0113 21:21:56.070497 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.070561 kubelet[2703]: W0113 21:21:56.070551 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.070610 kubelet[2703]: E0113 21:21:56.070602 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.070831 kubelet[2703]: E0113 21:21:56.070818 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.070983 kubelet[2703]: W0113 21:21:56.070894 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.070983 kubelet[2703]: E0113 21:21:56.070915 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.071125 kubelet[2703]: E0113 21:21:56.071113 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.071175 kubelet[2703]: W0113 21:21:56.071166 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.071225 kubelet[2703]: E0113 21:21:56.071216 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.072201 kubelet[2703]: E0113 21:21:56.072075 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.072201 kubelet[2703]: W0113 21:21:56.072092 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.072201 kubelet[2703]: E0113 21:21:56.072107 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:56.072397 kubelet[2703]: E0113 21:21:56.072384 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.072455 kubelet[2703]: W0113 21:21:56.072444 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.072510 kubelet[2703]: E0113 21:21:56.072500 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.091526 kubelet[2703]: E0113 21:21:56.091472 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.091526 kubelet[2703]: W0113 21:21:56.091490 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.091526 kubelet[2703]: E0113 21:21:56.091506 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.092447 kubelet[2703]: E0113 21:21:56.092274 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.092447 kubelet[2703]: W0113 21:21:56.092290 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.092447 kubelet[2703]: E0113 21:21:56.092366 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.092604 kubelet[2703]: E0113 21:21:56.092588 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.092653 kubelet[2703]: W0113 21:21:56.092604 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.092653 kubelet[2703]: E0113 21:21:56.092626 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.092894 kubelet[2703]: E0113 21:21:56.092878 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.092942 kubelet[2703]: W0113 21:21:56.092896 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.092942 kubelet[2703]: E0113 21:21:56.092916 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:56.093473 kubelet[2703]: E0113 21:21:56.093124 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.093473 kubelet[2703]: W0113 21:21:56.093136 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.093473 kubelet[2703]: E0113 21:21:56.093153 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.093651 kubelet[2703]: E0113 21:21:56.093624 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.093686 kubelet[2703]: W0113 21:21:56.093652 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.093686 kubelet[2703]: E0113 21:21:56.093672 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.093880 kubelet[2703]: E0113 21:21:56.093867 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.093880 kubelet[2703]: W0113 21:21:56.093880 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.094032 kubelet[2703]: E0113 21:21:56.093943 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.094032 kubelet[2703]: E0113 21:21:56.094020 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.094032 kubelet[2703]: W0113 21:21:56.094027 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.094168 kubelet[2703]: E0113 21:21:56.094136 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:56.094277 kubelet[2703]: E0113 21:21:56.094264 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.094311 kubelet[2703]: W0113 21:21:56.094278 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.094798 kubelet[2703]: E0113 21:21:56.094586 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.094798 kubelet[2703]: W0113 21:21:56.094603 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.094798 kubelet[2703]: E0113 21:21:56.094628 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.094798 kubelet[2703]: E0113 21:21:56.094690 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.095581 kubelet[2703]: E0113 21:21:56.095404 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.095581 kubelet[2703]: W0113 21:21:56.095416 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.095581 kubelet[2703]: E0113 21:21:56.095437 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.095747 kubelet[2703]: E0113 21:21:56.095609 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.095747 kubelet[2703]: W0113 21:21:56.095617 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.095747 kubelet[2703]: E0113 21:21:56.095630 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.096403 kubelet[2703]: E0113 21:21:56.096386 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.096403 kubelet[2703]: W0113 21:21:56.096401 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.096765 kubelet[2703]: E0113 21:21:56.096474 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:56.096885 kubelet[2703]: E0113 21:21:56.096855 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.096885 kubelet[2703]: W0113 21:21:56.096868 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.096954 kubelet[2703]: E0113 21:21:56.096915 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.097708 kubelet[2703]: E0113 21:21:56.097429 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.097708 kubelet[2703]: W0113 21:21:56.097444 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.097708 kubelet[2703]: E0113 21:21:56.097697 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.097708 kubelet[2703]: W0113 21:21:56.097711 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.097836 kubelet[2703]: E0113 21:21:56.097733 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.097936 kubelet[2703]: E0113 21:21:56.097894 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.098488 kubelet[2703]: E0113 21:21:56.098469 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.098488 kubelet[2703]: W0113 21:21:56.098485 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.098572 kubelet[2703]: E0113 21:21:56.098500 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:21:56.099431 kubelet[2703]: E0113 21:21:56.098962 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:21:56.099431 kubelet[2703]: W0113 21:21:56.098979 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:21:56.099431 kubelet[2703]: E0113 21:21:56.099004 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:21:56.262479 containerd[1526]: time="2025-01-13T21:21:56.261788157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:56.263140 containerd[1526]: time="2025-01-13T21:21:56.263105839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 13 21:21:56.264791 containerd[1526]: time="2025-01-13T21:21:56.264738882Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:56.268399 containerd[1526]: time="2025-01-13T21:21:56.267503247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:56.268399 containerd[1526]: time="2025-01-13T21:21:56.268271168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.56738214s" Jan 13 21:21:56.268399 containerd[1526]: time="2025-01-13T21:21:56.268298488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 13 21:21:56.270520 containerd[1526]: time="2025-01-13T21:21:56.270433612Z" level=info msg="CreateContainer within sandbox \"2e93198c41393c01fd0789184d035080f08b79c44ad28bf99bcf42db087df2f0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:21:56.297813 containerd[1526]: time="2025-01-13T21:21:56.297764018Z" level=info msg="CreateContainer within sandbox \"2e93198c41393c01fd0789184d035080f08b79c44ad28bf99bcf42db087df2f0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"910276998e724608a1bfd9c834963f4d1f1ea541a78b628ef09157e977e9ecb1\"" Jan 13 21:21:56.298286 containerd[1526]: time="2025-01-13T21:21:56.298259659Z" level=info msg="StartContainer for \"910276998e724608a1bfd9c834963f4d1f1ea541a78b628ef09157e977e9ecb1\"" Jan 13 21:21:56.327858 systemd[1]: run-containerd-runc-k8s.io-910276998e724608a1bfd9c834963f4d1f1ea541a78b628ef09157e977e9ecb1-runc.K6m13i.mount: Deactivated successfully. 
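The burst of driver-call.go / plugins.go errors above is the kubelet's FlexVolume prober repeatedly executing the `uds` driver with the `init` argument and decoding its stdout as JSON; the binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ does not exist yet, so the call fails with empty output, and decoding an empty string is what yields "unexpected end of JSON input". The pod2daemon-flexvol image pulled here backs the flexvol-driver init container created in the same entry, which in Calico's setup is what places that driver on the host. A minimal Go sketch (assumed structure, not the kubelet's actual driver-call code) that reproduces both messages:

```go
// Minimal sketch (not the kubelet's actual driver-call implementation) of a
// FlexVolume "init" probe. When the driver binary is missing, the exec fails
// and its output is empty; json.Unmarshal of empty input is exactly the
// "unexpected end of JSON input" error logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON shape a FlexVolume driver is expected to print.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeInit(driver string) error {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// A missing binary surfaces here (e.g. "no such file or directory" or
		// "executable file not found in $PATH") and leaves out empty.
		fmt.Printf("driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			driver, err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With out == "" this returns: unexpected end of JSON input
		return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, err)
	}
	fmt.Println("driver initialised, status:", st.Status)
	return nil
}

func main() {
	// Path taken from the log above; on this node it is not installed yet.
	fmt.Println(probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}
```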
Jan 13 21:21:56.363017 containerd[1526]: time="2025-01-13T21:21:56.362965330Z" level=info msg="StartContainer for \"910276998e724608a1bfd9c834963f4d1f1ea541a78b628ef09157e977e9ecb1\" returns successfully" Jan 13 21:21:56.451595 containerd[1526]: time="2025-01-13T21:21:56.437452097Z" level=info msg="shim disconnected" id=910276998e724608a1bfd9c834963f4d1f1ea541a78b628ef09157e977e9ecb1 namespace=k8s.io Jan 13 21:21:56.451595 containerd[1526]: time="2025-01-13T21:21:56.451585361Z" level=warning msg="cleaning up after shim disconnected" id=910276998e724608a1bfd9c834963f4d1f1ea541a78b628ef09157e977e9ecb1 namespace=k8s.io Jan 13 21:21:56.451595 containerd[1526]: time="2025-01-13T21:21:56.451606161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:21:56.463846 containerd[1526]: time="2025-01-13T21:21:56.463710182Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:21:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 21:21:57.000534 kubelet[2703]: E0113 21:21:57.000488 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44f8k" podUID="b673efd0-dcd2-4e1c-9b65-6e14b085060d" Jan 13 21:21:57.071024 kubelet[2703]: E0113 21:21:57.069192 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:57.071691 containerd[1526]: time="2025-01-13T21:21:57.070819090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:21:57.279620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-910276998e724608a1bfd9c834963f4d1f1ea541a78b628ef09157e977e9ecb1-rootfs.mount: Deactivated successfully. Jan 13 21:21:59.000893 kubelet[2703]: E0113 21:21:59.000853 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44f8k" podUID="b673efd0-dcd2-4e1c-9b65-6e14b085060d" Jan 13 21:21:59.778026 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:38402.service - OpenSSH per-connection server daemon (10.0.0.1:38402). 
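The recurring dns.go "Nameserver limits exceeded" warnings come from the kubelet capping the resolver list it passes to pods at three nameservers; the node's resolv.conf evidently lists more, so everything past 1.1.1.1, 1.0.0.1 and 8.8.8.8 is omitted. A small illustrative sketch of that trimming (the cap of three reflects upstream kubelet behaviour; the parsing here is simplified):

```go
// Illustrative sketch of the nameserver cap behind the "Nameserver limits
// exceeded" warnings above: only the first three resolvers from the host's
// resolv.conf are applied, the rest are dropped with a warning.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxDNSNameservers = 3 // per-pod resolver limit applied by the kubelet

func appliedNameservers(resolvConfPath string) (servers []string, trimmed bool, err error) {
	f, err := os.Open(resolvConfPath)
	if err != nil {
		return nil, false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, false, err
	}
	if len(servers) > maxDNSNameservers {
		servers, trimmed = servers[:maxDNSNameservers], true
	}
	return servers, trimmed, nil
}

func main() {
	servers, trimmed, err := appliedNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	if trimmed {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers, " "))
	}
}
```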
Jan 13 21:21:59.790045 containerd[1526]: time="2025-01-13T21:21:59.789339347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:59.790701 containerd[1526]: time="2025-01-13T21:21:59.790629549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 13 21:21:59.791819 containerd[1526]: time="2025-01-13T21:21:59.791776231Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:59.794281 containerd[1526]: time="2025-01-13T21:21:59.794213154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:59.795021 containerd[1526]: time="2025-01-13T21:21:59.794903035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.724049225s" Jan 13 21:21:59.795021 containerd[1526]: time="2025-01-13T21:21:59.794932715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 13 21:21:59.797890 containerd[1526]: time="2025-01-13T21:21:59.797861679Z" level=info msg="CreateContainer within sandbox \"2e93198c41393c01fd0789184d035080f08b79c44ad28bf99bcf42db087df2f0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:21:59.811514 sshd[3416]: Accepted publickey for core from 10.0.0.1 port 38402 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:21:59.812918 sshd[3416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:59.813416 containerd[1526]: time="2025-01-13T21:21:59.813320621Z" level=info msg="CreateContainer within sandbox \"2e93198c41393c01fd0789184d035080f08b79c44ad28bf99bcf42db087df2f0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b002d159830dc2a0c39041aa1bd81df263ab9380db99d11f3ad46ac857dd351a\"" Jan 13 21:21:59.816753 containerd[1526]: time="2025-01-13T21:21:59.813696222Z" level=info msg="StartContainer for \"b002d159830dc2a0c39041aa1bd81df263ab9380db99d11f3ad46ac857dd351a\"" Jan 13 21:21:59.819704 systemd-logind[1504]: New session 8 of user core. Jan 13 21:21:59.828928 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:21:59.863373 containerd[1526]: time="2025-01-13T21:21:59.863336811Z" level=info msg="StartContainer for \"b002d159830dc2a0c39041aa1bd81df263ab9380db99d11f3ad46ac857dd351a\" returns successfully" Jan 13 21:22:00.007191 sshd[3416]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:00.011210 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:22:00.012859 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:38402.service: Deactivated successfully. Jan 13 21:22:00.014447 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:22:00.016006 systemd-logind[1504]: Removed session 8. 
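The install-cni container started above lays down the Calico CNI plugin and its configuration, but the pod-sandbox failures that follow show the node is still not network-ready: every RunPodSandbox attempt aborts with "stat /var/lib/calico/nodename: no such file or directory", the file calico/node writes once it is up (as the error text itself hints). A minimal sketch of that lookup, assumed behaviour rather than Calico's actual source:

```go
// Minimal sketch (assumed, not Calico's actual implementation) of the node-name
// lookup behind the repeated sandbox errors below: until calico/node is running
// and has written /var/lib/calico/nodename, every CNI add/delete fails.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func calicoNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// A missing file yields the same shape of error seen in the log,
		// e.g. "... /var/lib/calico/nodename: no such file or directory".
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Println("cannot set up pod network:", err)
		return
	}
	fmt.Println("calico node name:", name)
}
```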
Jan 13 21:22:00.103653 kubelet[2703]: E0113 21:22:00.103557 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:00.456834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b002d159830dc2a0c39041aa1bd81df263ab9380db99d11f3ad46ac857dd351a-rootfs.mount: Deactivated successfully. Jan 13 21:22:00.462536 kubelet[2703]: I0113 21:22:00.462500 2703 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:22:00.473262 containerd[1526]: time="2025-01-13T21:22:00.473208908Z" level=info msg="shim disconnected" id=b002d159830dc2a0c39041aa1bd81df263ab9380db99d11f3ad46ac857dd351a namespace=k8s.io Jan 13 21:22:00.473603 containerd[1526]: time="2025-01-13T21:22:00.473446988Z" level=warning msg="cleaning up after shim disconnected" id=b002d159830dc2a0c39041aa1bd81df263ab9380db99d11f3ad46ac857dd351a namespace=k8s.io Jan 13 21:22:00.473603 containerd[1526]: time="2025-01-13T21:22:00.473465148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:22:00.577203 kubelet[2703]: I0113 21:22:00.577097 2703 topology_manager.go:215] "Topology Admit Handler" podUID="f7697a17-2ddc-4998-a5ff-c29dd1a74a22" podNamespace="kube-system" podName="coredns-76f75df574-gfwhd" Jan 13 21:22:00.668666 kubelet[2703]: I0113 21:22:00.666834 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7697a17-2ddc-4998-a5ff-c29dd1a74a22-config-volume\") pod \"coredns-76f75df574-gfwhd\" (UID: \"f7697a17-2ddc-4998-a5ff-c29dd1a74a22\") " pod="kube-system/coredns-76f75df574-gfwhd" Jan 13 21:22:00.668666 kubelet[2703]: I0113 21:22:00.666921 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjm7h\" (UniqueName: \"kubernetes.io/projected/f7697a17-2ddc-4998-a5ff-c29dd1a74a22-kube-api-access-tjm7h\") pod \"coredns-76f75df574-gfwhd\" (UID: \"f7697a17-2ddc-4998-a5ff-c29dd1a74a22\") " pod="kube-system/coredns-76f75df574-gfwhd" Jan 13 21:22:00.669584 kubelet[2703]: I0113 21:22:00.668979 2703 topology_manager.go:215] "Topology Admit Handler" podUID="89475a27-0916-4680-b302-fbf35e837e47" podNamespace="calico-apiserver" podName="calico-apiserver-8f84b7485-fpvqh" Jan 13 21:22:00.669584 kubelet[2703]: I0113 21:22:00.669136 2703 topology_manager.go:215] "Topology Admit Handler" podUID="d0305d93-9c64-4c59-b1c0-353d135c78a7" podNamespace="calico-apiserver" podName="calico-apiserver-8f84b7485-7wvv5" Jan 13 21:22:00.669584 kubelet[2703]: I0113 21:22:00.669233 2703 topology_manager.go:215] "Topology Admit Handler" podUID="5e45114b-d853-433a-9798-af8f1f159ae2" podNamespace="calico-system" podName="calico-kube-controllers-856b764fb4-jwzq6" Jan 13 21:22:00.669735 kubelet[2703]: I0113 21:22:00.669655 2703 topology_manager.go:215] "Topology Admit Handler" podUID="cc6bdfdd-cc3e-4a92-962e-c53a92f68c06" podNamespace="kube-system" podName="coredns-76f75df574-qhvr9" Jan 13 21:22:00.767759 kubelet[2703]: I0113 21:22:00.767626 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z499j\" (UniqueName: \"kubernetes.io/projected/5e45114b-d853-433a-9798-af8f1f159ae2-kube-api-access-z499j\") pod \"calico-kube-controllers-856b764fb4-jwzq6\" (UID: \"5e45114b-d853-433a-9798-af8f1f159ae2\") " pod="calico-system/calico-kube-controllers-856b764fb4-jwzq6" Jan 13 
21:22:00.767759 kubelet[2703]: I0113 21:22:00.767737 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89475a27-0916-4680-b302-fbf35e837e47-calico-apiserver-certs\") pod \"calico-apiserver-8f84b7485-fpvqh\" (UID: \"89475a27-0916-4680-b302-fbf35e837e47\") " pod="calico-apiserver/calico-apiserver-8f84b7485-fpvqh" Jan 13 21:22:00.767907 kubelet[2703]: I0113 21:22:00.767791 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc6bdfdd-cc3e-4a92-962e-c53a92f68c06-config-volume\") pod \"coredns-76f75df574-qhvr9\" (UID: \"cc6bdfdd-cc3e-4a92-962e-c53a92f68c06\") " pod="kube-system/coredns-76f75df574-qhvr9" Jan 13 21:22:00.767907 kubelet[2703]: I0113 21:22:00.767818 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d0305d93-9c64-4c59-b1c0-353d135c78a7-calico-apiserver-certs\") pod \"calico-apiserver-8f84b7485-7wvv5\" (UID: \"d0305d93-9c64-4c59-b1c0-353d135c78a7\") " pod="calico-apiserver/calico-apiserver-8f84b7485-7wvv5" Jan 13 21:22:00.767907 kubelet[2703]: I0113 21:22:00.767841 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xchh\" (UniqueName: \"kubernetes.io/projected/d0305d93-9c64-4c59-b1c0-353d135c78a7-kube-api-access-8xchh\") pod \"calico-apiserver-8f84b7485-7wvv5\" (UID: \"d0305d93-9c64-4c59-b1c0-353d135c78a7\") " pod="calico-apiserver/calico-apiserver-8f84b7485-7wvv5" Jan 13 21:22:00.767907 kubelet[2703]: I0113 21:22:00.767867 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9np77\" (UniqueName: \"kubernetes.io/projected/89475a27-0916-4680-b302-fbf35e837e47-kube-api-access-9np77\") pod \"calico-apiserver-8f84b7485-fpvqh\" (UID: \"89475a27-0916-4680-b302-fbf35e837e47\") " pod="calico-apiserver/calico-apiserver-8f84b7485-fpvqh" Jan 13 21:22:00.767907 kubelet[2703]: I0113 21:22:00.767892 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e45114b-d853-433a-9798-af8f1f159ae2-tigera-ca-bundle\") pod \"calico-kube-controllers-856b764fb4-jwzq6\" (UID: \"5e45114b-d853-433a-9798-af8f1f159ae2\") " pod="calico-system/calico-kube-controllers-856b764fb4-jwzq6" Jan 13 21:22:00.768033 kubelet[2703]: I0113 21:22:00.767911 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpjnc\" (UniqueName: \"kubernetes.io/projected/cc6bdfdd-cc3e-4a92-962e-c53a92f68c06-kube-api-access-dpjnc\") pod \"coredns-76f75df574-qhvr9\" (UID: \"cc6bdfdd-cc3e-4a92-962e-c53a92f68c06\") " pod="kube-system/coredns-76f75df574-qhvr9" Jan 13 21:22:00.892255 kubelet[2703]: E0113 21:22:00.892208 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:00.894652 containerd[1526]: time="2025-01-13T21:22:00.892933261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gfwhd,Uid:f7697a17-2ddc-4998-a5ff-c29dd1a74a22,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:00.973428 containerd[1526]: time="2025-01-13T21:22:00.973386487Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856b764fb4-jwzq6,Uid:5e45114b-d853-433a-9798-af8f1f159ae2,Namespace:calico-system,Attempt:0,}" Jan 13 21:22:00.974599 containerd[1526]: time="2025-01-13T21:22:00.974568129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f84b7485-fpvqh,Uid:89475a27-0916-4680-b302-fbf35e837e47,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:22:00.976019 kubelet[2703]: E0113 21:22:00.975991 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:00.976519 containerd[1526]: time="2025-01-13T21:22:00.976285171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qhvr9,Uid:cc6bdfdd-cc3e-4a92-962e-c53a92f68c06,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:00.978071 containerd[1526]: time="2025-01-13T21:22:00.978041373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f84b7485-7wvv5,Uid:d0305d93-9c64-4c59-b1c0-353d135c78a7,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:22:01.012258 containerd[1526]: time="2025-01-13T21:22:01.010299535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44f8k,Uid:b673efd0-dcd2-4e1c-9b65-6e14b085060d,Namespace:calico-system,Attempt:0,}" Jan 13 21:22:01.114134 kubelet[2703]: E0113 21:22:01.112876 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:01.115919 containerd[1526]: time="2025-01-13T21:22:01.114828024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:22:01.200170 containerd[1526]: time="2025-01-13T21:22:01.199890729Z" level=error msg="Failed to destroy network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.200532 containerd[1526]: time="2025-01-13T21:22:01.200495690Z" level=error msg="encountered an error cleaning up failed sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.200577 containerd[1526]: time="2025-01-13T21:22:01.200555170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gfwhd,Uid:f7697a17-2ddc-4998-a5ff-c29dd1a74a22,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.203956 containerd[1526]: time="2025-01-13T21:22:01.203911214Z" level=error msg="Failed to destroy network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
21:22:01.204782 kubelet[2703]: E0113 21:22:01.204679 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.204893 kubelet[2703]: E0113 21:22:01.204807 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gfwhd" Jan 13 21:22:01.204893 kubelet[2703]: E0113 21:22:01.204830 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gfwhd" Jan 13 21:22:01.204957 kubelet[2703]: E0113 21:22:01.204892 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gfwhd_kube-system(f7697a17-2ddc-4998-a5ff-c29dd1a74a22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gfwhd_kube-system(f7697a17-2ddc-4998-a5ff-c29dd1a74a22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gfwhd" podUID="f7697a17-2ddc-4998-a5ff-c29dd1a74a22" Jan 13 21:22:01.210407 containerd[1526]: time="2025-01-13T21:22:01.210354262Z" level=error msg="encountered an error cleaning up failed sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.210494 containerd[1526]: time="2025-01-13T21:22:01.210423382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f84b7485-7wvv5,Uid:d0305d93-9c64-4c59-b1c0-353d135c78a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.211001 kubelet[2703]: E0113 21:22:01.210701 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.211001 kubelet[2703]: E0113 21:22:01.210748 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f84b7485-7wvv5" Jan 13 21:22:01.211001 kubelet[2703]: E0113 21:22:01.210767 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f84b7485-7wvv5" Jan 13 21:22:01.211144 kubelet[2703]: E0113 21:22:01.210814 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f84b7485-7wvv5_calico-apiserver(d0305d93-9c64-4c59-b1c0-353d135c78a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f84b7485-7wvv5_calico-apiserver(d0305d93-9c64-4c59-b1c0-353d135c78a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f84b7485-7wvv5" podUID="d0305d93-9c64-4c59-b1c0-353d135c78a7" Jan 13 21:22:01.211945 containerd[1526]: time="2025-01-13T21:22:01.211674664Z" level=error msg="Failed to destroy network for sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.212022 containerd[1526]: time="2025-01-13T21:22:01.211982704Z" level=error msg="encountered an error cleaning up failed sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.212054 containerd[1526]: time="2025-01-13T21:22:01.212021304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44f8k,Uid:b673efd0-dcd2-4e1c-9b65-6e14b085060d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.212306 kubelet[2703]: E0113 21:22:01.212172 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.212306 kubelet[2703]: E0113 21:22:01.212219 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-44f8k" Jan 13 21:22:01.212306 kubelet[2703]: E0113 21:22:01.212237 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-44f8k" Jan 13 21:22:01.212413 kubelet[2703]: E0113 21:22:01.212281 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-44f8k_calico-system(b673efd0-dcd2-4e1c-9b65-6e14b085060d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-44f8k_calico-system(b673efd0-dcd2-4e1c-9b65-6e14b085060d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-44f8k" podUID="b673efd0-dcd2-4e1c-9b65-6e14b085060d" Jan 13 21:22:01.215209 containerd[1526]: time="2025-01-13T21:22:01.215160148Z" level=error msg="Failed to destroy network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.215600 containerd[1526]: time="2025-01-13T21:22:01.215568909Z" level=error msg="encountered an error cleaning up failed sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.215741 containerd[1526]: time="2025-01-13T21:22:01.215713989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856b764fb4-jwzq6,Uid:5e45114b-d853-433a-9798-af8f1f159ae2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.216311 kubelet[2703]: E0113 21:22:01.216285 2703 remote_runtime.go:193] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.216371 kubelet[2703]: E0113 21:22:01.216339 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-856b764fb4-jwzq6" Jan 13 21:22:01.216371 kubelet[2703]: E0113 21:22:01.216362 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-856b764fb4-jwzq6" Jan 13 21:22:01.216426 kubelet[2703]: E0113 21:22:01.216401 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-856b764fb4-jwzq6_calico-system(5e45114b-d853-433a-9798-af8f1f159ae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-856b764fb4-jwzq6_calico-system(5e45114b-d853-433a-9798-af8f1f159ae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-856b764fb4-jwzq6" podUID="5e45114b-d853-433a-9798-af8f1f159ae2" Jan 13 21:22:01.218898 containerd[1526]: time="2025-01-13T21:22:01.218784713Z" level=error msg="Failed to destroy network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.220302 containerd[1526]: time="2025-01-13T21:22:01.220217954Z" level=error msg="encountered an error cleaning up failed sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.220302 containerd[1526]: time="2025-01-13T21:22:01.220264314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f84b7485-fpvqh,Uid:89475a27-0916-4680-b302-fbf35e837e47,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 13 21:22:01.221198 containerd[1526]: time="2025-01-13T21:22:01.221110755Z" level=error msg="Failed to destroy network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.221716 containerd[1526]: time="2025-01-13T21:22:01.221568396Z" level=error msg="encountered an error cleaning up failed sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.221716 containerd[1526]: time="2025-01-13T21:22:01.221618596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qhvr9,Uid:cc6bdfdd-cc3e-4a92-962e-c53a92f68c06,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.221812 kubelet[2703]: E0113 21:22:01.221544 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.221812 kubelet[2703]: E0113 21:22:01.221585 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f84b7485-fpvqh" Jan 13 21:22:01.221812 kubelet[2703]: E0113 21:22:01.221610 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f84b7485-fpvqh" Jan 13 21:22:01.221891 kubelet[2703]: E0113 21:22:01.221660 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f84b7485-fpvqh_calico-apiserver(89475a27-0916-4680-b302-fbf35e837e47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f84b7485-fpvqh_calico-apiserver(89475a27-0916-4680-b302-fbf35e837e47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f84b7485-fpvqh" podUID="89475a27-0916-4680-b302-fbf35e837e47" Jan 13 21:22:01.221891 kubelet[2703]: E0113 21:22:01.221830 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:01.221891 kubelet[2703]: E0113 21:22:01.221868 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qhvr9" Jan 13 21:22:01.222000 kubelet[2703]: E0113 21:22:01.221892 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qhvr9" Jan 13 21:22:01.222000 kubelet[2703]: E0113 21:22:01.221937 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qhvr9_kube-system(cc6bdfdd-cc3e-4a92-962e-c53a92f68c06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qhvr9_kube-system(cc6bdfdd-cc3e-4a92-962e-c53a92f68c06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qhvr9" podUID="cc6bdfdd-cc3e-4a92-962e-c53a92f68c06" Jan 13 21:22:02.114690 kubelet[2703]: I0113 21:22:02.114625 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:02.115964 containerd[1526]: time="2025-01-13T21:22:02.115590092Z" level=info msg="StopPodSandbox for \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\"" Jan 13 21:22:02.115964 containerd[1526]: time="2025-01-13T21:22:02.115775852Z" level=info msg="Ensure that sandbox 0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122 in task-service has been cleanup successfully" Jan 13 21:22:02.117346 kubelet[2703]: I0113 21:22:02.115898 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:02.117389 containerd[1526]: time="2025-01-13T21:22:02.116717014Z" level=info msg="StopPodSandbox for \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\"" Jan 13 21:22:02.117389 containerd[1526]: time="2025-01-13T21:22:02.116847734Z" level=info msg="Ensure that sandbox 
6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312 in task-service has been cleanup successfully" Jan 13 21:22:02.119992 kubelet[2703]: I0113 21:22:02.119969 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:02.121358 containerd[1526]: time="2025-01-13T21:22:02.121240939Z" level=info msg="StopPodSandbox for \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\"" Jan 13 21:22:02.121612 containerd[1526]: time="2025-01-13T21:22:02.121578499Z" level=info msg="Ensure that sandbox 0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41 in task-service has been cleanup successfully" Jan 13 21:22:02.122569 kubelet[2703]: I0113 21:22:02.122200 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:02.124652 containerd[1526]: time="2025-01-13T21:22:02.124525223Z" level=info msg="StopPodSandbox for \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\"" Jan 13 21:22:02.124721 containerd[1526]: time="2025-01-13T21:22:02.124687503Z" level=info msg="Ensure that sandbox bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa in task-service has been cleanup successfully" Jan 13 21:22:02.128079 kubelet[2703]: I0113 21:22:02.128045 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:02.128697 containerd[1526]: time="2025-01-13T21:22:02.128620387Z" level=info msg="StopPodSandbox for \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\"" Jan 13 21:22:02.130563 containerd[1526]: time="2025-01-13T21:22:02.130164829Z" level=info msg="Ensure that sandbox 2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353 in task-service has been cleanup successfully" Jan 13 21:22:02.130791 kubelet[2703]: I0113 21:22:02.130769 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:02.131347 containerd[1526]: time="2025-01-13T21:22:02.131316470Z" level=info msg="StopPodSandbox for \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\"" Jan 13 21:22:02.131492 containerd[1526]: time="2025-01-13T21:22:02.131459911Z" level=info msg="Ensure that sandbox a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c in task-service has been cleanup successfully" Jan 13 21:22:02.165169 containerd[1526]: time="2025-01-13T21:22:02.164344589Z" level=error msg="StopPodSandbox for \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\" failed" error="failed to destroy network for sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:02.165291 kubelet[2703]: E0113 21:22:02.164797 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:02.165291 kubelet[2703]: E0113 21:22:02.164898 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41"} Jan 13 21:22:02.165291 kubelet[2703]: E0113 21:22:02.164953 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b673efd0-dcd2-4e1c-9b65-6e14b085060d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:02.165291 kubelet[2703]: E0113 21:22:02.164986 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b673efd0-dcd2-4e1c-9b65-6e14b085060d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-44f8k" podUID="b673efd0-dcd2-4e1c-9b65-6e14b085060d" Jan 13 21:22:02.168282 containerd[1526]: time="2025-01-13T21:22:02.168236073Z" level=error msg="StopPodSandbox for \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\" failed" error="failed to destroy network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:02.168500 kubelet[2703]: E0113 21:22:02.168460 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:02.168563 kubelet[2703]: E0113 21:22:02.168514 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122"} Jan 13 21:22:02.168563 kubelet[2703]: E0113 21:22:02.168550 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89475a27-0916-4680-b302-fbf35e837e47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:02.168660 kubelet[2703]: E0113 21:22:02.168577 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89475a27-0916-4680-b302-fbf35e837e47\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f84b7485-fpvqh" podUID="89475a27-0916-4680-b302-fbf35e837e47" Jan 13 21:22:02.171816 containerd[1526]: time="2025-01-13T21:22:02.171773197Z" level=error msg="StopPodSandbox for \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\" failed" error="failed to destroy network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:02.172417 kubelet[2703]: E0113 21:22:02.172314 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:02.172489 kubelet[2703]: E0113 21:22:02.172425 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312"} Jan 13 21:22:02.172489 kubelet[2703]: E0113 21:22:02.172474 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e45114b-d853-433a-9798-af8f1f159ae2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:02.172647 kubelet[2703]: E0113 21:22:02.172623 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e45114b-d853-433a-9798-af8f1f159ae2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-856b764fb4-jwzq6" podUID="5e45114b-d853-433a-9798-af8f1f159ae2" Jan 13 21:22:02.175126 containerd[1526]: time="2025-01-13T21:22:02.175088641Z" level=error msg="StopPodSandbox for \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\" failed" error="failed to destroy network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:02.176073 kubelet[2703]: E0113 21:22:02.175847 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:02.176145 kubelet[2703]: E0113 21:22:02.176088 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c"} Jan 13 21:22:02.176169 kubelet[2703]: E0113 21:22:02.176145 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0305d93-9c64-4c59-b1c0-353d135c78a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:02.176213 kubelet[2703]: E0113 21:22:02.176174 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0305d93-9c64-4c59-b1c0-353d135c78a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f84b7485-7wvv5" podUID="d0305d93-9c64-4c59-b1c0-353d135c78a7" Jan 13 21:22:02.179953 containerd[1526]: time="2025-01-13T21:22:02.179882567Z" level=error msg="StopPodSandbox for \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\" failed" error="failed to destroy network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:02.180270 kubelet[2703]: E0113 21:22:02.180240 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:02.180319 kubelet[2703]: E0113 21:22:02.180278 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa"} Jan 13 21:22:02.180344 kubelet[2703]: E0113 21:22:02.180320 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc6bdfdd-cc3e-4a92-962e-c53a92f68c06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:02.180388 kubelet[2703]: E0113 21:22:02.180346 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc6bdfdd-cc3e-4a92-962e-c53a92f68c06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qhvr9" podUID="cc6bdfdd-cc3e-4a92-962e-c53a92f68c06" Jan 13 21:22:02.184627 containerd[1526]: time="2025-01-13T21:22:02.184578412Z" level=error msg="StopPodSandbox for \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\" failed" error="failed to destroy network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:22:02.184812 kubelet[2703]: E0113 21:22:02.184782 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:02.184856 kubelet[2703]: E0113 21:22:02.184817 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353"} Jan 13 21:22:02.184856 kubelet[2703]: E0113 21:22:02.184849 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7697a17-2ddc-4998-a5ff-c29dd1a74a22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:22:02.184925 kubelet[2703]: E0113 21:22:02.184874 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7697a17-2ddc-4998-a5ff-c29dd1a74a22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gfwhd" podUID="f7697a17-2ddc-4998-a5ff-c29dd1a74a22" Jan 13 21:22:05.019107 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:56644.service - OpenSSH per-connection server daemon (10.0.0.1:56644). 
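Every failed add and delete above fails the same way: the Calico CNI plugin cannot read /var/lib/calico/nodename, because the calico/node container — which writes that file and mounts /var/lib/calico from the host — has not started yet. A minimal sketch of that kind of guard, assuming a plain file read rather than Calico's actual implementation:

package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by the calico/node container when it starts and is
// visible to the CNI plugin only because /var/lib/calico is mounted from the
// host; until then every ADD and DEL fails with the same stat error above.
const nodenameFile = "/var/lib/calico/nodename"

func loadNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := loadNodename()
	if err != nil {
		fmt.Println("cni: cannot set up pod network:", err)
		return
	}
	fmt.Println("running on node", name)
}

Once the calico/node image finishes pulling and the container starts (21:22:05 below), the same sandboxes are torn down and recreated successfully.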
Jan 13 21:22:05.071930 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 56644 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:05.073104 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:05.077956 systemd-logind[1504]: New session 9 of user core. Jan 13 21:22:05.091916 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:22:05.215616 sshd[3866]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:05.220068 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:56644.service: Deactivated successfully. Jan 13 21:22:05.223779 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:22:05.225868 systemd-logind[1504]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:22:05.227144 systemd-logind[1504]: Removed session 9. Jan 13 21:22:05.255321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686626792.mount: Deactivated successfully. Jan 13 21:22:05.448142 containerd[1526]: time="2025-01-13T21:22:05.448078450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:05.448711 containerd[1526]: time="2025-01-13T21:22:05.448605250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 13 21:22:05.449573 containerd[1526]: time="2025-01-13T21:22:05.449544691Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:05.451446 containerd[1526]: time="2025-01-13T21:22:05.451395613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:05.452298 containerd[1526]: time="2025-01-13T21:22:05.452081414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.33717715s" Jan 13 21:22:05.452298 containerd[1526]: time="2025-01-13T21:22:05.452115214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 13 21:22:05.463140 containerd[1526]: time="2025-01-13T21:22:05.463103424Z" level=info msg="CreateContainer within sandbox \"2e93198c41393c01fd0789184d035080f08b79c44ad28bf99bcf42db087df2f0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:22:05.480707 containerd[1526]: time="2025-01-13T21:22:05.480670161Z" level=info msg="CreateContainer within sandbox \"2e93198c41393c01fd0789184d035080f08b79c44ad28bf99bcf42db087df2f0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e2c7f7b7fbf7047b646e55e1a3cba9a24a963d7b6537c90a61d3cf90f8c756e1\"" Jan 13 21:22:05.481272 containerd[1526]: time="2025-01-13T21:22:05.481174242Z" level=info msg="StartContainer for \"e2c7f7b7fbf7047b646e55e1a3cba9a24a963d7b6537c90a61d3cf90f8c756e1\"" Jan 13 21:22:05.616242 containerd[1526]: time="2025-01-13T21:22:05.616118770Z" level=info msg="StartContainer for 
\"e2c7f7b7fbf7047b646e55e1a3cba9a24a963d7b6537c90a61d3cf90f8c756e1\" returns successfully" Jan 13 21:22:05.711281 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:22:05.711414 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 21:22:06.141832 kubelet[2703]: E0113 21:22:06.141761 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:06.162760 kubelet[2703]: I0113 21:22:06.162607 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-6bthh" podStartSLOduration=1.808802908 podStartE2EDuration="14.162552042s" podCreationTimestamp="2025-01-13 21:21:52 +0000 UTC" firstStartedPulling="2025-01-13 21:21:53.09856196 +0000 UTC m=+24.193349035" lastFinishedPulling="2025-01-13 21:22:05.452311054 +0000 UTC m=+36.547098169" observedRunningTime="2025-01-13 21:22:06.162043762 +0000 UTC m=+37.256830877" watchObservedRunningTime="2025-01-13 21:22:06.162552042 +0000 UTC m=+37.257339157" Jan 13 21:22:07.143897 kubelet[2703]: E0113 21:22:07.143087 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:07.369861 kubelet[2703]: I0113 21:22:07.369284 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:07.370008 kubelet[2703]: E0113 21:22:07.369973 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:08.145201 kubelet[2703]: E0113 21:22:08.145155 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:08.231675 kernel: bpftool[4157]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:22:08.373721 systemd-networkd[1226]: vxlan.calico: Link UP Jan 13 21:22:08.373727 systemd-networkd[1226]: vxlan.calico: Gained carrier Jan 13 21:22:10.230903 systemd[1]: Started sshd@9-10.0.0.88:22-10.0.0.1:56646.service - OpenSSH per-connection server daemon (10.0.0.1:56646). Jan 13 21:22:10.271511 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 56646 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:10.273192 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:10.277345 systemd-logind[1504]: New session 10 of user core. Jan 13 21:22:10.289942 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:22:10.291749 systemd-networkd[1226]: vxlan.calico: Gained IPv6LL Jan 13 21:22:10.417992 sshd[4232]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:10.423895 systemd[1]: Started sshd@10-10.0.0.88:22-10.0.0.1:56648.service - OpenSSH per-connection server daemon (10.0.0.1:56648). Jan 13 21:22:10.424721 systemd[1]: sshd@9-10.0.0.88:22-10.0.0.1:56646.service: Deactivated successfully. Jan 13 21:22:10.428396 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:22:10.429295 systemd-logind[1504]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:22:10.431366 systemd-logind[1504]: Removed session 10. 
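The pod_startup_latency_tracker entry above for calico-node-6bthh is self-consistent: podStartE2EDuration is the observed running time minus the pod creation time, and podStartSLOduration is that figure minus the time spent pulling the calico/node image. Recomputing from the logged timestamps (the final digits differ slightly because the tracker mixes separate clock reads):

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func ts(v string) time.Time {
	t, err := time.Parse(layout, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-01-13 21:21:52 +0000 UTC")
	firstPull := ts("2025-01-13 21:21:53.09856196 +0000 UTC")
	lastPull := ts("2025-01-13 21:22:05.452311054 +0000 UTC")
	running := ts("2025-01-13 21:22:06.162552042 +0000 UTC")

	e2e := running.Sub(created)      // ~14.1626s, the logged podStartE2EDuration
	pull := lastPull.Sub(firstPull)  // ~12.3537s spent pulling calico/node:v3.29.1
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull ~1.8088s, the logged podStartSLOduration
}

In other words, roughly 14.16 s elapsed end to end, of which about 12.35 s was the image pull, leaving about 1.81 s of startup work attributed to the pod itself.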
Jan 13 21:22:10.452077 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 56648 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:10.453300 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:10.457551 systemd-logind[1504]: New session 11 of user core. Jan 13 21:22:10.465904 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:22:10.636043 sshd[4246]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:10.647032 systemd[1]: Started sshd@11-10.0.0.88:22-10.0.0.1:56658.service - OpenSSH per-connection server daemon (10.0.0.1:56658). Jan 13 21:22:10.647475 systemd[1]: sshd@10-10.0.0.88:22-10.0.0.1:56648.service: Deactivated successfully. Jan 13 21:22:10.652681 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:22:10.657429 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:22:10.662770 systemd-logind[1504]: Removed session 11. Jan 13 21:22:10.682708 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 56658 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:10.683626 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:10.687531 systemd-logind[1504]: New session 12 of user core. Jan 13 21:22:10.693967 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:22:10.814231 sshd[4259]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:10.817616 systemd-logind[1504]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:22:10.818402 systemd[1]: sshd@11-10.0.0.88:22-10.0.0.1:56658.service: Deactivated successfully. Jan 13 21:22:10.820165 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:22:10.820920 systemd-logind[1504]: Removed session 12. Jan 13 21:22:13.001655 containerd[1526]: time="2025-01-13T21:22:13.001324664Z" level=info msg="StopPodSandbox for \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\"" Jan 13 21:22:13.001655 containerd[1526]: time="2025-01-13T21:22:13.001399344Z" level=info msg="StopPodSandbox for \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\"" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.094 [INFO][4314] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.094 [INFO][4314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" iface="eth0" netns="/var/run/netns/cni-c9e87cd3-f9bd-faf1-23e6-ddbc28e76c39" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.095 [INFO][4314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" iface="eth0" netns="/var/run/netns/cni-c9e87cd3-f9bd-faf1-23e6-ddbc28e76c39" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.096 [INFO][4314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" iface="eth0" netns="/var/run/netns/cni-c9e87cd3-f9bd-faf1-23e6-ddbc28e76c39" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.096 [INFO][4314] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.096 [INFO][4314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.213 [INFO][4329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" HandleID="k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.214 [INFO][4329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.214 [INFO][4329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.227 [WARNING][4329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" HandleID="k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.227 [INFO][4329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" HandleID="k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.229 [INFO][4329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:13.234066 containerd[1526]: 2025-01-13 21:22:13.231 [INFO][4314] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:13.236976 containerd[1526]: time="2025-01-13T21:22:13.234829597Z" level=info msg="TearDown network for sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\" successfully" Jan 13 21:22:13.236976 containerd[1526]: time="2025-01-13T21:22:13.234868357Z" level=info msg="StopPodSandbox for \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\" returns successfully" Jan 13 21:22:13.237593 containerd[1526]: time="2025-01-13T21:22:13.237558239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44f8k,Uid:b673efd0-dcd2-4e1c-9b65-6e14b085060d,Namespace:calico-system,Attempt:1,}" Jan 13 21:22:13.237892 systemd[1]: run-netns-cni\x2dc9e87cd3\x2df9bd\x2dfaf1\x2d23e6\x2dddbc28e76c39.mount: Deactivated successfully. Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.105 [INFO][4315] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.105 [INFO][4315] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" iface="eth0" netns="/var/run/netns/cni-65de4b9f-3fac-8deb-c7af-2fa723c69a1a" Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.106 [INFO][4315] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" iface="eth0" netns="/var/run/netns/cni-65de4b9f-3fac-8deb-c7af-2fa723c69a1a" Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.106 [INFO][4315] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" iface="eth0" netns="/var/run/netns/cni-65de4b9f-3fac-8deb-c7af-2fa723c69a1a" Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.106 [INFO][4315] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.106 [INFO][4315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.213 [INFO][4332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" HandleID="k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.214 [INFO][4332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.229 [INFO][4332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.242 [WARNING][4332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" HandleID="k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.243 [INFO][4332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" HandleID="k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.245 [INFO][4332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:13.250042 containerd[1526]: 2025-01-13 21:22:13.247 [INFO][4315] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:13.251900 containerd[1526]: time="2025-01-13T21:22:13.251792367Z" level=info msg="TearDown network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\" successfully" Jan 13 21:22:13.251900 containerd[1526]: time="2025-01-13T21:22:13.251828327Z" level=info msg="StopPodSandbox for \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\" returns successfully" Jan 13 21:22:13.252363 systemd[1]: run-netns-cni\x2d65de4b9f\x2d3fac\x2d8deb\x2dc7af\x2d2fa723c69a1a.mount: Deactivated successfully. 
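Both teardowns above log a WARNING that the address being released does not exist: the earlier failed adds never claimed an IP, yet the delete path still has to report success so kubelet can recreate the sandboxes. A sketch of that idempotent-release behaviour, using an in-memory map rather than Calico's datastore:

package main

import (
	"fmt"
	"sync"
)

// ipamStore maps an IPAM handle (one per sandbox) to the address it claimed.
type ipamStore struct {
	mu   sync.Mutex
	byID map[string]string
}

// Release treats "nothing to release" as success: CNI DEL has to be safe to
// repeat, and a sandbox whose ADD failed early never claimed an address at all.
func (s *ipamStore) Release(handleID string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.byID[handleID]; !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist, ignoring\n", handleID)
		return
	}
	delete(s.byID, handleID)
}

func main() {
	s := &ipamStore{byID: map[string]string{}}
	// Mirrors the teardown of sandboxes whose network setup never succeeded.
	s.Release("k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41")
	s.Release("k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122")
}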
Jan 13 21:22:13.254704 containerd[1526]: time="2025-01-13T21:22:13.254449808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f84b7485-fpvqh,Uid:89475a27-0916-4680-b302-fbf35e837e47,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:22:13.379211 systemd-networkd[1226]: calia5cb24fae09: Link UP Jan 13 21:22:13.379911 systemd-networkd[1226]: calia5cb24fae09: Gained carrier Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.297 [INFO][4348] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--44f8k-eth0 csi-node-driver- calico-system b673efd0-dcd2-4e1c-9b65-6e14b085060d 882 0 2025-01-13 21:21:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-44f8k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia5cb24fae09 [] []}} ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Namespace="calico-system" Pod="csi-node-driver-44f8k" WorkloadEndpoint="localhost-k8s-csi--node--driver--44f8k-" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.297 [INFO][4348] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Namespace="calico-system" Pod="csi-node-driver-44f8k" WorkloadEndpoint="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.329 [INFO][4373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" HandleID="k8s-pod-network.34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.341 [INFO][4373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" HandleID="k8s-pod-network.34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027bea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-44f8k", "timestamp":"2025-01-13 21:22:13.329550651 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.341 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.341 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.342 [INFO][4373] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.343 [INFO][4373] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" host="localhost" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.349 [INFO][4373] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.354 [INFO][4373] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.356 [INFO][4373] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.358 [INFO][4373] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.359 [INFO][4373] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" host="localhost" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.361 [INFO][4373] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.365 [INFO][4373] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" host="localhost" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.370 [INFO][4373] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" host="localhost" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.371 [INFO][4373] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" host="localhost" Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.371 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
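The assignment walk above is Calico's block-affinity scheme: this host already holds an affinity for 192.168.88.128/26, so that block is loaded and the first free address, 192.168.88.129, is handed out. A much simplified sketch of picking the next free address from an affine block (the data layout here is assumed, not Calico's):

package main

import (
	"fmt"
	"net"
)

// block is a simplified stand-in for a Calico IPAM block: a CIDR this host has
// an affinity for, plus the set of addresses already handed out from it.
type block struct {
	cidr *net.IPNet
	used map[string]bool
}

func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// assign returns the lowest unused address after the block's network address,
// which is how 192.168.88.129 comes out first from 192.168.88.128/26 above.
func (b *block) assign() (net.IP, bool) {
	for ip := nextIP(b.cidr.IP.Mask(b.cidr.Mask)); b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.used[ip.String()] {
			b.used[ip.String()] = true
			return ip, true
		}
	}
	return nil, false
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, used: map[string]bool{}}
	ip, _ := b.assign()
	fmt.Println(ip) // 192.168.88.129
}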
Jan 13 21:22:13.413745 containerd[1526]: 2025-01-13 21:22:13.371 [INFO][4373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" HandleID="k8s-pod-network.34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.415726 containerd[1526]: 2025-01-13 21:22:13.375 [INFO][4348] cni-plugin/k8s.go 386: Populated endpoint ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Namespace="calico-system" Pod="csi-node-driver-44f8k" WorkloadEndpoint="localhost-k8s-csi--node--driver--44f8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--44f8k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b673efd0-dcd2-4e1c-9b65-6e14b085060d", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-44f8k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5cb24fae09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:13.415726 containerd[1526]: 2025-01-13 21:22:13.375 [INFO][4348] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Namespace="calico-system" Pod="csi-node-driver-44f8k" WorkloadEndpoint="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.415726 containerd[1526]: 2025-01-13 21:22:13.376 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5cb24fae09 ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Namespace="calico-system" Pod="csi-node-driver-44f8k" WorkloadEndpoint="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.415726 containerd[1526]: 2025-01-13 21:22:13.378 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Namespace="calico-system" Pod="csi-node-driver-44f8k" WorkloadEndpoint="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.415726 containerd[1526]: 2025-01-13 21:22:13.379 [INFO][4348] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Namespace="calico-system" Pod="csi-node-driver-44f8k" WorkloadEndpoint="localhost-k8s-csi--node--driver--44f8k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--44f8k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b673efd0-dcd2-4e1c-9b65-6e14b085060d", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f", Pod:"csi-node-driver-44f8k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5cb24fae09", MAC:"5a:8a:fa:82:84:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:13.415726 containerd[1526]: 2025-01-13 21:22:13.410 [INFO][4348] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f" Namespace="calico-system" Pod="csi-node-driver-44f8k" WorkloadEndpoint="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:13.437513 containerd[1526]: time="2025-01-13T21:22:13.437277112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:13.437513 containerd[1526]: time="2025-01-13T21:22:13.437342272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:13.437513 containerd[1526]: time="2025-01-13T21:22:13.437354752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:13.437513 containerd[1526]: time="2025-01-13T21:22:13.437461233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:13.441612 systemd-networkd[1226]: calic7324264aca: Link UP Jan 13 21:22:13.442400 systemd-networkd[1226]: calic7324264aca: Gained carrier Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.317 [INFO][4360] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0 calico-apiserver-8f84b7485- calico-apiserver 89475a27-0916-4680-b302-fbf35e837e47 883 0 2025-01-13 21:21:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f84b7485 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f84b7485-fpvqh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7324264aca [] []}} ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-fpvqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.317 [INFO][4360] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-fpvqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.351 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" HandleID="k8s-pod-network.486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.364 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" HandleID="k8s-pod-network.486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aafe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8f84b7485-fpvqh", "timestamp":"2025-01-13 21:22:13.351906064 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.364 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.371 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
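The host-side interface names appearing here (calia5cb24fae09, calic7324264aca, and later calice738ad16a0) follow Calico's pattern of a "cali" prefix plus eleven hex characters, which keeps each name within the kernel's 15-character interface-name limit while staying stable for a given workload. A hypothetical derivation; the exact string Calico hashes is an assumption here:

package main

import (
	"crypto/sha1"
	"fmt"
)

// vethName builds a stable 15-character interface name: the "cali" prefix plus
// the first eleven hex characters of a hash of a workload key. The exact key
// Calico feeds into its hash is assumed here, not taken from its source.
func vethName(workloadKey string) string {
	sum := sha1.Sum([]byte(workloadKey))
	return "cali" + fmt.Sprintf("%x", sum)[:11]
}

func main() {
	fmt.Println(vethName("calico-system/csi-node-driver-44f8k"))
	fmt.Println(vethName("calico-apiserver/calico-apiserver-8f84b7485-fpvqh"))
}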
Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.371 [INFO][4379] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.373 [INFO][4379] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" host="localhost" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.381 [INFO][4379] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.411 [INFO][4379] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.415 [INFO][4379] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.419 [INFO][4379] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.419 [INFO][4379] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" host="localhost" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.421 [INFO][4379] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779 Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.428 [INFO][4379] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" host="localhost" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.435 [INFO][4379] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" host="localhost" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.435 [INFO][4379] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" host="localhost" Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.435 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
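Both workloads draw from the same affine block: 192.168.88.128/26 covers 2^(32-26) = 64 addresses (192.168.88.128 through .191), and the assignments come out sequentially — .129 for csi-node-driver-44f8k above, .130 for calico-apiserver-8f84b7485-fpvqh here. A one-line check of the block size:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, _ := net.ParseCIDR("192.168.88.128/26")
	ones, bits := block.Mask.Size()
	fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64: .128 through .191
}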
Jan 13 21:22:13.459147 containerd[1526]: 2025-01-13 21:22:13.435 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" HandleID="k8s-pod-network.486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.459729 containerd[1526]: 2025-01-13 21:22:13.438 [INFO][4360] cni-plugin/k8s.go 386: Populated endpoint ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-fpvqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0", GenerateName:"calico-apiserver-8f84b7485-", Namespace:"calico-apiserver", SelfLink:"", UID:"89475a27-0916-4680-b302-fbf35e837e47", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f84b7485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f84b7485-fpvqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7324264aca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:13.459729 containerd[1526]: 2025-01-13 21:22:13.438 [INFO][4360] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-fpvqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.459729 containerd[1526]: 2025-01-13 21:22:13.439 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7324264aca ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-fpvqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.459729 containerd[1526]: 2025-01-13 21:22:13.441 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-fpvqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.459729 containerd[1526]: 2025-01-13 21:22:13.442 [INFO][4360] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" 
Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-fpvqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0", GenerateName:"calico-apiserver-8f84b7485-", Namespace:"calico-apiserver", SelfLink:"", UID:"89475a27-0916-4680-b302-fbf35e837e47", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f84b7485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779", Pod:"calico-apiserver-8f84b7485-fpvqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7324264aca", MAC:"1e:f6:d8:69:74:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:13.459729 containerd[1526]: 2025-01-13 21:22:13.456 [INFO][4360] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-fpvqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:13.465878 systemd-resolved[1430]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:13.481125 containerd[1526]: time="2025-01-13T21:22:13.480841177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:13.481550 containerd[1526]: time="2025-01-13T21:22:13.481100417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:13.481814 containerd[1526]: time="2025-01-13T21:22:13.481746898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:13.483446 containerd[1526]: time="2025-01-13T21:22:13.482235538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:13.497122 containerd[1526]: time="2025-01-13T21:22:13.497079187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44f8k,Uid:b673efd0-dcd2-4e1c-9b65-6e14b085060d,Namespace:calico-system,Attempt:1,} returns sandbox id \"34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f\"" Jan 13 21:22:13.499810 containerd[1526]: time="2025-01-13T21:22:13.499529828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:22:13.510213 systemd-resolved[1430]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:13.529889 containerd[1526]: time="2025-01-13T21:22:13.529847285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f84b7485-fpvqh,Uid:89475a27-0916-4680-b302-fbf35e837e47,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779\"" Jan 13 21:22:14.001048 containerd[1526]: time="2025-01-13T21:22:14.001005394Z" level=info msg="StopPodSandbox for \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\"" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.051 [INFO][4527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.051 [INFO][4527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" iface="eth0" netns="/var/run/netns/cni-28749e71-dc43-1417-b9b1-daf1e729e5f8" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.051 [INFO][4527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" iface="eth0" netns="/var/run/netns/cni-28749e71-dc43-1417-b9b1-daf1e729e5f8" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.052 [INFO][4527] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" iface="eth0" netns="/var/run/netns/cni-28749e71-dc43-1417-b9b1-daf1e729e5f8" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.052 [INFO][4527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.052 [INFO][4527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.074 [INFO][4535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" HandleID="k8s-pod-network.6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.074 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.074 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.083 [WARNING][4535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" HandleID="k8s-pod-network.6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.083 [INFO][4535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" HandleID="k8s-pod-network.6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.085 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:14.088204 containerd[1526]: 2025-01-13 21:22:14.086 [INFO][4527] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:14.088958 containerd[1526]: time="2025-01-13T21:22:14.088354800Z" level=info msg="TearDown network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\" successfully" Jan 13 21:22:14.088958 containerd[1526]: time="2025-01-13T21:22:14.088382320Z" level=info msg="StopPodSandbox for \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\" returns successfully" Jan 13 21:22:14.089139 containerd[1526]: time="2025-01-13T21:22:14.089103481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856b764fb4-jwzq6,Uid:5e45114b-d853-433a-9798-af8f1f159ae2,Namespace:calico-system,Attempt:1,}" Jan 13 21:22:14.240091 systemd[1]: run-netns-cni\x2d28749e71\x2ddc43\x2d1417\x2db9b1\x2ddaf1e729e5f8.mount: Deactivated successfully. 
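The teardown above (StopPodSandbox → CNI DEL → "Releasing IP address(es)" → the WARNING "Asked to release address but it doesn't exist. Ignoring") shows that the release path is deliberately idempotent: a missing allocation is logged and skipped rather than failing the DEL. A minimal Go sketch of that pattern, not the Calico plugin's actual code, with a hypothetical in-memory store standing in for the datastore:

```go
// Sketch only: idempotent IP release, mirroring the "doesn't exist. Ignoring"
// behaviour in the log above. The store and error are hypothetical stand-ins.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found") // stand-in "no such allocation" error

// releaseByHandle stands in for "Releasing address using handleID".
func releaseByHandle(store map[string]string, handleID string) error {
	if _, ok := store[handleID]; !ok {
		return errNotFound
	}
	delete(store, handleID)
	return nil
}

func teardown(store map[string]string, handleID string) error {
	if err := releaseByHandle(store, handleID); err != nil {
		if errors.Is(err, errNotFound) {
			// Already released (or never assigned): warn and carry on, as above.
			fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring:", handleID)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	store := map[string]string{}
	// A repeated DEL for an already-released sandbox still returns success.
	_ = teardown(store, "k8s-pod-network.6a3d576e...")
}
```

This is why the sandbox teardown still reports "returns successfully" even though the address had already been released earlier.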
Jan 13 21:22:14.351327 systemd-networkd[1226]: calice738ad16a0: Link UP Jan 13 21:22:14.351580 systemd-networkd[1226]: calice738ad16a0: Gained carrier Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.277 [INFO][4543] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0 calico-kube-controllers-856b764fb4- calico-system 5e45114b-d853-433a-9798-af8f1f159ae2 896 0 2025-01-13 21:21:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:856b764fb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-856b764fb4-jwzq6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calice738ad16a0 [] []}} ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Namespace="calico-system" Pod="calico-kube-controllers-856b764fb4-jwzq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.278 [INFO][4543] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Namespace="calico-system" Pod="calico-kube-controllers-856b764fb4-jwzq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.305 [INFO][4556] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" HandleID="k8s-pod-network.3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.317 [INFO][4556] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" HandleID="k8s-pod-network.3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5de0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-856b764fb4-jwzq6", "timestamp":"2025-01-13 21:22:14.305465436 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.317 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.317 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.317 [INFO][4556] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.319 [INFO][4556] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" host="localhost" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.323 [INFO][4556] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.329 [INFO][4556] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.331 [INFO][4556] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.333 [INFO][4556] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.334 [INFO][4556] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" host="localhost" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.336 [INFO][4556] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242 Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.339 [INFO][4556] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" host="localhost" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.346 [INFO][4556] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" host="localhost" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.346 [INFO][4556] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" host="localhost" Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.346 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:22:14.368404 containerd[1526]: 2025-01-13 21:22:14.346 [INFO][4556] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" HandleID="k8s-pod-network.3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.369071 containerd[1526]: 2025-01-13 21:22:14.348 [INFO][4543] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Namespace="calico-system" Pod="calico-kube-controllers-856b764fb4-jwzq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0", GenerateName:"calico-kube-controllers-856b764fb4-", Namespace:"calico-system", SelfLink:"", UID:"5e45114b-d853-433a-9798-af8f1f159ae2", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856b764fb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-856b764fb4-jwzq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice738ad16a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:14.369071 containerd[1526]: 2025-01-13 21:22:14.348 [INFO][4543] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Namespace="calico-system" Pod="calico-kube-controllers-856b764fb4-jwzq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.369071 containerd[1526]: 2025-01-13 21:22:14.348 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice738ad16a0 ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Namespace="calico-system" Pod="calico-kube-controllers-856b764fb4-jwzq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.369071 containerd[1526]: 2025-01-13 21:22:14.350 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Namespace="calico-system" Pod="calico-kube-controllers-856b764fb4-jwzq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.369071 containerd[1526]: 2025-01-13 21:22:14.351 [INFO][4543] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Namespace="calico-system" Pod="calico-kube-controllers-856b764fb4-jwzq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0", GenerateName:"calico-kube-controllers-856b764fb4-", Namespace:"calico-system", SelfLink:"", UID:"5e45114b-d853-433a-9798-af8f1f159ae2", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856b764fb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242", Pod:"calico-kube-controllers-856b764fb4-jwzq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice738ad16a0", MAC:"72:53:08:dc:5e:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:14.369071 containerd[1526]: 2025-01-13 21:22:14.364 [INFO][4543] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242" Namespace="calico-system" Pod="calico-kube-controllers-856b764fb4-jwzq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:14.388326 containerd[1526]: time="2025-01-13T21:22:14.388234960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:14.389150 containerd[1526]: time="2025-01-13T21:22:14.388932001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:14.389703 containerd[1526]: time="2025-01-13T21:22:14.389400641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:14.392406 containerd[1526]: time="2025-01-13T21:22:14.390683682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:14.415799 systemd-resolved[1430]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:14.433550 containerd[1526]: time="2025-01-13T21:22:14.433508865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856b764fb4-jwzq6,Uid:5e45114b-d853-433a-9798-af8f1f159ae2,Namespace:calico-system,Attempt:1,} returns sandbox id \"3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242\"" Jan 13 21:22:14.452864 systemd-networkd[1226]: calic7324264aca: Gained IPv6LL Jan 13 21:22:14.517931 containerd[1526]: time="2025-01-13T21:22:14.517883870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:14.518370 containerd[1526]: time="2025-01-13T21:22:14.518339070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 21:22:14.519330 containerd[1526]: time="2025-01-13T21:22:14.519295310Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:14.521738 containerd[1526]: time="2025-01-13T21:22:14.521700912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:14.522556 containerd[1526]: time="2025-01-13T21:22:14.522518112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.022944604s" Jan 13 21:22:14.522595 containerd[1526]: time="2025-01-13T21:22:14.522553352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 21:22:14.523806 containerd[1526]: time="2025-01-13T21:22:14.523772713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:22:14.524715 containerd[1526]: time="2025-01-13T21:22:14.524685513Z" level=info msg="CreateContainer within sandbox \"34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:22:14.545102 containerd[1526]: time="2025-01-13T21:22:14.545047244Z" level=info msg="CreateContainer within sandbox \"34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"431c3298bd94dfa8a3422b4e68e225881a05a9c6b459c0979564d2c6ab62e84d\"" Jan 13 21:22:14.545686 containerd[1526]: time="2025-01-13T21:22:14.545663885Z" level=info msg="StartContainer for \"431c3298bd94dfa8a3422b4e68e225881a05a9c6b459c0979564d2c6ab62e84d\"" Jan 13 21:22:14.602830 containerd[1526]: time="2025-01-13T21:22:14.602713995Z" level=info msg="StartContainer for \"431c3298bd94dfa8a3422b4e68e225881a05a9c6b459c0979564d2c6ab62e84d\" returns successfully" Jan 13 21:22:14.771770 systemd-networkd[1226]: calia5cb24fae09: Gained IPv6LL Jan 13 21:22:15.001894 containerd[1526]: time="2025-01-13T21:22:15.001756608Z" level=info 
msg="StopPodSandbox for \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\"" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.046 [INFO][4670] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.047 [INFO][4670] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" iface="eth0" netns="/var/run/netns/cni-d591e10c-4cb3-c060-85b6-f5c307c3f845" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.047 [INFO][4670] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" iface="eth0" netns="/var/run/netns/cni-d591e10c-4cb3-c060-85b6-f5c307c3f845" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.047 [INFO][4670] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" iface="eth0" netns="/var/run/netns/cni-d591e10c-4cb3-c060-85b6-f5c307c3f845" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.047 [INFO][4670] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.047 [INFO][4670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.067 [INFO][4677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" HandleID="k8s-pod-network.a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.067 [INFO][4677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.067 [INFO][4677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.075 [WARNING][4677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" HandleID="k8s-pod-network.a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.076 [INFO][4677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" HandleID="k8s-pod-network.a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.077 [INFO][4677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:15.082402 containerd[1526]: 2025-01-13 21:22:15.079 [INFO][4670] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:15.082892 containerd[1526]: time="2025-01-13T21:22:15.082579169Z" level=info msg="TearDown network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\" successfully" Jan 13 21:22:15.082892 containerd[1526]: time="2025-01-13T21:22:15.082608649Z" level=info msg="StopPodSandbox for \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\" returns successfully" Jan 13 21:22:15.084518 containerd[1526]: time="2025-01-13T21:22:15.084394570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f84b7485-7wvv5,Uid:d0305d93-9c64-4c59-b1c0-353d135c78a7,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:22:15.205957 systemd-networkd[1226]: cali73165a3dbdb: Link UP Jan 13 21:22:15.206445 systemd-networkd[1226]: cali73165a3dbdb: Gained carrier Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.132 [INFO][4685] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0 calico-apiserver-8f84b7485- calico-apiserver d0305d93-9c64-4c59-b1c0-353d135c78a7 910 0 2025-01-13 21:21:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f84b7485 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f84b7485-7wvv5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali73165a3dbdb [] []}} ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-7wvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.132 [INFO][4685] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-7wvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.161 [INFO][4698] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" HandleID="k8s-pod-network.3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.172 [INFO][4698] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" HandleID="k8s-pod-network.3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000307c80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8f84b7485-7wvv5", "timestamp":"2025-01-13 21:22:15.161158088 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.172 [INFO][4698] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.172 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.172 [INFO][4698] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.174 [INFO][4698] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" host="localhost" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.178 [INFO][4698] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.183 [INFO][4698] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.186 [INFO][4698] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.188 [INFO][4698] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.188 [INFO][4698] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" host="localhost" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.190 [INFO][4698] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7 Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.194 [INFO][4698] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" host="localhost" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.200 [INFO][4698] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" host="localhost" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.200 [INFO][4698] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" host="localhost" Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.200 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:22:15.221852 containerd[1526]: 2025-01-13 21:22:15.200 [INFO][4698] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" HandleID="k8s-pod-network.3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.222614 containerd[1526]: 2025-01-13 21:22:15.203 [INFO][4685] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-7wvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0", GenerateName:"calico-apiserver-8f84b7485-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0305d93-9c64-4c59-b1c0-353d135c78a7", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f84b7485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f84b7485-7wvv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73165a3dbdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:15.222614 containerd[1526]: 2025-01-13 21:22:15.203 [INFO][4685] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-7wvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.222614 containerd[1526]: 2025-01-13 21:22:15.203 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73165a3dbdb ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-7wvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.222614 containerd[1526]: 2025-01-13 21:22:15.206 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-7wvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.222614 containerd[1526]: 2025-01-13 21:22:15.207 [INFO][4685] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" 
Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-7wvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0", GenerateName:"calico-apiserver-8f84b7485-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0305d93-9c64-4c59-b1c0-353d135c78a7", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f84b7485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7", Pod:"calico-apiserver-8f84b7485-7wvv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73165a3dbdb", MAC:"da:4c:d4:2e:2d:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:15.222614 containerd[1526]: 2025-01-13 21:22:15.219 [INFO][4685] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7" Namespace="calico-apiserver" Pod="calico-apiserver-8f84b7485-7wvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:15.241462 systemd[1]: run-netns-cni\x2dd591e10c\x2d4cb3\x2dc060\x2d85b6\x2df5c307c3f845.mount: Deactivated successfully. Jan 13 21:22:15.244008 containerd[1526]: time="2025-01-13T21:22:15.242985889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:15.244008 containerd[1526]: time="2025-01-13T21:22:15.243046329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:15.244008 containerd[1526]: time="2025-01-13T21:22:15.243066209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:15.244008 containerd[1526]: time="2025-01-13T21:22:15.243161569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:15.268167 systemd-resolved[1430]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:15.289562 containerd[1526]: time="2025-01-13T21:22:15.289518312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f84b7485-7wvv5,Uid:d0305d93-9c64-4c59-b1c0-353d135c78a7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7\"" Jan 13 21:22:15.539853 systemd-networkd[1226]: calice738ad16a0: Gained IPv6LL Jan 13 21:22:15.823961 systemd[1]: Started sshd@12-10.0.0.88:22-10.0.0.1:53774.service - OpenSSH per-connection server daemon (10.0.0.1:53774). Jan 13 21:22:15.862236 sshd[4759]: Accepted publickey for core from 10.0.0.1 port 53774 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:15.864038 sshd[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:15.868002 systemd-logind[1504]: New session 13 of user core. Jan 13 21:22:15.875084 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:22:16.001835 containerd[1526]: time="2025-01-13T21:22:16.001217269Z" level=info msg="StopPodSandbox for \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\"" Jan 13 21:22:16.001835 containerd[1526]: time="2025-01-13T21:22:16.001429629Z" level=info msg="StopPodSandbox for \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\"" Jan 13 21:22:16.052081 sshd[4759]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:16.061085 systemd[1]: Started sshd@13-10.0.0.88:22-10.0.0.1:53776.service - OpenSSH per-connection server daemon (10.0.0.1:53776). Jan 13 21:22:16.061669 systemd[1]: sshd@12-10.0.0.88:22-10.0.0.1:53774.service: Deactivated successfully. Jan 13 21:22:16.064227 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:22:16.065127 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:22:16.066325 systemd-logind[1504]: Removed session 13. Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.073 [INFO][4801] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.073 [INFO][4801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" iface="eth0" netns="/var/run/netns/cni-5815df88-1c74-e2aa-9fc2-bc690801ab85" Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.074 [INFO][4801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" iface="eth0" netns="/var/run/netns/cni-5815df88-1c74-e2aa-9fc2-bc690801ab85" Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.074 [INFO][4801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" iface="eth0" netns="/var/run/netns/cni-5815df88-1c74-e2aa-9fc2-bc690801ab85" Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.074 [INFO][4801] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.074 [INFO][4801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.099 [INFO][4822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" HandleID="k8s-pod-network.bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.099 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.099 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.107 [WARNING][4822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" HandleID="k8s-pod-network.bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.107 [INFO][4822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" HandleID="k8s-pod-network.bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.111 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:16.114761 containerd[1526]: 2025-01-13 21:22:16.112 [INFO][4801] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:16.118155 containerd[1526]: time="2025-01-13T21:22:16.114922042Z" level=info msg="TearDown network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\" successfully" Jan 13 21:22:16.118155 containerd[1526]: time="2025-01-13T21:22:16.114962522Z" level=info msg="StopPodSandbox for \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\" returns successfully" Jan 13 21:22:16.118155 containerd[1526]: time="2025-01-13T21:22:16.117050523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qhvr9,Uid:cc6bdfdd-cc3e-4a92-962e-c53a92f68c06,Namespace:kube-system,Attempt:1,}" Jan 13 21:22:16.117186 systemd[1]: run-netns-cni\x2d5815df88\x2d1c74\x2de2aa\x2d9fc2\x2dbc690801ab85.mount: Deactivated successfully. 
Jan 13 21:22:16.118290 kubelet[2703]: E0113 21:22:16.115381 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:16.119510 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 53776 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:16.120967 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:16.133903 systemd-logind[1504]: New session 14 of user core. Jan 13 21:22:16.144137 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.085 [INFO][4797] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.085 [INFO][4797] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" iface="eth0" netns="/var/run/netns/cni-3c0db46a-7ad1-690f-eeef-0ec2aa0fe91b" Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.086 [INFO][4797] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" iface="eth0" netns="/var/run/netns/cni-3c0db46a-7ad1-690f-eeef-0ec2aa0fe91b" Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.086 [INFO][4797] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" iface="eth0" netns="/var/run/netns/cni-3c0db46a-7ad1-690f-eeef-0ec2aa0fe91b" Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.086 [INFO][4797] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.086 [INFO][4797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.127 [INFO][4834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" HandleID="k8s-pod-network.2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.128 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.129 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.148 [WARNING][4834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" HandleID="k8s-pod-network.2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.148 [INFO][4834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" HandleID="k8s-pod-network.2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.151 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:16.155119 containerd[1526]: 2025-01-13 21:22:16.153 [INFO][4797] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:16.155505 containerd[1526]: time="2025-01-13T21:22:16.155205501Z" level=info msg="TearDown network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\" successfully" Jan 13 21:22:16.155505 containerd[1526]: time="2025-01-13T21:22:16.155235901Z" level=info msg="StopPodSandbox for \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\" returns successfully" Jan 13 21:22:16.155884 kubelet[2703]: E0113 21:22:16.155577 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:16.156999 containerd[1526]: time="2025-01-13T21:22:16.156171461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gfwhd,Uid:f7697a17-2ddc-4998-a5ff-c29dd1a74a22,Namespace:kube-system,Attempt:1,}" Jan 13 21:22:16.243160 systemd[1]: run-netns-cni\x2d3c0db46a\x2d7ad1\x2d690f\x2deeef\x2d0ec2aa0fe91b.mount: Deactivated successfully. 
Jan 13 21:22:16.337799 systemd-networkd[1226]: cali4752949b2b6: Link UP Jan 13 21:22:16.338356 systemd-networkd[1226]: cali4752949b2b6: Gained carrier Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.224 [INFO][4860] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--gfwhd-eth0 coredns-76f75df574- kube-system f7697a17-2ddc-4998-a5ff-c29dd1a74a22 923 0 2025-01-13 21:21:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-gfwhd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4752949b2b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Namespace="kube-system" Pod="coredns-76f75df574-gfwhd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gfwhd-" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.224 [INFO][4860] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Namespace="kube-system" Pod="coredns-76f75df574-gfwhd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.270 [INFO][4883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" HandleID="k8s-pod-network.de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.291 [INFO][4883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" HandleID="k8s-pod-network.de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321370), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-gfwhd", "timestamp":"2025-01-13 21:22:16.270104315 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.291 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.291 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.291 [INFO][4883] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.295 [INFO][4883] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" host="localhost" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.301 [INFO][4883] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.307 [INFO][4883] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.311 [INFO][4883] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.314 [INFO][4883] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.314 [INFO][4883] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" host="localhost" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.316 [INFO][4883] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95 Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.322 [INFO][4883] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" host="localhost" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.328 [INFO][4883] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" host="localhost" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.329 [INFO][4883] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" host="localhost" Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.329 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:22:16.357777 containerd[1526]: 2025-01-13 21:22:16.329 [INFO][4883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" HandleID="k8s-pod-network.de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.358586 containerd[1526]: 2025-01-13 21:22:16.334 [INFO][4860] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Namespace="kube-system" Pod="coredns-76f75df574-gfwhd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gfwhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gfwhd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f7697a17-2ddc-4998-a5ff-c29dd1a74a22", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-gfwhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4752949b2b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:16.358586 containerd[1526]: 2025-01-13 21:22:16.335 [INFO][4860] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Namespace="kube-system" Pod="coredns-76f75df574-gfwhd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.358586 containerd[1526]: 2025-01-13 21:22:16.335 [INFO][4860] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4752949b2b6 ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Namespace="kube-system" Pod="coredns-76f75df574-gfwhd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.358586 containerd[1526]: 2025-01-13 21:22:16.338 [INFO][4860] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Namespace="kube-system" Pod="coredns-76f75df574-gfwhd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.358586 containerd[1526]: 2025-01-13 21:22:16.339 
[INFO][4860] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Namespace="kube-system" Pod="coredns-76f75df574-gfwhd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gfwhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gfwhd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f7697a17-2ddc-4998-a5ff-c29dd1a74a22", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95", Pod:"coredns-76f75df574-gfwhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4752949b2b6", MAC:"f6:46:03:3f:d7:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:16.358586 containerd[1526]: 2025-01-13 21:22:16.354 [INFO][4860] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95" Namespace="kube-system" Pod="coredns-76f75df574-gfwhd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:16.395825 systemd-networkd[1226]: cali53c7caafa83: Link UP Jan 13 21:22:16.397929 systemd-networkd[1226]: cali53c7caafa83: Gained carrier Jan 13 21:22:16.417868 containerd[1526]: time="2025-01-13T21:22:16.417735624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:16.417868 containerd[1526]: time="2025-01-13T21:22:16.417800504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:16.417868 containerd[1526]: time="2025-01-13T21:22:16.417816544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:16.418392 containerd[1526]: time="2025-01-13T21:22:16.417909384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.207 [INFO][4853] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--qhvr9-eth0 coredns-76f75df574- kube-system cc6bdfdd-cc3e-4a92-962e-c53a92f68c06 922 0 2025-01-13 21:21:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-qhvr9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali53c7caafa83 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Namespace="kube-system" Pod="coredns-76f75df574-qhvr9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qhvr9-" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.208 [INFO][4853] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Namespace="kube-system" Pod="coredns-76f75df574-qhvr9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.278 [INFO][4878] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" HandleID="k8s-pod-network.8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.294 [INFO][4878] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" HandleID="k8s-pod-network.8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ab330), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-qhvr9", "timestamp":"2025-01-13 21:22:16.278817199 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.294 [INFO][4878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.329 [INFO][4878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.329 [INFO][4878] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.332 [INFO][4878] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" host="localhost" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.340 [INFO][4878] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.349 [INFO][4878] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.354 [INFO][4878] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.358 [INFO][4878] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.358 [INFO][4878] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" host="localhost" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.361 [INFO][4878] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30 Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.369 [INFO][4878] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" host="localhost" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.386 [INFO][4878] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" host="localhost" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.386 [INFO][4878] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" host="localhost" Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.386 [INFO][4878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:22:16.425547 containerd[1526]: 2025-01-13 21:22:16.386 [INFO][4878] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" HandleID="k8s-pod-network.8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.426436 containerd[1526]: 2025-01-13 21:22:16.392 [INFO][4853] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Namespace="kube-system" Pod="coredns-76f75df574-qhvr9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qhvr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qhvr9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cc6bdfdd-cc3e-4a92-962e-c53a92f68c06", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-qhvr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53c7caafa83", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:16.426436 containerd[1526]: 2025-01-13 21:22:16.392 [INFO][4853] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Namespace="kube-system" Pod="coredns-76f75df574-qhvr9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.426436 containerd[1526]: 2025-01-13 21:22:16.392 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53c7caafa83 ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Namespace="kube-system" Pod="coredns-76f75df574-qhvr9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.426436 containerd[1526]: 2025-01-13 21:22:16.395 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Namespace="kube-system" Pod="coredns-76f75df574-qhvr9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.426436 containerd[1526]: 2025-01-13 21:22:16.399 
[INFO][4853] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Namespace="kube-system" Pod="coredns-76f75df574-qhvr9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qhvr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qhvr9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cc6bdfdd-cc3e-4a92-962e-c53a92f68c06", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30", Pod:"coredns-76f75df574-qhvr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53c7caafa83", MAC:"42:67:3e:8a:2d:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:16.426436 containerd[1526]: 2025-01-13 21:22:16.414 [INFO][4853] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30" Namespace="kube-system" Pod="coredns-76f75df574-qhvr9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:16.469465 sshd[4818]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:16.473131 systemd-resolved[1430]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:16.479557 containerd[1526]: time="2025-01-13T21:22:16.479454653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:16.479557 containerd[1526]: time="2025-01-13T21:22:16.479509693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:16.479557 containerd[1526]: time="2025-01-13T21:22:16.479521013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:16.479864 containerd[1526]: time="2025-01-13T21:22:16.479604253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:16.482003 systemd[1]: Started sshd@14-10.0.0.88:22-10.0.0.1:53786.service - OpenSSH per-connection server daemon (10.0.0.1:53786). Jan 13 21:22:16.487776 systemd[1]: sshd@13-10.0.0.88:22-10.0.0.1:53776.service: Deactivated successfully. Jan 13 21:22:16.490220 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:22:16.493741 systemd-logind[1504]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:22:16.495305 systemd-logind[1504]: Removed session 14. Jan 13 21:22:16.513959 containerd[1526]: time="2025-01-13T21:22:16.513109749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gfwhd,Uid:f7697a17-2ddc-4998-a5ff-c29dd1a74a22,Namespace:kube-system,Attempt:1,} returns sandbox id \"de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95\"" Jan 13 21:22:16.517361 kubelet[2703]: E0113 21:22:16.516106 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:16.521132 containerd[1526]: time="2025-01-13T21:22:16.521076793Z" level=info msg="CreateContainer within sandbox \"de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:22:16.529457 systemd-resolved[1430]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:16.549396 sshd[4976]: Accepted publickey for core from 10.0.0.1 port 53786 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:16.551458 sshd[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:16.554691 containerd[1526]: time="2025-01-13T21:22:16.554421368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qhvr9,Uid:cc6bdfdd-cc3e-4a92-962e-c53a92f68c06,Namespace:kube-system,Attempt:1,} returns sandbox id \"8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30\"" Jan 13 21:22:16.556875 kubelet[2703]: E0113 21:22:16.556843 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:16.557377 systemd-logind[1504]: New session 15 of user core. Jan 13 21:22:16.560897 containerd[1526]: time="2025-01-13T21:22:16.560135811Z" level=info msg="CreateContainer within sandbox \"8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:22:16.565065 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 13 21:22:16.588969 containerd[1526]: time="2025-01-13T21:22:16.588910865Z" level=info msg="CreateContainer within sandbox \"de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d16a17a51f5088f86d675bc95eb7532a050acb4b7097e9f71bc089eb63045c2a\"" Jan 13 21:22:16.589798 containerd[1526]: time="2025-01-13T21:22:16.589435865Z" level=info msg="StartContainer for \"d16a17a51f5088f86d675bc95eb7532a050acb4b7097e9f71bc089eb63045c2a\"" Jan 13 21:22:16.595394 containerd[1526]: time="2025-01-13T21:22:16.595348348Z" level=info msg="CreateContainer within sandbox \"8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f0de98e06c11f1d5ad414b2360c3428712c7aa7311edc3463606e96fc1c0463\"" Jan 13 21:22:16.596660 containerd[1526]: time="2025-01-13T21:22:16.596315588Z" level=info msg="StartContainer for \"9f0de98e06c11f1d5ad414b2360c3428712c7aa7311edc3463606e96fc1c0463\"" Jan 13 21:22:16.692462 containerd[1526]: time="2025-01-13T21:22:16.692216433Z" level=info msg="StartContainer for \"d16a17a51f5088f86d675bc95eb7532a050acb4b7097e9f71bc089eb63045c2a\" returns successfully" Jan 13 21:22:16.692462 containerd[1526]: time="2025-01-13T21:22:16.692223593Z" level=info msg="StartContainer for \"9f0de98e06c11f1d5ad414b2360c3428712c7aa7311edc3463606e96fc1c0463\" returns successfully" Jan 13 21:22:16.977875 containerd[1526]: time="2025-01-13T21:22:16.977735647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:16.979419 containerd[1526]: time="2025-01-13T21:22:16.979324168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 13 21:22:16.981361 containerd[1526]: time="2025-01-13T21:22:16.980310248Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:16.982582 containerd[1526]: time="2025-01-13T21:22:16.982540729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:16.983526 containerd[1526]: time="2025-01-13T21:22:16.983495410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.459680497s" Jan 13 21:22:16.983629 containerd[1526]: time="2025-01-13T21:22:16.983613130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 21:22:16.984286 containerd[1526]: time="2025-01-13T21:22:16.984230770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:22:16.986151 containerd[1526]: time="2025-01-13T21:22:16.986117931Z" level=info msg="CreateContainer within sandbox \"486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 
21:22:16.993963 containerd[1526]: time="2025-01-13T21:22:16.993917895Z" level=info msg="CreateContainer within sandbox \"486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6678226c75649acd17c412700d0496967a6f9f50c848ef80d7d8c8e50a097cc0\"" Jan 13 21:22:16.995355 containerd[1526]: time="2025-01-13T21:22:16.994790535Z" level=info msg="StartContainer for \"6678226c75649acd17c412700d0496967a6f9f50c848ef80d7d8c8e50a097cc0\"" Jan 13 21:22:17.011811 systemd-networkd[1226]: cali73165a3dbdb: Gained IPv6LL Jan 13 21:22:17.064767 containerd[1526]: time="2025-01-13T21:22:17.064604526Z" level=info msg="StartContainer for \"6678226c75649acd17c412700d0496967a6f9f50c848ef80d7d8c8e50a097cc0\" returns successfully" Jan 13 21:22:17.196682 kubelet[2703]: E0113 21:22:17.194855 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:17.205675 kubelet[2703]: E0113 21:22:17.204444 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:17.245981 systemd[1]: run-containerd-runc-k8s.io-8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30-runc.ZIPR9l.mount: Deactivated successfully. Jan 13 21:22:17.265804 kubelet[2703]: I0113 21:22:17.263209 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8f84b7485-fpvqh" podStartSLOduration=20.810273449 podStartE2EDuration="24.263161693s" podCreationTimestamp="2025-01-13 21:21:53 +0000 UTC" firstStartedPulling="2025-01-13 21:22:13.531149246 +0000 UTC m=+44.625936361" lastFinishedPulling="2025-01-13 21:22:16.98403749 +0000 UTC m=+48.078824605" observedRunningTime="2025-01-13 21:22:17.262912373 +0000 UTC m=+48.357699488" watchObservedRunningTime="2025-01-13 21:22:17.263161693 +0000 UTC m=+48.357948808" Jan 13 21:22:17.309132 kubelet[2703]: I0113 21:22:17.308525 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qhvr9" podStartSLOduration=34.308128753 podStartE2EDuration="34.308128753s" podCreationTimestamp="2025-01-13 21:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:17.307998313 +0000 UTC m=+48.402785428" watchObservedRunningTime="2025-01-13 21:22:17.308128753 +0000 UTC m=+48.402915908" Jan 13 21:22:17.310979 kubelet[2703]: I0113 21:22:17.310941 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gfwhd" podStartSLOduration=34.310897354 podStartE2EDuration="34.310897354s" podCreationTimestamp="2025-01-13 21:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:17.282876462 +0000 UTC m=+48.377663577" watchObservedRunningTime="2025-01-13 21:22:17.310897354 +0000 UTC m=+48.405684429" Jan 13 21:22:17.523758 systemd-networkd[1226]: cali4752949b2b6: Gained IPv6LL Jan 13 21:22:18.163765 systemd-networkd[1226]: cali53c7caafa83: Gained IPv6LL Jan 13 21:22:18.207529 kubelet[2703]: I0113 21:22:18.207493 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:18.208454 kubelet[2703]: E0113 
21:22:18.208428 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:18.210212 kubelet[2703]: E0113 21:22:18.210026 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:18.325038 sshd[4976]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:18.334965 systemd[1]: Started sshd@15-10.0.0.88:22-10.0.0.1:53790.service - OpenSSH per-connection server daemon (10.0.0.1:53790). Jan 13 21:22:18.348270 systemd[1]: sshd@14-10.0.0.88:22-10.0.0.1:53786.service: Deactivated successfully. Jan 13 21:22:18.351219 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:22:18.351350 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:22:18.353576 systemd-logind[1504]: Removed session 15. Jan 13 21:22:18.382938 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 53790 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:18.383961 sshd[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:18.389243 systemd-logind[1504]: New session 16 of user core. Jan 13 21:22:18.395538 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:22:18.725835 containerd[1526]: time="2025-01-13T21:22:18.725788197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:18.727258 containerd[1526]: time="2025-01-13T21:22:18.727213398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 13 21:22:18.727938 containerd[1526]: time="2025-01-13T21:22:18.727915758Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:18.730113 containerd[1526]: time="2025-01-13T21:22:18.730079679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:18.730970 containerd[1526]: time="2025-01-13T21:22:18.730933119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.746664789s" Jan 13 21:22:18.730970 containerd[1526]: time="2025-01-13T21:22:18.730969399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 13 21:22:18.731889 containerd[1526]: time="2025-01-13T21:22:18.731706000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:22:18.745955 containerd[1526]: time="2025-01-13T21:22:18.745393485Z" level=info msg="CreateContainer within sandbox \"3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:22:18.763675 containerd[1526]: time="2025-01-13T21:22:18.763396133Z" level=info msg="CreateContainer within sandbox \"3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a47735cb7bda8cd4792b99873171747a36a0e76a3893f3402791c93b5575aed3\"" Jan 13 21:22:18.764459 containerd[1526]: time="2025-01-13T21:22:18.764434653Z" level=info msg="StartContainer for \"a47735cb7bda8cd4792b99873171747a36a0e76a3893f3402791c93b5575aed3\"" Jan 13 21:22:18.816110 sshd[5165]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:18.826189 systemd[1]: Started sshd@16-10.0.0.88:22-10.0.0.1:53796.service - OpenSSH per-connection server daemon (10.0.0.1:53796). Jan 13 21:22:18.826563 systemd[1]: sshd@15-10.0.0.88:22-10.0.0.1:53790.service: Deactivated successfully. Jan 13 21:22:18.830286 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:22:18.830393 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:22:18.833555 systemd-logind[1504]: Removed session 16. Jan 13 21:22:18.856563 containerd[1526]: time="2025-01-13T21:22:18.856513971Z" level=info msg="StartContainer for \"a47735cb7bda8cd4792b99873171747a36a0e76a3893f3402791c93b5575aed3\" returns successfully" Jan 13 21:22:18.877048 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 53796 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:18.878463 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:18.883622 systemd-logind[1504]: New session 17 of user core. Jan 13 21:22:18.887862 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:22:19.049067 sshd[5210]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:19.053906 systemd[1]: sshd@16-10.0.0.88:22-10.0.0.1:53796.service: Deactivated successfully. Jan 13 21:22:19.057023 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:22:19.057797 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:22:19.059083 systemd-logind[1504]: Removed session 17. 
Jan 13 21:22:19.210719 kubelet[2703]: E0113 21:22:19.210677 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:19.211613 kubelet[2703]: E0113 21:22:19.211497 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:19.224779 kubelet[2703]: I0113 21:22:19.224743 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-856b764fb4-jwzq6" podStartSLOduration=22.928297103 podStartE2EDuration="27.224700517s" podCreationTimestamp="2025-01-13 21:21:52 +0000 UTC" firstStartedPulling="2025-01-13 21:22:14.434867385 +0000 UTC m=+45.529654500" lastFinishedPulling="2025-01-13 21:22:18.731270839 +0000 UTC m=+49.826057914" observedRunningTime="2025-01-13 21:22:19.222916157 +0000 UTC m=+50.317703312" watchObservedRunningTime="2025-01-13 21:22:19.224700517 +0000 UTC m=+50.319487632" Jan 13 21:22:20.125907 containerd[1526]: time="2025-01-13T21:22:20.125826623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:20.127197 containerd[1526]: time="2025-01-13T21:22:20.127155983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 13 21:22:20.128102 containerd[1526]: time="2025-01-13T21:22:20.128061824Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:20.130572 containerd[1526]: time="2025-01-13T21:22:20.130536504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:20.131502 containerd[1526]: time="2025-01-13T21:22:20.131464225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.399723785s" Jan 13 21:22:20.131538 containerd[1526]: time="2025-01-13T21:22:20.131504945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 13 21:22:20.132589 containerd[1526]: time="2025-01-13T21:22:20.132556785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:22:20.133271 containerd[1526]: time="2025-01-13T21:22:20.133234705Z" level=info msg="CreateContainer within sandbox \"34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:22:20.149416 containerd[1526]: time="2025-01-13T21:22:20.149366631Z" level=info msg="CreateContainer within sandbox \"34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns 
container id \"09eac4e2974900d2a97da03d6863b8d036b0bbfb79ecaa7b9e8a1c08b0b2aba7\"" Jan 13 21:22:20.150266 containerd[1526]: time="2025-01-13T21:22:20.150188912Z" level=info msg="StartContainer for \"09eac4e2974900d2a97da03d6863b8d036b0bbfb79ecaa7b9e8a1c08b0b2aba7\"" Jan 13 21:22:20.200846 containerd[1526]: time="2025-01-13T21:22:20.200796850Z" level=info msg="StartContainer for \"09eac4e2974900d2a97da03d6863b8d036b0bbfb79ecaa7b9e8a1c08b0b2aba7\" returns successfully" Jan 13 21:22:20.232894 kubelet[2703]: I0113 21:22:20.232859 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-44f8k" podStartSLOduration=21.599697984 podStartE2EDuration="28.232819942s" podCreationTimestamp="2025-01-13 21:21:52 +0000 UTC" firstStartedPulling="2025-01-13 21:22:13.498806987 +0000 UTC m=+44.593594062" lastFinishedPulling="2025-01-13 21:22:20.131928945 +0000 UTC m=+51.226716020" observedRunningTime="2025-01-13 21:22:20.230645861 +0000 UTC m=+51.325432936" watchObservedRunningTime="2025-01-13 21:22:20.232819942 +0000 UTC m=+51.327607057" Jan 13 21:22:20.391094 containerd[1526]: time="2025-01-13T21:22:20.390965719Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:20.392773 containerd[1526]: time="2025-01-13T21:22:20.392723760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 21:22:20.394589 containerd[1526]: time="2025-01-13T21:22:20.394543760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 261.947175ms" Jan 13 21:22:20.394589 containerd[1526]: time="2025-01-13T21:22:20.394596840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 21:22:20.399383 containerd[1526]: time="2025-01-13T21:22:20.398666962Z" level=info msg="CreateContainer within sandbox \"3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:22:20.408603 containerd[1526]: time="2025-01-13T21:22:20.408550085Z" level=info msg="CreateContainer within sandbox \"3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b5ad169aeb1f6f0dc3460598266f4898ed126c178e9ae9706c17cc6cc2ea8a5\"" Jan 13 21:22:20.409315 containerd[1526]: time="2025-01-13T21:22:20.409281726Z" level=info msg="StartContainer for \"5b5ad169aeb1f6f0dc3460598266f4898ed126c178e9ae9706c17cc6cc2ea8a5\"" Jan 13 21:22:20.480413 containerd[1526]: time="2025-01-13T21:22:20.480361191Z" level=info msg="StartContainer for \"5b5ad169aeb1f6f0dc3460598266f4898ed126c178e9ae9706c17cc6cc2ea8a5\" returns successfully" Jan 13 21:22:21.105284 kubelet[2703]: I0113 21:22:21.105237 2703 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:22:21.108312 kubelet[2703]: I0113 21:22:21.108287 2703 csi_plugin.go:112] kubernetes.io/csi: Register new 
plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:22:21.236219 kubelet[2703]: I0113 21:22:21.236187 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8f84b7485-7wvv5" podStartSLOduration=23.131426692 podStartE2EDuration="28.23608026s" podCreationTimestamp="2025-01-13 21:21:53 +0000 UTC" firstStartedPulling="2025-01-13 21:22:15.290966353 +0000 UTC m=+46.385753468" lastFinishedPulling="2025-01-13 21:22:20.395619961 +0000 UTC m=+51.490407036" observedRunningTime="2025-01-13 21:22:21.2347797 +0000 UTC m=+52.329566895" watchObservedRunningTime="2025-01-13 21:22:21.23608026 +0000 UTC m=+52.330867375" Jan 13 21:22:22.226737 kubelet[2703]: I0113 21:22:22.225412 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:22.999778 kubelet[2703]: I0113 21:22:22.999627 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:24.062582 systemd[1]: Started sshd@17-10.0.0.88:22-10.0.0.1:52210.service - OpenSSH per-connection server daemon (10.0.0.1:52210). Jan 13 21:22:24.110333 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 52210 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:24.112006 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:24.117802 systemd-logind[1504]: New session 18 of user core. Jan 13 21:22:24.121920 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:22:24.301080 sshd[5348]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:24.304584 systemd[1]: sshd@17-10.0.0.88:22-10.0.0.1:52210.service: Deactivated successfully. Jan 13 21:22:24.306745 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:22:24.307077 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:22:24.308198 systemd-logind[1504]: Removed session 18. Jan 13 21:22:28.984132 containerd[1526]: time="2025-01-13T21:22:28.984085690Z" level=info msg="StopPodSandbox for \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\"" Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.024 [WARNING][5387] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0", GenerateName:"calico-apiserver-8f84b7485-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0305d93-9c64-4c59-b1c0-353d135c78a7", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f84b7485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7", Pod:"calico-apiserver-8f84b7485-7wvv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73165a3dbdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.024 [INFO][5387] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.024 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" iface="eth0" netns="" Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.024 [INFO][5387] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.024 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.045 [INFO][5396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" HandleID="k8s-pod-network.a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.045 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.045 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.053 [WARNING][5396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" HandleID="k8s-pod-network.a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.053 [INFO][5396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" HandleID="k8s-pod-network.a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.054 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.063477 containerd[1526]: 2025-01-13 21:22:29.058 [INFO][5387] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:29.063887 containerd[1526]: time="2025-01-13T21:22:29.063512107Z" level=info msg="TearDown network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\" successfully" Jan 13 21:22:29.063887 containerd[1526]: time="2025-01-13T21:22:29.063537507Z" level=info msg="StopPodSandbox for \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\" returns successfully" Jan 13 21:22:29.064173 containerd[1526]: time="2025-01-13T21:22:29.064135987Z" level=info msg="RemovePodSandbox for \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\"" Jan 13 21:22:29.068780 containerd[1526]: time="2025-01-13T21:22:29.068730428Z" level=info msg="Forcibly stopping sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\"" Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.103 [WARNING][5418] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0", GenerateName:"calico-apiserver-8f84b7485-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0305d93-9c64-4c59-b1c0-353d135c78a7", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f84b7485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3aa4eb3e218392d9d6bdfd0e044d7248e99b82b143f051f73798129f7ade3fa7", Pod:"calico-apiserver-8f84b7485-7wvv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73165a3dbdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.103 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.103 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" iface="eth0" netns="" Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.103 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.103 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.122 [INFO][5426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" HandleID="k8s-pod-network.a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.122 [INFO][5426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.123 [INFO][5426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.130 [WARNING][5426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" HandleID="k8s-pod-network.a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.130 [INFO][5426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" HandleID="k8s-pod-network.a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Workload="localhost-k8s-calico--apiserver--8f84b7485--7wvv5-eth0" Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.131 [INFO][5426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.135400 containerd[1526]: 2025-01-13 21:22:29.133 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c" Jan 13 21:22:29.135818 containerd[1526]: time="2025-01-13T21:22:29.135447161Z" level=info msg="TearDown network for sandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\" successfully" Jan 13 21:22:29.159583 containerd[1526]: time="2025-01-13T21:22:29.159491766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:29.159704 containerd[1526]: time="2025-01-13T21:22:29.159610646Z" level=info msg="RemovePodSandbox \"a6bfdda17a903ad4cd8f15f55b70c644a026f93c823177e71ce696d5bc27df4c\" returns successfully" Jan 13 21:22:29.160148 containerd[1526]: time="2025-01-13T21:22:29.160126126Z" level=info msg="StopPodSandbox for \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\"" Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.194 [WARNING][5448] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0", GenerateName:"calico-kube-controllers-856b764fb4-", Namespace:"calico-system", SelfLink:"", UID:"5e45114b-d853-433a-9798-af8f1f159ae2", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856b764fb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242", Pod:"calico-kube-controllers-856b764fb4-jwzq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice738ad16a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.195 [INFO][5448] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.195 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" iface="eth0" netns="" Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.195 [INFO][5448] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.195 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.222 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" HandleID="k8s-pod-network.6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.223 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.223 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.231 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" HandleID="k8s-pod-network.6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.231 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" HandleID="k8s-pod-network.6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.232 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.236266 containerd[1526]: 2025-01-13 21:22:29.234 [INFO][5448] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:29.236266 containerd[1526]: time="2025-01-13T21:22:29.236254182Z" level=info msg="TearDown network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\" successfully" Jan 13 21:22:29.236687 containerd[1526]: time="2025-01-13T21:22:29.236278182Z" level=info msg="StopPodSandbox for \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\" returns successfully" Jan 13 21:22:29.236732 containerd[1526]: time="2025-01-13T21:22:29.236701662Z" level=info msg="RemovePodSandbox for \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\"" Jan 13 21:22:29.236758 containerd[1526]: time="2025-01-13T21:22:29.236738022Z" level=info msg="Forcibly stopping sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\"" Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.269 [WARNING][5476] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0", GenerateName:"calico-kube-controllers-856b764fb4-", Namespace:"calico-system", SelfLink:"", UID:"5e45114b-d853-433a-9798-af8f1f159ae2", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856b764fb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3142057fb1859a2e22b1408f52b0501cc7b75ab7541a901ed8ac5ffff20ea242", Pod:"calico-kube-controllers-856b764fb4-jwzq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice738ad16a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.269 [INFO][5476] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.269 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" iface="eth0" netns="" Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.269 [INFO][5476] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.269 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.290 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" HandleID="k8s-pod-network.6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.290 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.290 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.298 [WARNING][5484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" HandleID="k8s-pod-network.6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.298 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" HandleID="k8s-pod-network.6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Workload="localhost-k8s-calico--kube--controllers--856b764fb4--jwzq6-eth0" Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.300 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.304781 containerd[1526]: 2025-01-13 21:22:29.301 [INFO][5476] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312" Jan 13 21:22:29.305177 containerd[1526]: time="2025-01-13T21:22:29.304815716Z" level=info msg="TearDown network for sandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\" successfully" Jan 13 21:22:29.309084 containerd[1526]: time="2025-01-13T21:22:29.308944676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:29.309084 containerd[1526]: time="2025-01-13T21:22:29.308999916Z" level=info msg="RemovePodSandbox \"6a3d576eb1ffaf6b32edd4e5a9b3dc570fc6637b5cbfa2802b01f2623920b312\" returns successfully" Jan 13 21:22:29.309460 containerd[1526]: time="2025-01-13T21:22:29.309424356Z" level=info msg="StopPodSandbox for \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\"" Jan 13 21:22:29.311879 systemd[1]: Started sshd@18-10.0.0.88:22-10.0.0.1:52216.service - OpenSSH per-connection server daemon (10.0.0.1:52216). Jan 13 21:22:29.347833 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 52216 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:29.348930 sshd[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:29.353877 systemd-logind[1504]: New session 19 of user core. Jan 13 21:22:29.357906 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.345 [WARNING][5508] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--44f8k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b673efd0-dcd2-4e1c-9b65-6e14b085060d", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f", Pod:"csi-node-driver-44f8k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5cb24fae09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.345 [INFO][5508] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.345 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" iface="eth0" netns="" Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.345 [INFO][5508] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.345 [INFO][5508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.374 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" HandleID="k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.374 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.374 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.382 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" HandleID="k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.382 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" HandleID="k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.384 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.387334 containerd[1526]: 2025-01-13 21:22:29.385 [INFO][5508] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:29.387808 containerd[1526]: time="2025-01-13T21:22:29.387373772Z" level=info msg="TearDown network for sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\" successfully" Jan 13 21:22:29.387808 containerd[1526]: time="2025-01-13T21:22:29.387399132Z" level=info msg="StopPodSandbox for \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\" returns successfully" Jan 13 21:22:29.387949 containerd[1526]: time="2025-01-13T21:22:29.387907172Z" level=info msg="RemovePodSandbox for \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\"" Jan 13 21:22:29.387949 containerd[1526]: time="2025-01-13T21:22:29.387944412Z" level=info msg="Forcibly stopping sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\"" Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.424 [WARNING][5541] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--44f8k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b673efd0-dcd2-4e1c-9b65-6e14b085060d", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34a74f7100286d77f99e0cf666a7abd1c1327cdfbd079ccd1881cc884b39f49f", Pod:"csi-node-driver-44f8k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5cb24fae09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.425 [INFO][5541] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.425 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" iface="eth0" netns="" Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.425 [INFO][5541] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.425 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.445 [INFO][5557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" HandleID="k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.445 [INFO][5557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.445 [INFO][5557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.454 [WARNING][5557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" HandleID="k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.454 [INFO][5557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" HandleID="k8s-pod-network.0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Workload="localhost-k8s-csi--node--driver--44f8k-eth0" Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.456 [INFO][5557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.461455 containerd[1526]: 2025-01-13 21:22:29.459 [INFO][5541] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41" Jan 13 21:22:29.461974 containerd[1526]: time="2025-01-13T21:22:29.461485507Z" level=info msg="TearDown network for sandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\" successfully" Jan 13 21:22:29.464411 containerd[1526]: time="2025-01-13T21:22:29.464368948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:29.464487 containerd[1526]: time="2025-01-13T21:22:29.464424988Z" level=info msg="RemovePodSandbox \"0d15c942cace17ce5e7f4c922a1fbdc8d1e74ed4af31f3d9f4bc68ddbb042c41\" returns successfully" Jan 13 21:22:29.464976 containerd[1526]: time="2025-01-13T21:22:29.464948108Z" level=info msg="StopPodSandbox for \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\"" Jan 13 21:22:29.515372 sshd[5492]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:29.520373 systemd[1]: sshd@18-10.0.0.88:22-10.0.0.1:52216.service: Deactivated successfully. Jan 13 21:22:29.523872 systemd-logind[1504]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:22:29.524045 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:22:29.524999 systemd-logind[1504]: Removed session 19. Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.504 [WARNING][5580] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gfwhd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f7697a17-2ddc-4998-a5ff-c29dd1a74a22", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95", Pod:"coredns-76f75df574-gfwhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4752949b2b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.505 [INFO][5580] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.505 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" iface="eth0" netns="" Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.505 [INFO][5580] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.505 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.532 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" HandleID="k8s-pod-network.2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.533 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.533 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.542 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" HandleID="k8s-pod-network.2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.542 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" HandleID="k8s-pod-network.2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.543 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.547369 containerd[1526]: 2025-01-13 21:22:29.545 [INFO][5580] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:29.547822 containerd[1526]: time="2025-01-13T21:22:29.547389685Z" level=info msg="TearDown network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\" successfully" Jan 13 21:22:29.547822 containerd[1526]: time="2025-01-13T21:22:29.547414205Z" level=info msg="StopPodSandbox for \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\" returns successfully" Jan 13 21:22:29.548027 containerd[1526]: time="2025-01-13T21:22:29.547986005Z" level=info msg="RemovePodSandbox for \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\"" Jan 13 21:22:29.548027 containerd[1526]: time="2025-01-13T21:22:29.548022525Z" level=info msg="Forcibly stopping sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\"" Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.582 [WARNING][5612] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gfwhd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f7697a17-2ddc-4998-a5ff-c29dd1a74a22", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de152add7183bdd6b59db8bd883fa832f88493b8f5b55675694382b5d680dc95", Pod:"coredns-76f75df574-gfwhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4752949b2b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.582 [INFO][5612] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.582 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" iface="eth0" netns="" Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.582 [INFO][5612] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.582 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.604 [INFO][5620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" HandleID="k8s-pod-network.2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.604 [INFO][5620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.604 [INFO][5620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.613 [WARNING][5620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" HandleID="k8s-pod-network.2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.613 [INFO][5620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" HandleID="k8s-pod-network.2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Workload="localhost-k8s-coredns--76f75df574--gfwhd-eth0" Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.614 [INFO][5620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.618277 containerd[1526]: 2025-01-13 21:22:29.616 [INFO][5612] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353" Jan 13 21:22:29.618704 containerd[1526]: time="2025-01-13T21:22:29.618291659Z" level=info msg="TearDown network for sandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\" successfully" Jan 13 21:22:29.620994 containerd[1526]: time="2025-01-13T21:22:29.620959580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:29.621040 containerd[1526]: time="2025-01-13T21:22:29.621025460Z" level=info msg="RemovePodSandbox \"2bc92849093d40d537cd3b6e2507f101b69f5df5077861e30f6631b1227f1353\" returns successfully" Jan 13 21:22:29.621559 containerd[1526]: time="2025-01-13T21:22:29.621520340Z" level=info msg="StopPodSandbox for \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\"" Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.658 [WARNING][5642] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qhvr9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cc6bdfdd-cc3e-4a92-962e-c53a92f68c06", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30", Pod:"coredns-76f75df574-qhvr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53c7caafa83", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.659 [INFO][5642] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.659 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" iface="eth0" netns="" Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.659 [INFO][5642] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.659 [INFO][5642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.677 [INFO][5650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" HandleID="k8s-pod-network.bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.677 [INFO][5650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.677 [INFO][5650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.685 [WARNING][5650] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" HandleID="k8s-pod-network.bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.685 [INFO][5650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" HandleID="k8s-pod-network.bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.686 [INFO][5650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.690129 containerd[1526]: 2025-01-13 21:22:29.688 [INFO][5642] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:29.690527 containerd[1526]: time="2025-01-13T21:22:29.690293234Z" level=info msg="TearDown network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\" successfully" Jan 13 21:22:29.690527 containerd[1526]: time="2025-01-13T21:22:29.690318994Z" level=info msg="StopPodSandbox for \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\" returns successfully" Jan 13 21:22:29.690843 containerd[1526]: time="2025-01-13T21:22:29.690806114Z" level=info msg="RemovePodSandbox for \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\"" Jan 13 21:22:29.690878 containerd[1526]: time="2025-01-13T21:22:29.690843394Z" level=info msg="Forcibly stopping sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\"" Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.726 [WARNING][5672] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qhvr9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cc6bdfdd-cc3e-4a92-962e-c53a92f68c06", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ef3b211f9632cf69039c18b471d0663c76e1e037371ae6bd78cd064b0050e30", Pod:"coredns-76f75df574-qhvr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53c7caafa83", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.727 [INFO][5672] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.727 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" iface="eth0" netns="" Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.727 [INFO][5672] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.727 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.746 [INFO][5679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" HandleID="k8s-pod-network.bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.746 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.746 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.754 [WARNING][5679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" HandleID="k8s-pod-network.bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.754 [INFO][5679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" HandleID="k8s-pod-network.bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Workload="localhost-k8s-coredns--76f75df574--qhvr9-eth0" Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.756 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.760070 containerd[1526]: 2025-01-13 21:22:29.758 [INFO][5672] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa" Jan 13 21:22:29.760449 containerd[1526]: time="2025-01-13T21:22:29.760102528Z" level=info msg="TearDown network for sandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\" successfully" Jan 13 21:22:29.764352 containerd[1526]: time="2025-01-13T21:22:29.764305769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:29.764422 containerd[1526]: time="2025-01-13T21:22:29.764364129Z" level=info msg="RemovePodSandbox \"bb9f1d835fcaa5e0191eaf6385dc9d77ffb547ed828ca78461548aaf446bb1aa\" returns successfully" Jan 13 21:22:29.764989 containerd[1526]: time="2025-01-13T21:22:29.764946409Z" level=info msg="StopPodSandbox for \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\"" Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.801 [WARNING][5703] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0", GenerateName:"calico-apiserver-8f84b7485-", Namespace:"calico-apiserver", SelfLink:"", UID:"89475a27-0916-4680-b302-fbf35e837e47", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f84b7485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779", Pod:"calico-apiserver-8f84b7485-fpvqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7324264aca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.801 [INFO][5703] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.801 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" iface="eth0" netns="" Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.801 [INFO][5703] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.801 [INFO][5703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.819 [INFO][5711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" HandleID="k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.819 [INFO][5711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.819 [INFO][5711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.828 [WARNING][5711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" HandleID="k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.828 [INFO][5711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" HandleID="k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.829 [INFO][5711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.833776 containerd[1526]: 2025-01-13 21:22:29.831 [INFO][5703] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:29.834152 containerd[1526]: time="2025-01-13T21:22:29.833813575Z" level=info msg="TearDown network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\" successfully" Jan 13 21:22:29.834152 containerd[1526]: time="2025-01-13T21:22:29.833838895Z" level=info msg="StopPodSandbox for \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\" returns successfully" Jan 13 21:22:29.834671 containerd[1526]: time="2025-01-13T21:22:29.834647059Z" level=info msg="RemovePodSandbox for \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\"" Jan 13 21:22:29.834717 containerd[1526]: time="2025-01-13T21:22:29.834678899Z" level=info msg="Forcibly stopping sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\"" Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.869 [WARNING][5734] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0", GenerateName:"calico-apiserver-8f84b7485-", Namespace:"calico-apiserver", SelfLink:"", UID:"89475a27-0916-4680-b302-fbf35e837e47", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f84b7485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"486113bcb5f8fa4274ce9318a6306bc56af7239636fd58da7353a85760eb2779", Pod:"calico-apiserver-8f84b7485-fpvqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7324264aca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.869 [INFO][5734] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.869 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" iface="eth0" netns="" Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.869 [INFO][5734] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.869 [INFO][5734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.888 [INFO][5741] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" HandleID="k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.888 [INFO][5741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.888 [INFO][5741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.899 [WARNING][5741] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" HandleID="k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.899 [INFO][5741] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" HandleID="k8s-pod-network.0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Workload="localhost-k8s-calico--apiserver--8f84b7485--fpvqh-eth0" Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.900 [INFO][5741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.903697 containerd[1526]: 2025-01-13 21:22:29.902 [INFO][5734] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122" Jan 13 21:22:29.904066 containerd[1526]: time="2025-01-13T21:22:29.903730762Z" level=info msg="TearDown network for sandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\" successfully" Jan 13 21:22:29.906488 containerd[1526]: time="2025-01-13T21:22:29.906449533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:29.906564 containerd[1526]: time="2025-01-13T21:22:29.906503333Z" level=info msg="RemovePodSandbox \"0e7c5fc4535a7b200d5703837761e79e2f62fdf9e34ca22f7a3d429140bff122\" returns successfully" Jan 13 21:22:34.524918 systemd[1]: Started sshd@19-10.0.0.88:22-10.0.0.1:42004.service - OpenSSH per-connection server daemon (10.0.0.1:42004). Jan 13 21:22:34.559802 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 42004 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:34.561292 sshd[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:34.565004 systemd-logind[1504]: New session 20 of user core. Jan 13 21:22:34.574936 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:22:34.710305 sshd[5768]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:34.713537 systemd[1]: sshd@19-10.0.0.88:22-10.0.0.1:42004.service: Deactivated successfully. Jan 13 21:22:34.716483 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:22:34.717085 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:22:34.718976 systemd-logind[1504]: Removed session 20. Jan 13 21:22:35.948916 kubelet[2703]: E0113 21:22:35.948353 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:39.722073 systemd[1]: Started sshd@20-10.0.0.88:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012). Jan 13 21:22:39.750784 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:22:39.752099 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:39.756553 systemd-logind[1504]: New session 21 of user core. Jan 13 21:22:39.762874 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 13 21:22:39.919914 sshd[5807]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:39.924055 systemd[1]: sshd@20-10.0.0.88:22-10.0.0.1:42012.service: Deactivated successfully. Jan 13 21:22:39.926031 systemd-logind[1504]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:22:39.926103 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:22:39.927215 systemd-logind[1504]: Removed session 21.